OpenAI
Repository | https://github.com/dyalog/OpenAI |
`OpenAI` is a Dyalog APL namespace that contains code to interact with endpoints of the OpenAI API. OpenAI develops and maintains several Large Language Models (LLMs). A few things to note:
- We are going to use the term "OpenAI" a lot in this documentation. When formatted as `OpenAI`, we are referring to the Dyalog namespace which implements an interface to the OpenAI API. When formatted as OpenAI, we are referring to the OpenAI API itself.
- OpenAI is actively developing its API, adding new LLMs and endpoint definitions. This puts us in a reactive mode to incorporate these new features and interfaces, and we anticipate updating `OpenAI` as they appear.
- Not all OpenAI endpoints are currently implemented in `OpenAI`. This is partly due to the reason stated above. Our goal is to have sufficient endpoint coverage that our users can perform useful tasks with OpenAI from APL.
- There are other LLMs available; `OpenAI` should serve as a good model for how to implement interfaces to them. In particular, `OpenAI` makes heavy use of `HttpCommand`, and the techniques used should be applicable for interacting with other LLM APIs (see the sketch after this list).
- This documentation presents information on how to use the `OpenAI` namespace to interact with the OpenAI endpoints. It does not attempt to document all of the features and nuances of those endpoints. For that information, please see the OpenAI API Reference.
- We encourage feedback, feature requests, and guidance from our users.
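To make that last point concrete, here is a minimal sketch (not part of `OpenAI` itself) of calling the OpenAI chat completions endpoint directly with `HttpCommand`. The endpoint URL follows the OpenAI API reference; the environment variable name, model name, and prompt are illustrative assumptions.

```apl
⍝ Minimal sketch: a direct HttpCommand call to the OpenAI chat completions endpoint.
⍝ Assumes HttpCommand v5.6+ is in the workspace and OPENAI_API_KEY is set in the environment.
key←2 ⎕NQ # 'GetEnvironment' 'OPENAI_API_KEY'      ⍝ read the API key from the environment
hdrs←'Authorization'('Bearer ',key)                ⍝ bearer-token authorization header

msg←⎕NS ''                                         ⍝ a single chat message
msg.(role content)←'user' 'Say hello in APL'
payload←⎕NS ''                                     ⍝ request body
payload.model←'gpt-4o-mini'                        ⍝ illustrative model name
payload.messages←,msg                              ⍝ vector of namespaces → JSON array

r←HttpCommand.GetJSON 'post' 'https://api.openai.com/v1/chat/completions' payload hdrs
(⊃r.Data.choices).message.content                  ⍝ the assistant's reply, when r.rc=0
```

`OpenAI` wraps this pattern (authorization headers, JSON conversion, and response handling) behind APL functions so you do not have to write it yourself.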
Endpoints Currently Implemented
- Audio - turn audio into text or text into audio
- Chat - have a conversation with an OpenAI model
- Files - upload and manage documents that can be used with other features
- Image - given a prompt or existing image, generate a new image
- Models - list and describe the various models available in the OpenAI API
- Moderations - classify text input as potentially harmful
Forthcoming Endpoints
Expected in Q4 2024
OpenAI has released, in beta, version 2 of their Assistants and related endpoints. We expect to have completed their development in `OpenAI` in the 4th quarter of 2024.
- Assistants - Build assistants that can call models and use tools to perform tasks.
- Threads - Create threads that assistants can interact with.
- Messages - Create messages within threads.
- Runs - Represents an execution on a thread.
- Vector Stores - Used to store files for use by OpenAI's `file_search` tool.
- Vector Store Files - Represent files inside a vector store.
Future
Beyond 2024, we expect to add support for the following endpoints.
- Vector Store File Batches - Represent operations to add multiple files to a vector store.
- Run Steps - Represents the steps (model and tool calls) taken during the run.
- Uploads - Upload large files in multiple parts.
- Embeddings - Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
Other future work may include Projects and Users endpoints depending on our users' needs.
Getting Started
This is an abbreviated introduction to get you started with `OpenAI`. The OpenAI website has considerably more information. `OpenAI` uses `HttpCommand` to communicate with the OpenAI API.
1. Create an OpenAI account
- Navigate to OpenAI's quickstart page.
- Click "Sign up" and follow the instructions to create your account
2. Optionally, create an OpenAI project
- You can create an OpenAI project. If you don't create a project, OpenAI will use a Default project.
3. Create an OpenAI API project key
You will need a project API key to be able to access the OpenAI API via `OpenAI`. Protect this API key; do not publish it on GitHub or other public places.
- Navigate to OpenAI's API keys page
- Click "+ Create a new secret key"
- Click "Create secret key"
- Make sure you copy and save the generated secret key! This will be the only time it will be displayed.
- OpenAI recommends that you set an environment variable to hold your API key. This way you will not expose the key in your APL code.
  If you're using Linux, do: `export OPENAI_API_KEY="your_api_key_here"`
  If you're using Windows, under PowerShell do: `setx OPENAI_API_KEY "your_api_key_here"`
- If you choose to not use an environment variable, you could save your API key in a text file and then read it into your APL session when you need it, as in the sketch below.
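A minimal sketch of that approach, assuming the key is saved on the first line of a file at a path of your choosing:

```apl
⍝ Minimal sketch: read the API key from a text file (the path is illustrative).
⍝ Keep the file outside any directory you publish or commit to source control.
apikey←⊃⊃⎕NGET '/path/to/openai-key.txt' 1    ⍝ first line of the file
⍝ ...later, once OpenAI is loaded and configured (step 5):
⍝ OpenAI.APIKey←apikey
```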
4. Obtain `OpenAI`
There are a couple of options to obtain `OpenAI`.
- Use the `]get` user command: `]get github.com/Dyalog/OpenAI/blob/main/source/OpenAI.apln`
- Download `OpenAI.apln` from GitHub. Remember where you saved the downloaded file and then use the `]import` user command, replacing "downloaded-file-path" with the path name where you saved the downloaded file: `]import # downloaded-file-path/OpenAI.apln`
5. Configure `OpenAI`
Now that you have `OpenAI` in your workspace, you can configure it. First, run `OpenAI.Initialize`. This will copy the latest release of `HttpCommand` into the `OpenAI` namespace.

      OpenAI.Initialize
0  Initialized

If the result of `OpenAI.Initialize` is not as shown above, the result will contain an error message explaining what failed.
If you have saved your OpenAI API key as an environment variable, you can use `HttpCommand`'s `HeaderSubstitution` setting to have `HttpCommand` use the environment variable rather than explicitly setting the API key in the configuration.

      OpenAI.HttpCommand.HeaderSubstitution←'%'          ⍝ sets '%' as the environment variable indicator
      OpenAI.APIKey←'%your-environment-variable-name%'   ⍝ note the leading and trailing '%'

Otherwise, set `OpenAI.APIKey` to the API key's value directly.

      OpenAI.APIKey←'your-OpenAI-API-key-here'
Next Steps
Having completed the above steps, you are now ready to begin interacting with the OpenAI API. From here you may want to try the demos described below.
Demos
You'll need to register and obtain an OpenAI API key as described in the Quick Start page.
Obtaining the demos
The easiest way to obtain and set up the `OpenAI` demos is to use the `]get` user command.
      )clear
clear ws
      ]get https://github.com/Dyalog/OpenAI/archive/refs/heads/main.zip
Working on it…
#.main
      )cs #.main.demos
#.main.demos
If this doesn't work for you, then you can download and unzip demos.zip from GitHub.
Using the folder name that you unzipped into, use `]link.import` to import the demos.

      ]link.import # /your-folder-name-here/OpenAI-main/demos
Imported: # - ...
Setting up the demos
Run the `Setup` function to initialize `OpenAI`.

      Setup
Enter your OpenAI API key or the environment variable that contains your API key: your-API-key-here
The OpenAI interface is set up and ready for use
At the prompt either enter your OpenAI API key or, if you've saved your API key in an environment variable, enter the name of the environment variable.
Congratulations! You're now ready to run the demos.
Audio Demo Functions
- `Speech` - generates audio from text input
- `Play` - uses HTMLRenderer to play an audio file
- `Transcription` - transcribes audio into the input language
- `Translation` - translates audio into English text
- `ShowText` - uses HTMLRenderer to display the result of `Transcription` or `Translation`

A combined usage example follows the function tables below.
Speech

| Syntax | `(rc msg)←audioFile Speech text` |
|---|---|
| audioFile | The name of the .mp3 audio file to save. You do not have to specify the .mp3 extension. |
| text | Either |
| rc | Either |
| msg | Either |
| Example | `'/tmp/spanish.mp3' Speech 'APL es muy fácil de aprender y divertido de usar'`<br>`0  C:/tmp/spanish.mp3` |
| Notes | You can pass the result of Speech as the argument to Play to play the audio file.<br>For example: `Play '/tmp/test' Speech 'This is a test'` |
Play

| Syntax | `Play args` |
|---|---|
| args | Either |
| Examples | `Play '/tmp/test' Speech 'This is a test'`<br>`Play '/tmp/spanish.mp3'` |
Transcription

| Syntax | `(rc msg)←Transcription args` |
|---|---|
| args | `audioFile [language]` where |
| rc | Either |
| msg | Either |
| Example | `Transcription '/tmp/spanish.mp3'`<br>`0  APL es muy fácil de aprender y divertido de usar.` |
| Notes | You can pass the result of Transcription as the argument to ShowText to display the transcribed text in an HTMLRenderer window.<br>For example: `ShowText Transcription '/tmp/spanish.mp3'` |
Translation

| Syntax | `(rc msg)←Translation args` |
|---|---|
| audioFile | The name of the .mp3 audio file to translate. |
| rc | Either |
| msg | Either |
| Example | `Translation '/tmp/spanish.mp3'`<br>`0  APL is very easy to learn and fun to use.` |
| Notes | You can pass the result of Translation as the argument to ShowText to display the translated text in an HTMLRenderer window.<br>For example: `ShowText Translation '/tmp/spanish.mp3'` |
ShowText

| Syntax | `ShowText args` |
|---|---|
| args | The result from Transcription or Translation |
| Examples | `ShowText Transcription '/tmp/spanish.mp3'` |
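Putting the audio demo functions together, here is a short usage sketch. It assumes the demos are loaded and `Setup` has been run; the file paths are illustrative.

```apl
⍝ Chaining the audio demo functions (file paths are illustrative)
Play '/tmp/hello.mp3' Speech 'Hello from APL'   ⍝ synthesize speech, then play it
ShowText Transcription '/tmp/hello.mp3'         ⍝ transcribe the audio and display the text
ShowText Translation '/tmp/spanish.mp3'         ⍝ translate Spanish audio to English and display it
```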
Image Demo Functions
- Image - generate an image from a text prompt
- ShowImage - display an image using HTMLRenderer
Image

| Syntax | `(rc msg)←Image description` |
|---|---|
| description | A description of the image to generate. The description can be in most languages. |
| rc | Either |
| msg | Either |
| Example | `Image 'drei süße Welpen' ⍝ three cute puppies` |
| Notes | You can pass the result of Image as the argument to ShowImage to display the generated image in an HTMLRenderer window.<br>For example: `ShowImage Image 'a pretty sunset on the water'` |
ShowImage

| Syntax | `ShowImage args` |
|---|---|
| args | The result from Image |
| Examples | `ShowImage Image 'dos perros' ⍝ two dogs` |
Chat Demo Functions
- Chat - have a chat with an assistant you describe
- Linguist - translate phrases into a desired language, optionally playing audio of the translation
Chat

| Syntax | `Chat` |
|---|---|
| Usage | Chat first prompts you to describe the assistant you would like to chat with, then repeatedly prompts for user messages. Enter an empty message to end the chat (see the examples below). |
Chat Example 1

      Chat
Describe the assistant you would like to chat with: poetic
Enter user message: compose a haiku about APL
Symbols dance with code,
APL's concise power,
Elegance in form.
Enter user message: translate that to German
Symbole tanzen mit Code,
APLs präzise Kraft,
Eleganz in Form.
Enter user message:
Chat Example 2

      Chat
Describe the assistant you would like to chat with: you understand programming languages
Enter user message: translate APL's +.× to python
In APL, the +.× operator represents the matrix multiplication operation. To
translate this operation to Python using NumPy, you can use the numpy.dot()
function. Here's how you can do it:
python
import numpy as np
# Define two matrices
matrix1 = np.array([[1, 2], [3, 4]])
matrix2 = np.array([[5, 6], [7, 8]])
# Perform matrix multiplication
result = np.dot(matrix1, matrix2)
print("Result of matrix multiplication:")
print(result)
In this Python code snippet, we first import the NumPy library. We define two
matrices matrix1 and matrix2 using NumPy arrays. Then, we use the np.dot()
function to perform matrix multiplication between matrix1 and matrix2. Finally,
we print the result of the matrix multiplication.
Enter user message:
Linguist

| Syntax | `Linguist` |
|---|---|
| Usage | Linguist prompts for the output language and whether you would like audio output, then repeatedly prompts for text to translate. Enter an empty line to end the session (see the example below). |
Linguist Example

      Linguist
What output language would you like? German
Would you like audio output (y/N)? n
Text to translate: Good morning
Guten Morgen
Text to translate: Buenos dias
Guten Morgen
Text to translate:
User Guide and Reference
Whenever you see OpenAI formatted as `OpenAI`, it is referring to the Dyalog OpenAI API interface. `OpenAI` is a Dyalog APL namespace which contains code that implements interfaces to OpenAI API endpoints.
Getting Started
To use the OpenAI API, you will need to:
- Create an OpenAI account
- Optionally, create an OpenAI project
- Create an OpenAI API key - this will enable you to make calls to the OpenAI API
- Download `OpenAI`
- Configure `OpenAI`
Create an OpenAI account
- Navigate to OpenAI's quickstart page.
- Click "Sign up" and follow the instructions to create your account
Optionally, create an OpenAI project
- Optionally, create an OpenAI project. If you don't create a project, OpenAI will use a "Default project".
- Create one or more project API keys. You will need a project API key to be able to access the OpenAI API via `OpenAI`. Protect this API key; do not publish it on GitHub or other public places.
  - Navigate to OpenAI's API keys page
  - Click "+ Create a new secret key"
  - Click "Create secret key"
  - Make sure you copy the generated secret key! This will be the only time it will be displayed.
- OpenAI recommends that you set an environment variable to hold your API key. This way you will not expose the key in your APL code.
  If you're using Linux, do: `export OPENAI_API_KEY="your_api_key_here"`
  If you're using Windows, under PowerShell do: `setx OPENAI_API_KEY "your_api_key_here"`
Obtain `OpenAI`
Download `OpenAI.apln` from GitHub.
Configure `OpenAI`
You will need to provide the API key you created earlier.

      OpenAI.APIKey←'your-API-key-here'

Setting the key this way embeds it directly in your code and in `OpenAI`. One technique to avoid this is to store your API key in an environment variable and then retrieve its value.

      OpenAI.APIKey←2 ⎕NQ # 'GetEnvironment' 'your-APIKey-environment-variable-name'
`OpenAI` Initialization
`OpenAI` makes heavy use of `HttpCommand` and requires `HttpCommand` version 5.6 or later. During initialization, `OpenAI` will look for `HttpCommand` in its parent namespace and, if it doesn't find it, will load it from your Dyalog installation and then upgrade to the latest version of `HttpCommand`. The endpoints implemented in `OpenAI` will initialize `OpenAI` if it is not already initialized. You can also initialize `OpenAI` by running `OpenAI.Initialize`. In a production environment, `HttpCommand` should be copied and saved into the workspace rather than relying on loading and upgrading; a sketch of one way to do this follows.
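A minimal sketch of such a production setup, assuming `OpenAI` lives in the root namespace `#` and that your Dyalog installation ships the `HttpCommand` workspace (check your version's HttpCommand documentation):

```apl
⍝ Minimal sketch: make HttpCommand part of a saved workspace so that OpenAI.Initialize
⍝ finds it locally instead of downloading or upgrading at run time.
'HttpCommand' ⎕CY 'HttpCommand'    ⍝ copy HttpCommand from the distributed workspace into #
OpenAI.Initialize                  ⍝ Initialize should now find HttpCommand in OpenAI's parent namespace
⍝ )save my-production-ws           ⍝ then save your workspace (the name is illustrative)
```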
`OpenAI` Naming Conventions
Each of the endpoints implemented in `OpenAI` has a number of parameters. Parameters beginning with a lower-case letter (a-z) are parameters that OpenAI itself uses. Parameters beginning with an upper-case letter (A-Z) are parameters used by `OpenAI` to make it easier to use in an APL environment.
Endpoints
Most endpoints, when run, will save the last `HttpCommand` response namespace in the `Response` variable for the endpoint. This can be used to access the result of the endpoint's execution, or to examine it in case the execution failed.
For example:

      s←OpenAI.Audio.Speech 'This is a test'
      s.Run
[rc: 0 | msg: | HTTP Status: 401 "Unauthorized" | ≢Data: 1 (namespace)]
      s.Show ⍝ show Response.Data
{
"error": {
"code": null,
"message": "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.",
"param": null,
"type": "invalid_request_error"
}
}
Important
Remember to set `OpenAI.APIKey` prior to running any endpoints.
Reference pages for the individual endpoints are pending publication.
About
MIT License
Copyright (c) 2024 Dyalog
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Version 0.4.0
This is the initial release of `OpenAI`. It implements interfaces to the following OpenAI endpoints: audio, chat, files, image, models, and moderations. Additional interfaces to other OpenAI endpoints will be available in future releases.
A set of demos is available in the demos folder.