Model: Sao10K: Llama 3.1 70B Hanami x1, Context: 16000, Cost: $3 per 1M input tokens, $3 per 1M output tokens
Note: The calculation is approximate and based on public data. Prices may change; check the official websites.
This calculator estimates the cost of using the Sao10K Llama 3.1 70B Hanami x1 model. It is an experiment by Sao10K based on the Euryale v2.2 model, featuring a 16,000-token context window.
Input and output tokens are each priced at $3 per 1 million tokens.
The cost is calculated based on the following formula:
- Total Cost = (((Input Tokens / 1,000,000) * Input Cost per 1M Tokens) + ((Output Tokens / 1,000,000) * Output Cost per 1M Tokens)) * Number of Requests
Example:
- Input Tokens: 50,000
- Output Tokens: 20,000
- Number of Requests: 10
- Input Cost = (50,000 / 1,000,000) * $3 = $0.15
- Output Cost = (20,000 / 1,000,000) * $3 = $0.06
- Total Cost = ($0.15 + $0.06) * 10 = $2.10
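The formula and worked example above can be sketched as a small Python function (the function name and default prices of $3 per 1M tokens are assumptions matching this page; actual prices may change):

```python
def estimate_cost(input_tokens, output_tokens, requests,
                  input_price_per_m=3.0, output_price_per_m=3.0):
    """Estimate total cost in USD for a batch of identical requests."""
    input_cost = (input_tokens / 1_000_000) * input_price_per_m
    output_cost = (output_tokens / 1_000_000) * output_price_per_m
    # Per-request cost times the number of requests
    return (input_cost + output_cost) * requests

# Worked example from above: 50,000 input tokens, 20,000 output tokens, 10 requests
print(round(estimate_cost(50_000, 20_000, 10), 2))  # → 2.1
```

Note that floating-point arithmetic can introduce tiny rounding errors; for billing-grade accuracy, Python's `decimal` module would be a safer choice.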