Mock Description
Introduction
The mock description in Skyramp allows you to create lightweight static and dynamic mocks to simulate service dependencies. It comprises three primary components:
- Mock Configuration: This file, residing in the `mocks` folder, defines the overall behavior of the mock. It lets you configure proxying, delays, errors, and more, facilitating comprehensive testing of your application.
- Response Configuration: Located in the `responses` folder, these files define response behavior for specific methods, allowing you to configure payloads and dynamic responses.
- Endpoint Configuration: Found in the `endpoints` folder, these files specify details related to the service's networking aspects, supporting gRPC, REST, JSON-RPC WebSocket, and JSON-RPC HTTP endpoints.
To get started, follow the steps outlined in the How to Mock Services page. This guide will teach you how to dynamically generate a mock description by providing service-level information. Alternatively, if you prefer to create a mock definition from scratch, you can create `.yaml` files in the `mocks`, `responses`, and `endpoints` directories of your project (e.g., `my-mock.yaml`, `my-response.yaml`, and `my-endpoint.yaml`) and configure the necessary information by following the guidelines below.
Mock Configuration
The mock configuration serves as the central component of the mock definition and defines the overall mock behavior.
Example Mock Configuration:
```yaml
version: v1
mock:
  description: routeguide
  responses:
    - responseName: ListFeatures
    - responseName: RecordRoute
    - responseName: RouteChat
      lossPercentage: 50
      delayConfig:
        minDelay: 1000
        maxDelay: 2000
  proxies:
    - endpointName: routeguide_jMBp
      methodName: GetFeature
```
In this example:
- `description`: Provides a description of your mock configuration.
- `responses`: Allows you to specify responses for various gRPC methods.
- `proxies`: Enables gRPC proxying for specific endpoints and methods.
This example showcases advanced mock capabilities, including:
- gRPC Proxying: Routing mock data to specific endpoints and methods.
- Delays and Errors: Simulating network conditions by introducing delays and error percentages.
gRPC Proxying
Skyramp provides the capability to act as a proxy for gRPC services, selectively mocking certain methods while forwarding the rest to the live service. To enable this feature, you can specify the endpoint and methods to be proxied in the `proxies` section of the mock configuration.
Example Mock Configuration:
```yaml
version: v1
mock:
  description: routeguide
  responses:
    - responseName: ListFeatures
    - responseName: RecordRoute
    - responseName: RouteChat
  proxies:
    - endpointName: routeguide_jMBp
      methodName: GetFeature
```
In this gRPC configuration example, requests to the `GetFeature` method are directed to the live service, while all other requests to the `routeguide` service are mocked.
Note: If a gRPC method is defined in the `.proto` file but not listed in the mock description, Skyramp implicitly forwards the corresponding request(s) to the live service. This flexibility allows you to control the behavior of specific gRPC methods in your mock configurations.
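The note above implies a simple per-method decision: explicitly proxied methods go to the live service, methods with a response configuration are mocked, and anything else defined in the `.proto` file is implicitly forwarded. The following Python sketch is a hypothetical model of that decision logic, not Skyramp internals:

```python
def route(method: str, mocked: set[str], proxied: set[str]) -> str:
    """Hypothetical model of the per-method routing described above:
    explicitly proxied methods and methods absent from the mock
    description go to the live service; mocked methods are mocked."""
    if method in proxied:
        return "live service"   # listed under proxies
    if method in mocked:
        return "mock response"  # has a response configuration
    return "live service"       # in .proto but unlisted: implicit forward

mocked = {"ListFeatures", "RecordRoute", "RouteChat"}
proxied = {"GetFeature"}
print(route("GetFeature", mocked, proxied))  # → live service
print(route("RouteChat", mocked, proxied))   # → mock response
```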
Delays and Errors
In your mock configuration, you can introduce delays and error configurations using the following properties:
- `lossPercentage`: Specifies the percentage of requests that will result in an error response.
- `delayConfig`: Defines the delay configuration for the mock response, including the minimum (`minDelay`) and maximum (`maxDelay`) delay in milliseconds.
Note: When `minDelay` and `maxDelay` share the same value, the delay is static. However, if these values differ, Skyramp will apply a random delay within the specified range, with a maximum delay of 10,000 milliseconds (10 seconds).
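To make the delay semantics concrete, here is a small Python sketch (an illustration of the behavior described in the note, not Skyramp code) that picks a delay the way the note describes: static when the bounds match, random within the range otherwise, capped at 10,000 ms:

```python
import random

MAX_DELAY_MS = 10_000  # documented upper bound (10 seconds)

def pick_delay_ms(min_delay: int, max_delay: int) -> int:
    """Illustrative model of delayConfig: static if bounds match,
    otherwise a random value in [min_delay, max_delay], capped at 10 s."""
    min_delay = min(min_delay, MAX_DELAY_MS)
    max_delay = min(max_delay, MAX_DELAY_MS)
    if min_delay == max_delay:
        return min_delay  # static delay
    return random.randint(min_delay, max_delay)

# A static configuration always yields the same delay:
print(pick_delay_ms(1000, 1000))  # → 1000
```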
Example Mock Configuration:
```yaml
version: v1
mock:
  description: routeguide
  responses:
    - responseName: GetFeature
    - responseName: ListFeatures
    - responseName: RecordRoute
    - responseName: RouteChat
      lossPercentage: 50
      delayConfig:
        minDelay: 1000 # in ms
        maxDelay: 2000 # in ms
```
In the provided example, the `RouteChat` mock response will experience a random delay between 1,000 and 2,000 milliseconds before being returned. Additionally, around 50% of requests will result in an error response.
You have the flexibility to specify delays and errors at two levels: for a specific method or for the entire endpoint. The previous example demonstrates how to configure delays and errors for a specific response. To apply the same delay and error settings to all responses, define the `lossPercentage` and `delayConfig` in the `mock` section:
Example Mock Configuration:
```yaml
version: v1
mock:
  description: routeguide
  responses:
    - responseName: GetFeature
    - responseName: ListFeatures
    - responseName: RecordRoute
    - responseName: RouteChat
  lossPercentage: 50
  delayConfig:
    minDelay: 1000 # in ms
    maxDelay: 2000 # in ms
```
In this scenario, all responses will encounter a delay between 1,000 and 2,000 milliseconds, and approximately 50% of requests will result in an error response.
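Statistically, `lossPercentage: 50` means each request independently fails with 50% probability, so over many requests roughly half error out. The following Python simulation models that behavior (it is not Skyramp code, just a sketch of what the setting implies):

```python
import random

def should_fail(loss_percentage: int, rng: random.Random) -> bool:
    """Model of lossPercentage: each request independently fails
    with the given probability."""
    return rng.random() * 100 < loss_percentage

rng = random.Random(42)  # seeded for reproducibility
errors = sum(should_fail(50, rng) for _ in range(10_000))
print(errors)  # roughly half of the 10,000 simulated requests
```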
Response Configuration
The response configuration file defines the response behavior for a specific method of the service.
Example Response Configuration:
```yaml
version: v1
responses:
  # Unary RPC
  - name: GetFeature
    blob: |-
      {
        "name": "fake",
        "location": {
          "latitude": 400,
          "longitude": 600
        }
      }
    endpointName: routeguide_jMBp
    methodName: GetFeature
  # Server Streaming RPC
  - name: ListFeatures
    javascript: |
      function handler(req) {
        const values = [];
        for (let i = 0; i < 5; i++) {
          values[i] = {
            name: "random" + i,
            location: {
              longitude: i * 100,
              latitude: i * 100
            }
          };
        }
        return {
          values: values
        };
      }
    endpointName: routeguide_jMBp
    methodName: ListFeatures
  # Client Streaming RPC
  - name: RecordRoute
    javascript: |
      function handler(req) {
        var l = req.values.length;
        return {
          value: {
            pointCount: l,
            featureCount: l,
            distance: l * 100,
            elapsedTime: 0
          }
        };
      }
    endpointName: routeguide_jMBp
    methodName: RecordRoute
  # Bidirectional Streaming RPC
  - name: RouteChat
    javascript: |-
      const msgs = [];
      function handler(req) {
        msgs.push(req.value);
        return {
          values: msgs
        };
      }
    endpointName: routeguide_jMBp
    methodName: RouteChat
```
In this example, you can see support for mocking various gRPC methods, including Unary RPC, Server Streaming RPC, Client Streaming RPC, and Bidirectional Streaming RPC. It also demonstrates the use of dynamic responses for more complex testing scenarios.
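The bidirectional `RouteChat` handler above illustrates stateful mocking: each incoming message is appended to a module-level list, and the full history is returned on every call. The same pattern in Python (a hypothetical port of the JavaScript handler, using `SimpleNamespace` as a stand-in request type) looks like this:

```python
from types import SimpleNamespace

# Module-level state survives across handler invocations,
# so each call returns the accumulated message history.
msgs = []

def handler(req):
    msgs.append(req.value)
    return {"values": msgs}

# Each streamed message grows the returned history:
print(handler(SimpleNamespace(value="first")))   # {'values': ['first']}
print(handler(SimpleNamespace(value="second")))  # {'values': ['first', 'second']}
```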
Dynamic Responses
Dynamic responses offer flexibility in customizing response generation logic and simulating complex response configurations. You can use different attributes to specify dynamic response behavior, such as `javascript`, `javascriptPath`, `python`, or `pythonPath`. Each attribute allows you to define custom response handling logic and return a JSON representation of the response value.
JavaScript Dynamic Response
- Using the `javascript` Attribute

  To create JavaScript-based dynamic responses, employ the `javascript` attribute for a response in your response configuration. Define a function called `handler` that takes any necessary parameters. Implement your custom JavaScript logic within the `handler` function and return a JSON object representing the response value.

  Example Response Configuration:
```yaml
version: v1
responses:
  - name: RecordRoute
    javascript: |
      function handler(req) {
        var l = req.values.length;
        return {
          value: {
            pointCount: l,
            featureCount: l,
            distance: l * 100,
            elapsedTime: 0
          }
        };
      }
    endpointName: routeguide_jMBp
    methodName: RecordRoute
  # More response configurations...
```
- Using the `javascriptPath` Attribute

  Alternatively, you can use the `javascriptPath` attribute to specify the path to an external JavaScript script file containing your custom response handling logic.

  Example Response Configuration:
```yaml
version: v1
responses:
  - name: RecordRoute
    javascriptPath: scripts/recordRoute.js
    endpointName: routeguide_jMBp
    methodName: RecordRoute
  # More response configurations...
```
The external JavaScript script file `recordRoute.js` defines a `handler` function to process incoming requests and generate appropriate responses.
Python Dynamic Response
Note: If your dynamic Python response relies on additional Python modules, refer to the Installing Worker with Python Modules section to learn how to build the custom Skyramp Worker image.
- Using the `python` Attribute

  You can use the `python` attribute within the `responseValues` section of your response definition. This attribute allows you to define a function called `handler` that takes any necessary parameters, representing the incoming request or context. Within the `handler` function, you can implement your custom Python logic and return a JSON representation of the response value using `SkyrampValue`.

  Example Response Configuration:
```yaml
version: v1
responses:
  - name: RecordRoute
    python: |
      def handler(req):
          l = len(req.values)
          return SkyrampValue(
              value={
                  "pointCount": l,
                  "featureCount": l,
                  "distance": l * 100,
                  "elapsedTime": 0
              }
          )
    endpointName: routeguide_jMBp
    methodName: RecordRoute
  # More response configurations...
```
- Using the `pythonPath` Attribute

  Alternatively, you can use the `pythonPath` attribute to specify the path to an external Python script file containing your custom response handling logic.

  Example Response Configuration:
```yaml
version: v1
responses:
  - name: RecordRoute
    pythonPath: scripts/record_route.py
    endpointName: routeguide_jMBp
    methodName: RecordRoute
  # More response configurations...
```
The external Python script file `record_route.py` defines a `handler` function to process the request or context and generate the response.
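For reference, the contents of such a `record_route.py` might look like the sketch below, mirroring the inline `python` example. It assumes `SkyrampValue` is available in the Worker's execution environment; the minimal stand-in class here exists only to make the sketch self-contained and runnable outside Skyramp.

```python
# Hypothetical contents of scripts/record_route.py.
# Inside a Skyramp Worker, SkyrampValue is assumed to be provided by the
# runtime; this minimal stand-in only makes the sketch self-contained.
class SkyrampValue:
    def __init__(self, value):
        self.value = value

def handler(req):
    # Summarize the client-streamed points, mirroring the inline example.
    l = len(req.values)
    return SkyrampValue(
        value={
            "pointCount": l,
            "featureCount": l,
            "distance": l * 100,
            "elapsedTime": 0,
        }
    )
```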
These dynamic response options allow you to tailor the responses generated by your mock server based on specific conditions or logic needed for testing.
AI-Generated Default Values
Skyramp integrates with OpenAI to provide AI-generated default values for response configurations. This optional feature can be enabled by invoking `skyramp mocker generate` with the `--openai` option.
To use this option:
- Create an OpenAI developer account by following the OpenAI documentation if you don't have one already.

- Set the `OPENAI_API_KEY` environment variable by running the following command in your terminal:

  ```shell
  export OPENAI_API_KEY=<YOUR_API_KEY>
  ```
Note: You can set this environment variable temporarily for the current terminal session. For permanent setup, add the export command to your shell's profile file (e.g., `.bashrc`, `.bash_profile`, `.zshrc`, etc.).
- Run the `skyramp mocker generate` command with the `--openai` option, as shown below:
```shell
skyramp mocker generate grpc \
  --api-schema routeguide.proto \
  --alias routeguide \
  --port 50051 \
  --service RouteGuide \
  --openai
```
To prevent excessive use of OpenAI tokens for large schema files, Skyramp limits AI-generated default values to a maximum of three response value JSON blobs per session; the remaining responses receive Skyramp-generated defaults.
Please note that the limits may change based on usage and feedback.
Endpoint Configuration
The endpoint configuration file defines networking-level service details for an endpoint.
Example Endpoint Configuration:
```yaml
version: v1
services:
  - name: routeguide
    port: 50051
    alias: routeguide
    protocol: grpc
endpoints:
  - name: routeguide_jMBp
    methods:
      - name: GetFeature
      - name: ListFeatures
      - name: RecordRoute
      - name: RouteChat
    defined:
      path: ./pb/route_guide.proto
      name: RouteGuide
    serviceName: routeguide
```
For configuring the endpoint file, we have a few key attributes:
- `services`: This section lists the services available in your project. In this example, there is one service named `routeguide`.

- `endpoints`: Under the `endpoints` section, you define individual endpoints, specifying the available methods, the service definition path, and the service name. In the example, we have an endpoint named `routeguide_jMBp` for the `RouteGuide` service.

- `methods`: Within each endpoint, you list the available methods. In this case, we have methods like `GetFeature`, `ListFeatures`, `RecordRoute`, and `RouteChat`. This helps specify the details of each method and how it should behave.

- `defined`: Here, you specify the service definition file's path and the service name. The service definition file (`route_guide.proto`) outlines the structure of the service and its methods.
By configuring endpoints, you define the available services and methods within your project, facilitating mocking services in your distributed application. We recommend dynamically generating a mock by providing service-level information.
TLS Support
Please note that mocking is not supported for endpoints using TLS. Endpoints using `HTTPS` should be replaced with `HTTP`.