Samples

On this page you will find a few basic and a few advanced samples of what you can do with Respresso’s flow.

Basic samples

Webhook

In some cases you will need a webhook to integrate Respresso into your workflow.

Use cases:
  • Trigger CI build

  • Notify the team of a change in resources

  • Sync the converted resources and process them further (e.g. back them up or host them for dynamic usage)

Used processors:
  • WebhookProcessor:v1
<flow xmlns="https://app.respresso.io/public/schema/flow.xsd">
	<nodes>
		<processor id="webhook" name="WebhookProcessor" version="1">
			{
				"url": "https://your_webhook_page_url/"
			}
		</processor>
	</nodes>
	<connections>
		<connection from="@input" to="webhook" mergeType="none"/>
		<connection from="webhook" to="@output" mergeType="none"/>
	</connections>
</flow>
Flow visualization.

Explanation

When this flow is called, the webhook processor is executed while the data at the input is ignored due to mergeType="none". The same applies to the webhook’s output (it returns an empty object).

How to trigger a webhook after a resource category changed?

When you integrate Respresso with a CI, you have to make sure that by the time the webhook fires the conversion has finished and all data is persisted. Unfortunately, flow execution currently runs in a single transaction, so while the flow is executing only that transaction can access the fresh data. The easiest workaround is to call a webhook whose handler closes the connection and then waits a few seconds before requesting data from Respresso, ensuring that the transaction has already been committed. This means you have to execute the webhook after the changes have been stored in this transaction, which is usually done by StoreChangedResourceCategoryProcessor:v1 in the make flow. To achieve this, make the webhook dependent on the commit point and the output dependent on the webhook.

Let’s see how to do that:

<flow xmlns="https://app.respresso.io/public/schema/flow.xsd">
<nodes>
	<processor id="convert" name="ResourceCategoryConversionExecutorProcessor" version="1">
		{ ... Your category dependent config ... }
	</processor>
	<processor id="executeActions" name="StoreChangedResourceCategoryProcessor" version="1"/>
	<processor id="webhook" name="WebhookProcessor" version="1">
		{
			"url": "https://your_webhook_page_url/"
		}
	</processor>
</nodes>
<connections>
	<connection from="@input" to="convert"/>
	<connection from="convert" to="executeActions"/>
	<connection from="executeActions" to="@output"/>
	<connection from="executeActions" to="webhook" mergeType="none"/>
	<connection from="webhook" to="@output" mergeType="none"/>
</connections>
</flow>
Flow visualization.

Note

Make sure to use mergeType="none" to ensure that the output will not be overwritten by the empty object returned from the webhook processor.

Note

In this example we did not use any special features of WebhookProcessor:v1. For more details, please read its docs.

Warning

Make sure that, after the webhook’s connection is closed, the called server waits a few seconds before accessing Respresso’s data. If you do not need any further data, you do not have to worry about this.
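A minimal Python sketch of the receiving side: respond to the webhook immediately, then fetch from Respresso only after a short delay so the transaction has time to commit. The 5 second default is an assumption; tune it to your setup.

```python
import threading

def schedule_delayed_fetch(fetch, delay_seconds=5.0):
    """Call this from your webhook handler, then return the HTTP response
    right away. `fetch` runs only after `delay_seconds`, giving Respresso's
    transaction time to commit before you request the fresh data.
    The 5 second default is an assumption; tune it to your setup."""
    timer = threading.Timer(delay_seconds, fetch)
    timer.daemon = True  # do not keep the process alive just for this timer
    timer.start()
    return timer
```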

There is another, safer way of triggering a webhook with the fresh data, but it is a bit more complex. The idea is to wait for the store processor (like in the example above), then read the resulting snapshot’s file and send it to a given URL. You can achieve this by using HttpSendFileProcessor to send the root/<category_name>.respresso file. This file matches the CategorySnapshotStructure. Note that this method and naming may change in the future.
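A hedged sketch of that alternative, built on the make flow shown above for the localization category. The HttpSendFileProcessor configuration keys shown here ("url", "file") are assumptions, not its documented schema; check its docs for the real ones.

```xml
<flow xmlns="https://app.respresso.io/public/schema/flow.xsd">
	<nodes>
		<processor id="convert" name="ResourceCategoryConversionExecutorProcessor" version="1">
			{ ... Your category dependent config ... }
		</processor>
		<processor id="executeActions" name="StoreChangedResourceCategoryProcessor" version="1"/>
		<!--The config keys below are illustrative assumptions, not the documented schema.-->
		<processor id="sendSnapshot" name="HttpSendFileProcessor" version="1">
			{
				"url": "https://your_webhook_page_url/",
				"file": "root/localization.respresso"
			}
		</processor>
	</nodes>
	<connections>
		<connection from="@input" to="convert"/>
		<connection from="convert" to="executeActions"/>
		<connection from="executeActions" to="@output"/>
		<connection from="executeActions" to="sendSnapshot" mergeType="none"/>
		<connection from="sendSnapshot" to="@output" mergeType="none"/>
	</connections>
</flow>
```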

Advanced samples

Custom conversion

In some cases you may find that you need another format of a resource category that is currently not supported by Respresso. Fortunately, you can extend it with your own conversion relatively simply.

For this, you will need to implement the conversion and host it somewhere accessible to the Respresso server. When your conversion has to be executed, Respresso sends an HTTP POST request with all the data provided to the input of that node in JSON format and puts the parsed JSON response on the output. That’s it, your conversion is added to Respresso.

Use cases:
  • You need a custom file format

  • The target platform is currently not supported by Respresso

Used processors:
  • AllLocalizationsParserProcessor:v1

  • HttpFilesStructureRemoteProcessor:v1

Note

Before the request, any Lazy values in the input data of HttpFilesStructureRemoteProcessor:v1 are resolved. This means every Lazy is executed and replaced with its resolved value.

Note

Respresso serializes Binary data to a base64 encoded JSON string, so you will need to decode it in your converter.
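To make the base64 handling concrete, here is a minimal Python sketch of what a hosted converter body could do with the posted data. Only the "files" array with base64 "fileContent" follows the response format used on this page; the input field "resources" and the output "fileName" are illustrative assumptions, so match them to the real structure docs.

```python
import base64
import json

def convert_to_structured_json(parsed):
    """Hypothetical converter body: receive the parsed structure that
    Respresso POSTs as JSON and return a files structure. "resources" and
    "fileName" are assumptions; only "files"/"fileContent" follow this page."""
    payload = json.dumps(parsed.get("resources", []), indent=2)
    return {
        "files": [{
            "fileName": "localization.json",
            # fileContent must be a base64 encoded string
            "fileContent": base64.b64encode(payload.encode("utf-8")).decode("ascii"),
        }]
    }
```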

Let’s see what it would look like in the case of localization.

<flow xmlns="https://app.respresso.io/public/schema/flow.xsd">
	<nodes>
		<processor id="parser" name="AllLocalizationsParserProcessor" version="1"/>
		<processor id="custom" name="HttpFilesStructureRemoteProcessor" version="1">
			{"url": "https://your_converter_url/"}
		</processor>
		<!--Other processors... Not part of this example. -->
	</nodes>
	<connections>
		<connection from="@input" to="parser"/>
		<connection from="parser" to="custom"/>
		<!--Other connections... Not part of this example.-->
		<connection read="files" from="custom" write="files[+]" to="@output"/>
	</connections>
</flow>
Flow visualization.

This method is a simplified version of a custom conversion that requires you to return the converted files. Check HttpRemoteProcessor:v1 for a more flexible way of custom processing.

Explanation

When this flow is called, after parsing the resource category, a LocalizationParsedStructure:v1 object is posted to the URL configured for HttpFilesStructureRemoteProcessor:v1. The result is parsed as an HttpFilesStructure:v1 object and converted to Respresso’s internally used format with Lazy values.

Note

In the response the fileContent fields must contain a base64 encoded string.

Note

In this example we used write="files[+]" in the connection to @output. This appends to the files array instead of overwriting it, so the files of multiple conversions can be joined this way.

Best practices

  • When you use HttpFilesStructureRemoteProcessor:v1, you may want to use a token to ensure that no one else can use your publicly exposed conversion service.

  • When possible, use HTTPS to ensure that no one can steal your resources in transit. They are a business secret, don’t forget about it.

  • For a quick implementation you may want to use AWS Lambda or Google Cloud Functions. Both provide an easy-to-deploy hosting model with SSL in a free tier, so they will probably fit your needs for a simple conversion task.
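As an illustration, a minimal AWS Lambda handler (Python, API Gateway proxy integration format) for such a conversion service could look like this. The CONVERTER_TOKEN check and every field name except "files"/"fileContent" are hypothetical; this is a sketch, not the documented protocol.

```python
import base64
import json
import os

def lambda_handler(event, context):
    """Sketch of an AWS Lambda entry point (API Gateway proxy integration)
    for a custom conversion service. The token check and all field names
    except "files"/"fileContent" are illustrative assumptions."""
    params = event.get("queryStringParameters") or {}
    if params.get("token") != os.environ.get("CONVERTER_TOKEN"):
        return {"statusCode": 403, "body": "forbidden"}

    parsed = json.loads(event["body"])  # the structure Respresso POSTs

    # Hypothetical conversion: dump the parsed input back out as a JSON file.
    content = json.dumps(parsed, indent=2).encode("utf-8")
    return {
        "statusCode": 200,
        "body": json.dumps({
            "files": [{
                "fileName": "out.json",
                # fileContent must be base64 encoded (see the earlier note)
                "fileContent": base64.b64encode(content).decode("ascii"),
            }]
        }),
    }
```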

Filter resources by their tags

In some cases, when you have multiple platforms in a project, you will want to include some of the resources only on a specific set of platforms. In Respresso you can add tags to some of the resources (localization, image), and these can be used as additional information for the conversion. You can define a set of special tags (currently: android, ios, web) which will be used as a kind of filter during the conversion. In this example we will show how you can include some localization keys only on the platforms you want and exclude them from the others. (A similar flow can be created for the image category as well.)

Used processors:
  • AllLocalizationsParserProcessor:v1

  • FilterArrayProcessor:v1

  • AssociateArrayByFieldProcessor:v1

Let’s see what it would look like in the case of localization.

<flow xmlns="https://app.respresso.io/public/schema/flow.xsd">
	<nodes>
		<processor id="parser" name="AllLocalizationsParserProcessor" version="1"/>

		<!--Split localization resources into two groups:
		one where no platform tag is present and one where a platform tag is present-->
		<processor id="filterWithPlatformTags" name="FilterArrayProcessor" version="1">
			{
			"arrayPath": "resources",
			"condition": "hasAny(readCurrentElement('data.tags'), ['android', 'ios', 'web'])"
			}
		</processor>

		<!--For the matched resources where there is at least one platform tag present we want to associate that resource with the platform.
		For this we use AssociateArrayByFieldProcessor which will output an object with the tags as keys and the values will be the matched resources in an array.-->
		<processor id="associateByTags" name="AssociateArrayByFieldProcessor" version="1">
			{
			"arrayPath": "resources",
			"keyPath": "data.tags"
			}
		</processor>

		<!--Android-->
		<processor id="convertToAndroid" name="AllLocalizationsToAndroidStringsXmlConverterProcessor" version="1">
			{ "exportDefaultAsIndividualLanguage":true }
		</processor>

		<!--ios-->
		<processor id="convertToIOS" name="AllLocalizationsToAppleStringsConverterProcessor" version="1">
			{ "fileName":"respresso" }
		</processor>

		<!--ios classes-->
		<processor id="convertToIosClasses" name="AllLocalizationsToObjectiveCClassConverterProcessor" version="1"/>

		<!--json-->
		<processor id="convertToJSON" name="AllLocalizationsToStructuredJsonConverterProcessor" version="1"/>

		<!--Some helper data nodes to make it easier to read the different platforms-->
		<data id="common"/>
		<data id="ios"/>
		<data id="android"/>
		<data id="web"/>

	</nodes>
	<connections>
		<connection from="@input" to="parser"/>

		<!--Copy the whole parsed localization data to the common node to make sure everything is copied. -->
		<connection read="config" from="parser" write="config" to="common"/>

		<!--Copy the resources array to the filter input.-->
		<connection read="resources" from="parser" write="resources" to="filterWithPlatformTags"/>

		<!--Copy the resources without platform tags to the common data node.-->
		<connection read="notMatched" from="filterWithPlatformTags" write="resources" to="common"/>
		<!--Copy the resources with platform tags to the associator processor-->
		<connection read="matched" from="filterWithPlatformTags" write="resources" to="associateByTags"/>

		<!--Copy every common resource to the platform specific data nodes to ensure common data will be included in every platform-->
		<connection from="common" to="android"/>
		<connection from="common" to="ios"/>
		<connection from="common" to="web"/>

		<!--Append platform specific resource arrays to the platform specific data nodes to ensure every platform specific data will be included in the corresponding platform-->
		<connection read="android" from="associateByTags" write="resources[+]" to="android"/>
		<connection read="ios" from="associateByTags" write="resources[+]" to="ios"/>
		<connection read="web" from="associateByTags" write="resources[+]" to="web"/>

		<!--Copy the resulting data to the platform specific converters which will get only the corresponding resources to convert.-->
		<connection from="android" to="convertToAndroid"/>
		<connection from="ios" to="convertToIOS"/>
		<connection from="ios" to="convertToIosClasses"/>
		<connection from="web" to="convertToJSON"/>

		<!--Collect the converted files to the output node-->
		<connection read="files" from="convertToAndroid" write="files[+]" to="@output"/>
		<connection read="files" from="convertToIOS" write="files[+]" to="@output"/>
		<connection read="files" from="convertToIosClasses" write="files[+]" to="@output"/>
		<connection read="files" from="convertToJSON" write="files[+]" to="@output"/>
	</connections>
</flow>
Flow visualization.

Explanation

Don’t panic, it looks a bit messy at first sight, but we will figure out what happens.
The basic idea is to find localization keys without platform tags (android, ios, web) and collect them into a ‘common’ group. Then the keys with platform tags are grouped according to their tags and appended to the common set in platform specific data nodes.
As a result, we get platform specific LocalizationParsedStructures where the resources array is filtered for the specific platform.
From this point we can handle it as if it was returned by the parser processor and convert it just as our default conversion would without these pre-filters.
The resulting conversion will include every key on every platform unless it has a platform tag in its tag list. If it has one, it will be included only on the referenced platforms.
For a step by step explanation, please see the comments in the XML above.
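To illustrate the filtering logic outside of the flow XML, here is a Python sketch that mimics what FilterArrayProcessor and AssociateArrayByFieldProcessor do in this flow. The resources / data.tags shape follows the paths used above; this is an illustration, not Respresso code.

```python
PLATFORM_TAGS = {"android", "ios", "web"}

def split_by_platform(resources):
    """Mimics FilterArrayProcessor + AssociateArrayByFieldProcessor from the
    flow above: resources without any platform tag form the common group,
    tagged resources are grouped under every platform tag they carry."""
    common = []
    by_platform = {tag: [] for tag in PLATFORM_TAGS}
    for resource in resources:
        matched = set(resource.get("data", {}).get("tags", [])) & PLATFORM_TAGS
        if matched:
            for tag in matched:
                by_platform[tag].append(resource)
        else:
            common.append(resource)
    return common, by_platform

def resources_for(platform, common, by_platform):
    # Each platform converter receives the common resources plus its own.
    return common + by_platform[platform]
```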