Introducing RFPs, Grant Updates & New EDF Management.

A lot has been happening behind the scenes at the Ecosystem Development Fund over the past months. We’ve been working on a number of changes to make the EDF more transparent and responsive to the IOTA community.

Most notably, we have gotten back to everyone who has submitted a grant application so far to update them on the status of their application. For those whose status is still pending, we will be going through the applications in the next few weeks, classifying and responding to each of them.

Grant Updates

With this blog post, the Ecosystem Team is happy to announce three new grantees, all of whom will improve our understanding of IOTA and the Tangle.

Whether through research or through the development of monitoring tools, we are looking forward to seeing the progress these three projects will make over the next few months. We will ensure the community is continually updated on the progress of these grants via blog posts.

UNIO ($98,472)

Alexander Ladas (known in our community as cyclux), Maximilian Beck (Maximilian), and Evaldas Drasutis (Lunfardo) are three of the IOTA Ecosystem’s best-known community developers, primarily for creating three of the most popular analytical tools within the IOTA Ecosystem — tanglemonitor.com, tangle.glumb.de and traviota.iotalt.com.

As part of the grant, these three tools will be open sourced, documented and unified into UNTANGLE IOTA (UNIO).

In this combination, UNIO will be able to ensure data integrity while providing a good overall reflection of the state and health of a Tangle within an intuitive consolidated dashboard. Not only that, UNIO will also provide API access to data for consumption by other applications.

We are very excited to collaborate and combine efforts through the UNIO project! With support from the Ecosystem Fund, we will be able to exploit the full potential of our frequently used Tangle tools. Besides various (performance) improvements and feature additions, a major advantage will be the utilization of a shared backend, which ensures data integrity across all metrics and visualizations. We are also particularly looking forward to releasing the first open metrics API, including Tangle history. This will not only facilitate retrospective Tangle analysis but will also create opportunities for future projects, which will be able to freely access our APIs. Finally, our goal is to provide valuable data alongside the progressive development of the Tangle and its implementations, as well as an educational and informational source to represent the Tangle technology.

UNIO Team

To learn more about UNIO, check out the following link:

UNIO – Untangle IOTA

Green IOTA ($19,800)

Amir Abbaszadeh Sori is a Ph.D. student in Software Engineering at Islamic Azad University in Iran and received his M.Sc. degree in Computer Networks from the Amirkabir University of Technology. He has published and presented several papers on Green Banking, cryptocurrencies, IOTA and the Tangle at important events in Iran.

As part of the grant, Amir Abbaszadeh Sori will focus his Ph.D. thesis on Green IOTA. The thesis aims to investigate and define green cryptocurrency requirements, analyse IOTA and the Tangle against those requirements, and provide solutions to reduce the energy consumed by computation, storage and communication.

“This research adds a new topic to the IOTA research roadmap and is a good step toward green efficiency in the cryptocurrency space, a topic that is still scarce in the scientific community.

Green IOTA aims to provide solutions that reduce energy consumption in computation, storage and communication, to show that IOTA is green, and to keep payment systems healthy for the planet.

A large part of the energy waste in cryptocurrencies is in the mining sector. IOTA does not have this specific problem, but keep in mind that removing mining alone will not prevent the loss of energy. IOTA already meets some evident green standards, and this research tries to highlight and enhance IOTA’s green efficiency.

I really appreciate the guidance and close collaboration of Dr. Serguei Popov and Dr. Moody Alam.”

– Amir Abbaszadeh Sori

To stay updated about Green IOTA, bookmark his research on our Ecosystem website: https://ecosystem.iota.org/projects/green-iota

Sebastian Mueller Research Grant ($18,900)

Sebastian Mueller finished his PhD in mathematics in 2007 with “summa cum laude” and recently obtained his Habilitation in Mathematics at Aix-Marseille University. Over his career, he has worked and lived in 5 different countries, won over 10 prizes and grants, and published over 20 research articles.

As part of the grant, Sebastian Mueller’s research will focus on the “Fast Probabilistic Consensus” (FPC): he will investigate mathematical questions, propose different ways of implementing the consensus, and perform simulations and optimizations alongside our own research team.

“Randomness is ubiquitous in most aspects of life. The right amount of randomness enables systems to adapt to environmental changes and stressors. IOTA technologies already use the power of randomness in various parts, notably in the construction of the Tangle. At present, validation of transactions in the IOTA Tangle still depends on a centralized and non-random coordinator. The Coordicide project aims to eventually remove the Coordinator and replace it with a decentralized validation protocol. This research project aims to find the optimal amount of randomness in a coordinator-free IOTA.”

– Sebastian Mueller

Any grants paid out by the Ecosystem Development Fund are visible on the EDF Transparency Tracker, which stores an immutable record of the agreements on the Tangle.

Requests for Proposals

Alongside improvements to the way we handle grant applications, we are also introducing requests for proposals (RFPs) on specific topics we see as vital to the IOTA ecosystem.

Over the last few months we have seen a large variance in submission focus and quality. With the introduction of RFPs, we aim to focus the applications we receive on high-impact areas that we have identified. That being said, we are still happy to receive grant applications outside of the RFP areas.

Developer Tooling

Over the last three years the IOTA protocol has matured by leaps and bounds thanks to the contributions of the IOTA Foundation and its vibrant community. As the protocol matures, so does the need for a set of open-source tools that provide greater consistency, accessibility, and abstraction for understanding the mechanics of an operating network.

We have seen a number of amazing tools emerge from developers within the community. These tools have become a vital part of the development cycle for the IOTA Foundation and community developers alike.

The Ecosystem Development Fund is keen to ensure the continued development of tools that promote adoption of the technology. That is why we are requesting proposals for the development of open-source tools that improve the developer experience.

Developer Tooling

Digital ID

Identity is a crucial concept that shapes the way humans interact with the individuals, companies and governments around them. As time progresses, identity will shift away from centralised or third-party-owned identity solutions towards ones rooted in technology that empowers personal ownership of one’s identity.

This will improve the ability to communicate an identity in a secure and seamless manner that respects the individual to whom it relates.

The majority of current identity solutions are based on systems that are owned by corporations or governments. These solutions, such as Facebook Login or Google OAuth, reduce the complexity of managing identity but crucially exclude the individual from owning their identity.

The EDF is seeking proposals for a Digital Identity standard and example implementation that utilizes the IOTA protocol and is compatible with the W3C’s DID Specification. This standard should be able to serve both machines and humans, allowing both groups to maintain their own Self-Sovereign Digital Identity as needed.

More information on Digital Identity will be announced in the near future on the IOTA blog.

Digital Identity RFP

Decentralised Data Marketplace

Looking towards the future, an ever-growing number of devices will be connected to the internet and to each other. These devices will have various roles, but the majority will be devoted to sensing the environment around them. This real-time information will be of great value not only to the owner of the device but also to those who purchase the data it produces.

In order to ensure the seamless advertisement and sale of this information, the EDF is requesting proposals for applications that utilize the IOTA protocol to create a decentralized data marketplace standard, library and accompanying reference implementation.

Decentralized Data Marketplace RFP

Changes within the EDF

Over the last few months there have also been a number of structural and management changes around the EDF.

Previously, the development fund was an externally operated entity. This has changed: the EDF is now part of the Ecosystem Department of the IOTA Foundation.

This has helped streamline a lot of processes and reduce duplicated effort in everyday administration. In addition, we are in the process of reworking the grant application and management process to ensure clear communication with applicants, grantees and alumni. We are also simplifying the website in order to provide a clearer understanding of what the EDF’s purpose is and how to get involved.

Furthermore, Mark Schmidt, who was previously the Community Manager for the Foundation, has changed roles to become the Fund Manager of the EDF.

This means he will take over the daily activities of the EDF from John Licciardello, who has since left the EDF. We are excited for Mark to take on this challenge and given his intimate knowledge of the community we expect him to excel in the role.



Explaining the Qubic Computation Model: part 6

In the first part of this series we looked at a conceptual overview of the Abra specification, which describes how data flow is achieved within the Qubic Computation Model. In the second, third, fourth, and fifth parts we used the Abra specification to implement Qupla, a higher-level programming language for Qubic. Now it is time to introduce the Qubic Supervisor and the way it interacts with Qupla functions to create even more complex behavior.

As a reminder: we advise readers who finish this series to re-read it in its entirety for deeper understanding. What seems complex at first reading will become clearer on the second or even the third reading. We are conveying a huge number of new concepts and paradigms. We are doing our best to keep the explanation as clear as possible, but your best tools for learning and understanding are repetition and experimentation. So check out the Qupla repository that we created and play around with the language.

The Qubic Supervisor

Qubics are strictly event-driven, meaning that they depend on the Tangle and other sources for input data, and run whenever that input data becomes available or changes. This behavior requires an agent that can wait for these input events to happen, gather the new input data, schedule a qubic to run its computations with this input data, and then pass on the output data of the qubic. Because this entity supervises the dispatching of data to and from qubics, we call it the Qubic Supervisor, or Supervisor for short.

The Supervisor has a plug-in design that allows any kind of data source to function as a source of input data or output data. Possible data sources that come to mind are messages posted on the Tangle, HTTP GET and POST URLs, or hardware I/O ports.

With qubics being packaged as intermediate trit-code, it also falls to the Supervisor to find and load the qubics on demand and prepare them for running on the target processing device.

Environment-Entity-Effect

Remember how we explained the reactive data flow in part 1 as consisting of computational entities that send effects to environments. Any computational entity can generate data and send that as an effect to one or more environments. The effect data is subsequently passed on to any computational entity that has joined these environments. Any entity that processes an effect can in turn send results as effect data to one or more environments.

The indirection that is created by sending effects between entities through environments allows for very flexible combinations of events to trigger all kinds of computations. Entities do not necessarily have to be aware of each other. They simply send effects to environments, and any entity can join these environments and have the effect data delivered to it for processing.

This is how we can implement oracles that deliver external data to the Qubic system. Any source of data can package this data as an effect and ask the Supervisor to send it to an environment. If a qubic entity has joined that environment, it will be triggered to process the effect data. Once the qubic has processed the effect data, its result data can in turn be sent as effect data to an environment, which causes the Supervisor to pass it to entities that have joined that environment for further processing. One such entity could be yet another oracle, which then can function as the data provider for further external processing.

Note that it is entirely possible for a qubic entity to be triggered by the effect data generated by another qubic entity instead of data generated by an oracle entity. That’s why we have always maintained that there is no clear demarcation line between classical, external oracles and (internal) qubics functioning as oracles. There are only processing entities generating and consuming effect data via environments. Some entities happen to lie on the boundary of the system and the outside world. Others are entirely within the system and are constructed in such a way that they can process data in complex ways by a divide-and-conquer technique that combines many small, easy to understand processing entities into more complex processing behaviors.

The processing entities aren’t necessarily only qubics, either. Effect data can be processed by an oracle by passing it on as external data outside the system. Or a processing entity could be a view that displays the effect data. You can mix and match in a very flexible way.

Imagine for example a special monitor entity that can join any environment and just displays what effect data passes through. You could hook a separate monitor entity to each environment that you are interested in to see what is going on internally. Or even to provide that environment with effect data for processing, for that matter. This would allow us to test entities in isolation.

Another way that oracles can be used is as gateways between Q-nodes, passing effect data over external connections of any type. Such oracles would function as simple communication adapters, and all they have to do is pass the effect data to the correct environment on their counterpart’s Q-node. To the local Q-node this is practically invisible. As far as it is concerned, all it does is send effect data to a local environment; the oracle that joined that environment sends it to the other Q-node, where it gets picked up by its counterpart, which then forwards it to a local environment on its own system.

It is up to the Supervisor to integrate all the different entities in the system and keep track of what environments they have joined, so that it can pass any effect data arriving at these environments to the correct joined entities. In addition it will make sure that the effects are processed in a deterministic way, so that when the same data is processed by different Q-nodes the resulting order of processing is the same for any Q-node, no matter how many effects arrive or how many environments get triggered simultaneously.
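To make this bookkeeping concrete, here is a minimal conceptual sketch in Java of how a supervisor could track joins and route effects. Note that this is not the actual Supervisor API from the Qupla repository; the names (MiniSupervisor, Entity, onEffect, send) are hypothetical and only illustrate the join/affect indirection described above. The real Supervisor additionally enforces per-quant invocation limits and a deterministic processing order, which this sketch omits.

import java.util.*;

// Hypothetical sketch, NOT the real Qubic Supervisor API.
class MiniSupervisor {
    interface Entity {
        // process incoming effect data; a null result means "affect nothing"
        byte[] onEffect(byte[] effectData);
    }

    // environment name -> entities that joined it
    private final Map<String, List<Entity>> joined = new HashMap<>();
    // entity -> environments it affects with its return value
    private final Map<Entity, List<String>> affects = new HashMap<>();

    void join(String environment, Entity entity) {
        joined.computeIfAbsent(environment, k -> new ArrayList<>()).add(entity);
    }

    void affect(Entity entity, String environment) {
        affects.computeIfAbsent(entity, k -> new ArrayList<>()).add(environment);
    }

    // deliver effect data to every entity that joined the environment, then
    // forward each non-null result to the environments that entity affects
    void send(String environment, byte[] effectData) {
        for (Entity entity : joined.getOrDefault(environment, List.of())) {
            byte[] result = entity.onEffect(effectData);
            if (result == null) continue;
            for (String target : affects.getOrDefault(entity, List.of())) {
                send(target, result);
            }
        }
    }
}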

Qupla functions as entities

Any function in Qupla can operate as an entity that can receive effects from environments as well as generate effects for environments. To facilitate this, we have added two special statements to the language: join and affect.

Join statement

The join statement is used to join an environment. Join statements are always the first group of statements declared in a function. There are two forms:

func Int entity (Int val) {
join myEnvironment
join myOtherEnvironment limit 3
return val
}

The function entity() specifies that it joins 2 separate environments. The environments joined are myEnvironment and myOtherEnvironment. As you can see, the second join statement specifies a limit of 3 invocations of this entity function through myOtherEnvironment within a single quant. The first join statement does not specify a limit, in which case the invocation limit defaults to 1. This means that the only way to have an entity function be invoked multiple times within the same quant is to explicitly specify an invocation limit. This limit guarantees that entities that cause an invocation loop can never cause an infinite invocation loop within a single quant.

Whenever effect data is sent to myEnvironment or myOtherEnvironment the function entity() will be called and the effect data gets passed in as the val argument. When the size of the effect data trit vector is smaller than the size of the function argument, it will be padded with zero trits until it has equal size. When it is larger than the size of the function argument it will be truncated to the correct size. It is up to the creator of the entity() function to make sure that the effect data trit vector is interpreted in the way it was defined for the environment. Note that you can see an environment as an interface definition and the entities that join it as implementations of that interface.
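In conventional terms, this size adjustment behaves like the following small Java helper (a hypothetical illustration, not part of the Qupla interpreter), assuming the effect data is padded or truncated at its tail and trits are represented as bytes:

import java.util.Arrays;

// Fit effect data to the size of the entity function argument:
// shorter vectors are padded with zero trits, longer ones are truncated.
static byte[] fitToArgumentSize(byte[] effectData, int argumentSize) {
    // Arrays.copyOf pads with zeros when growing and drops the tail when shrinking
    return Arrays.copyOf(effectData, argumentSize);
}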

Affect statement

The affect statement is used to send effect data to an environment. This will trigger invocation of the entities that have joined that environment by passing the effect data to them. Affect statements are always the second group of statements declared in a function (after any join statements). There are again two forms:

func Int entity (Int val) {
affect myEnvironment
affect myOtherEnvironment delay 5
return val
}

The function entity() here specifies that it affects 2 environments. The environments affected are myEnvironment and myOtherEnvironment. As you can see, the second affect statement specifies a delay of 5 quants before it takes effect. The first affect statement does not specify a delay, in which case the delay defaults to 0 (zero). A delay of zero means that the effect data is sent to the environment within the current quant. Note that this does not necessarily mean that it gets processed within the current quant. If an entity in that environment has reached its invocation limit, processing of the effect for that entity will be delayed until the next quant instead.

The actual effect data that is sent to myEnvironment or myOtherEnvironment is the return value of the function entity(). When the return value is a null trit vector, this means that no environments will be affected. In any other case the return value will be sent to the specified environments. Any null trits in the trit vector will be replaced with zero trits before the effect data is sent to any environments. It is up to the creator of the entity() function to make sure that the effect data trit vector is structured in the way it was defined by these environments.

Introducing the Game of Life

We will be using the original Proof of Concept application for the QCM as a case study. That application is an implementation of Conway’s Game of Life. This cellular automaton ‘game’ displays the evolution of generation after generation of cells on a grid that live or die according to a very simple set of rules:

  1. A live cell with fewer than 2 neighbors starves and dies.
  2. A live cell with more than 3 neighbors suffocates and dies.
  3. A dead cell with exactly 3 neighbors spawns a new live cell.

It turns out that with these simple rules very complex behaviors can occur that are highly fascinating to watch and experiment with. Starting with a grid that contains a number of random cells is already very entertaining. But figuring out which cell patterns exhibit special behaviors makes for such nice puzzles that an entire online community has formed around the Game of Life, exploring its intricacies and sharing the results.
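For readers who prefer conventional code, the three rules above boil down to this short Java sketch; it is a plain restatement of the rules, unrelated to the Qupla implementation shown later:

// Returns the new state of a cell (true = alive) given its current state
// and its number of live neighbors, following the three rules above.
static boolean nextCellState(boolean alive, int liveNeighbors) {
    if (alive) {
        // rule 1: fewer than 2 neighbors starves; rule 2: more than 3 suffocates
        return liveNeighbors == 2 || liveNeighbors == 3;
    }
    // rule 3: a dead cell with exactly 3 neighbors spawns a new live cell
    return liveNeighbors == 3;
}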

So what do we need to be able to implement the Game of Life (GoL) in terms of EEE? Only a handful of entities are necessary:

  • Of course, you want to be able to display the current grid state, so you need a view entity that can display the grid.
  • Another entity is responsible for handling user interaction and translating that into operations on the grid.
  • And finally, we need an entity for which Qupla is extremely well-suited, namely one that can calculate the next generation of the grid from a starting grid. This is a large computation that is very well suited to parallelization and execution on an FPGA.

A Qupla Game of Life

You can see the Game of Life in action and play with it yourself when you run the Qupla interpreter with the GameOfLife module as a parameter. The GoL will run with a fixed 81×81 grid. Qupla comes with a Java GameOfLifeEntity that doubles as both the view entity and the user interaction entity. We will start our explanation with the simplest version: a single user with a single view of the grid.

At startup the GameOfLifeEntity will join the GolView environment, where updates to the game grid will be sent as an effect that contains the entire 81×81 grid as effect data. When the GameOfLifeEntity receives such an effect it will store a copy of the grid data in its local memory and display the current grid state in its view.

User interaction comes in the form of mouse input. You can use the left mouse button to click or draw directly on the view. The state of the cell under the mouse pointer determines whether clicking/drawing creates or removes live cells. Clicking first inverts the state of the cell under the mouse pointer and then copies that state along the mouse path as it draws. The changes are sent directly as an effect to the GolView environment, so that the view can update itself automatically and you can see the effect of your mouse input taking place in real time.

The right mouse button performs a special function. Each time it is clicked the current grid is sent as an effect to the GolGen environment. This will cause a Qupla entity to calculate the next generation of the grid, which is then sent as effect data to the GolView environment so that it can be displayed in the view.

Note how the Java GameOfLifeEntity functions as both an input oracle and an output oracle in the QCM. It will provide the external data for the Qupla gameOfLife() function entity that calculates the next generation, and also processes the changes to the grid meant for external display purposes.

The gameOfLife() function

From the rules of GoL it is clear that to calculate the new generation of the grid we need two things: a count of neighbors of a cell and a decision based on that count that results in the new state of the cell. And of course we need to do this for every cell in the 81×81 grid, preferably in parallel. This means we take the input grid, calculate the new state for each cell, and construct a new grid from those new cells to be returned as the next generation grid.

Since any cell that has >3 neighbors is doomed, we can use a special counter that only keeps track of 0–3 and >3 neighbors. This is easily done with a LUT. Whether that LUT counts binary or trinary does not really matter. For historic reasons we use a binary count here. This is what the LUT looks like:

// input trit 1 & 2: current binary amount of neighbors (-- when >3)
// input trit 3: next neighbor cell state (0 = dead, 1 = alive)
// output: new binary amount of neighbors (-- when >3)
lut binSum {
0,0,0 = 0,0 // 0 + 0 = 0 neighbors
0,0,1 = 1,0 // 0 + 1 = 1 neighbor
1,0,0 = 1,0 // 1 + 0 = 1 neighbor
1,0,1 = 0,1 // 1 + 1 = 2 neighbors
0,1,0 = 0,1 // 2 + 0 = 2 neighbors
0,1,1 = 1,1 // 2 + 1 = 3 neighbors
1,1,0 = 1,1 // 3 + 0 = 3 neighbors
1,1,1 = -,- // 3 + 1 = >3 neighbors
-,-,0 = -,- // >3 + 0 = >3 neighbors
-,-,1 = -,- // >3 + 1 = >3 neighbors
}

Now all we need to do is start with a binary amount of zero and feed the running tally of the amount plus the state of each of the 8 neighbor cells through the binSum LUT. Note how each combination where the next cell is dead simply outputs the original binary input value. And each combination where the next cell is alive outputs the next higher binary number, until it becomes more than 3, in which case we return a special overflow indicator: -,-. Any neighbor we try to add after that simply returns the overflow indicator again.
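Stripped of its trit encoding, binSum is simply a saturating neighbor counter. A rough Java equivalent of its behavior (for intuition only; the actual implementation is the LUT above) could look like this:

// Saturating neighbor count: 0..3 are tracked exactly, anything above 3
// collapses into a single ">3" state (represented here as 4), mirroring
// the -,- overflow indicator of the binSum LUT.
static int addNeighbor(int runningCount, int neighborAlive) {
    // neighborAlive is 0 (dead) or 1 (alive)
    return Math.min(runningCount + neighborAlive, 4);
}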

The next step is to use the rules of the game on the calculated number of neighbors in combination with the current state of the current cell. Again, we can use a simple LUT to decide:

// input trit 1 & 2: binary amount of neighbors (-- when >3)
// input trit 3: old cell state (0 = dead, 1 = alive)
// output: new cell state according to GoL rules
lut newCellState {
0,0,0 = 0 // 0 neighbors + dead -> stay dead
0,0,1 = 0 // 0 neighbors + alive -> starve cell
1,0,0 = 0 // 1 neighbor + dead -> stay dead
1,0,1 = 0 // 1 neighbor + alive -> starve cell
0,1,0 = 0 // 2 neighbors + dead -> stay dead
0,1,1 = 1 // 2 neighbors + alive -> stay alive
1,1,0 = 1 // 3 neighbors + dead -> spawn cell
1,1,1 = 1 // 3 neighbors + alive -> stay alive
-,-,0 = 0 // >3 neighbors + dead -> suffocate cell, stay dead
-,-,1 = 0 // >3 neighbors + alive -> suffocate cell
}

So once we have calculated the total number of neighbors of a cell by using the binSum LUT we can immediately decide on the new state of the cell by using the newCellState LUT. Now all we need is a mechanism to iterate over each row of the grid, and within each row iterate over each cell in the row.

The problem we encounter here is that in Qupla it is not possible to use a flexible index that points directly at the current cell in the grid. We need another way to achieve this. And the way we do this is by shifting each row 1 cell at a time, each time using the first cell in the row as the current cell. Then we determine the new state for that cell and append that new state at the end of the row. Once we have done this for each cell in the row, we have a new row consisting of all new cells in the correct sequence.

We can use the same trick in an outer loop for each row, by shifting out one row at a time from the grid, creating the new row cell by cell, and then appending the new row at the end of the grid. Once we have done this for each row in the grid, we have a new grid consisting of all new rows in the correct sequence.

There’s only one additional thing left to do, and that is to take care of the cells at the edges of the grid, which don’t have neighbors on all sides. There are usually 2 ways of dealing with this in game environments. The first way is to add an empty border around the grid and only iterate over the grid within that border. The second way is to treat the grid as ‘wrap-around’ so that the cells at the far ends are seen as neighbors of each other. In our PoC we use the first approach, an empty border around the grid. Here is the final Qupla implementation, which has already been turned into a grid-size-independent template:

template gameOfLife<T> {
type Cell [Trit]
type Col [T]
type Row [T]
type Grid [Row * Col]
type BorderedGrid [Row + Grid + Row]
type BorderedRow [Cell + Row + Cell]
type NeighborRows [BorderedRow * 3]

func Grid gameOfLife<T> (Grid grid) {
// sandwich grid between two dummy zero border rows
borderRow = as<Row>(0)
borderedGrid = borderRow & grid & borderRow

// loop over all grid rows, rowShifter starts at first position
newBorderedGrid = golLoopRows<T>(borderedGrid, all<Row>(1))

// extract new grid from tail end
return newBorderedGrid[BorderedGrid - Grid : Grid]
}

func BorderedGrid golLoopRows<T> (BorderedGrid grid, Row rowShifter) {
// check if row shifter is done
rowShifterIsDone = isZero[rowShifter[0]]
return rowShifterIsDone ? grid : golProcessRows<T>(grid, rowShifter)
}

func BorderedGrid golProcessRows<T> (BorderedGrid grid, Row rowShifter) {
// extract current row and neighbor rows
// and sandwich them between dummy zero border cells
borderCell = as<Cell>(0)
rows = borderCell & grid[Row * 0 : Row] & borderCell
& borderCell & grid[Row * 1 : Row] & borderCell
& borderCell & grid[Row * 2 : Row] & borderCell

// loop over all row columns, colShifter starts at first position
newRows = golLoopCols<T>(rows, all<Col>(1))

// extract new row from tail of newRows
newRow = newRows[NeighborRows - Row : Row]

// shift grid one row and append new row at end.
// when done iterating (shifter runs out) the
// tail end of newGrid will hold the new grid
newGrid = grid[Row : BorderedGrid - Row] & newRow

// one more row iteration done
newRowShifter = lshift<Row>(rowShifter)
return golLoopRows<T>(newGrid, newRowShifter)
}

func NeighborRows golLoopCols<T> (NeighborRows rows, Col colShifter) {
// check if col shifter is done
colShifterIsDone = isZero[colShifter[0]]
return colShifterIsDone ? rows : golProcessCols<T>(rows, colShifter)
}

func NeighborRows golProcessCols<T> (NeighborRows rows, Col colShifter) {
// calculate number of alive neighbours for current cell
alive0 = binSum[ 0, 0, rows[0 * BorderedRow + 0]]
alive1 = binSum[alive0[0], alive0[1], rows[0 * BorderedRow + 1]]
alive2 = binSum[alive1[0], alive1[1], rows[0 * BorderedRow + 2]]
alive3 = binSum[alive2[0], alive2[1], rows[1 * BorderedRow + 0]]
alive5 = binSum[alive3[0], alive3[1], rows[1 * BorderedRow + 2]]
alive6 = binSum[alive5[0], alive5[1], rows[2 * BorderedRow + 0]]
alive7 = binSum[alive6[0], alive6[1], rows[2 * BorderedRow + 1]]
alive8 = binSum[alive7[0], alive7[1], rows[2 * BorderedRow + 2]]

// determine new cell state for current cell
newCell = newCellState[alive8[0], alive8[1], rows[1 * BorderedRow + 1]]

// shift rows 1 cell and append new cell at end.
// when done iterating (shifter runs out) the tail
// end of newRows will hold the new row of cells
newRows = rows[Cell : NeighborRows - Cell] & newCell

// one more col iteration done
newColShifter = lshift<Col>(colShifter)
return golLoopCols<T>(newRows, newColShifter)
}
}

We use a trick here to determine the number of iterations in a loop. We set a shifter parameter of the same size as a row or column to all(1). We now shift this parameter at each iteration to the left. As long as we’re not done the first trit will contain a one. The moment we’re done we have successively shifted out all the ones and are left with zeroes, so the first trit will be a zero. This way, we only need to check a single trit instead of the entire shifter value, and instead of doing a much more costly decrement operation we use a cheap shift operation.
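The same trick can be shown in conventional code: instead of decrementing an integer counter, we shift a vector of ones and only ever inspect its first element. The Java sketch below is illustrative only and assumes that each shift consumes one of the ones; the exact shift semantics of Qupla’s lshift may differ in detail:

// Loop counter as a shifter: start with a vector of ones the size of a row,
// shift it each iteration, and stop once the first element becomes zero.
static void iterateRow(int rowSize) {
    int[] shifter = new int[rowSize];
    java.util.Arrays.fill(shifter, 1);               // like all<Row>(1)
    while (shifter[0] != 0) {                        // cheap single-element test
        // ... process the current cell here ...

        // shift: drop one element and append a zero at the other end,
        // so exactly rowSize iterations are performed in total
        System.arraycopy(shifter, 1, shifter, 0, rowSize - 1);
        shifter[rowSize - 1] = 0;
    }
}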

Note how we separate the loop control mechanism from the loop contents by using separate functions. This is a good practice to keep the functions simple and clear. It does not matter for the quality and speed of the generated code anyway.

Another optimization we chose to implement is that, because we are only interested in the direct neighbors of a cell, we don’t pass the entire game grid around constantly, but only the rows that contain the direct neighbors of the current row we operate on.

Now that we can use Qupla to generate the next generation grid from an input grid it is time to start putting things together with EEE.

Integrating the Qupla gameOfLife() function in EEE

The way that we integrate a function as an entity in EEE with Qupla is by wrapping that function in another function that joins and/or affects specific environments. It may sound like an unnecessary extra step to wrap a function within a function, but in this way we are decoupling the functionality of the function implementation from its integration as a function entity within EEE. That in turn means that we can use the same function (which could even be derived from a template) in other function entities without change.

type GolSize [81]
type GolGrid [GolSize * GolSize]
func GolGrid golGen(GolGrid grid) {
join GolGen
affect GolView

return gameOfLife<GolSize>(grid)
}

In this case we define the grid size for GoL to be 81×81. We also define a GolGrid type that will contain the game grid and that we can send around as effect data. Now all we need to do is create an entity function golGen() that joins the GolGen environment so that it will receive game grids from the GameOfLifeEntity that we defined earlier whenever the user right-clicks the view. The golGen() entity function calls the gameOfLife() function version for the correct size of the game grid and the result will affect the GolView environment. This in turn will cause the GameOfLifeEntity to receive the new generation of the game grid and display it.

So to recap, this simple example has 2 entity components that interact with each other through 2 separate environments:

  1. Any 81×81 grid that is sent to the GolView environment will be displayed somehow.
  2. Any 81×81 grid that is sent to the GolGen environment will cause the next generation of the grid to be calculated and displayed somehow.

The 2 entities that work together are:

  1. The Java GameOfLifeEntity that joins the GolView environment to display any grid that is received from that environment in its view.
  2. The Qupla golGen() function entity that joins the GolGen environment to calculate the next generation of the grid.

The effects being sent by the entities are:

  1. The Java GameOfLifeEntity sends any grid it wants to be displayed to the GolView environment, and sends any grid it wants to use as basis for a new generation to the GolGen environment.
  2. The Qupla golGen() function entity sends any new generation grid it has calculated to the GolView environment to be displayed.

You can see all this in action when you run the GameOfLife demo that we have provided with the Qupla interpreter.

Conclusion

We introduced the Qubic Supervisor and showed how it provides the Environment-Entity-Effect event model defined by the QCM, which allows application components to communicate with each other in intricate ways. We then showed how to integrate a Qupla function into the EEE model, and proceeded to implement Conway’s Game of Life as a simple Proof of Concept example application to demonstrate how it all works together.

In the next part of this series we will explore in what kinds of interesting ways we can expand on this simple Proof of Concept example.

