Total Internal Reflection

Technology and Art




Test Page

LaTeX Test

\[\text{Confidence Interval} = \hat{X} \pm Z \cdot \frac{\sigma}{\sqrt{n}}\]
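As a quick numeric companion to the formula above, here is a minimal Python sketch that computes a confidence interval for a sample mean; the synthetic sample and the use of scipy.stats.norm.ppf for the Z-score are illustrative assumptions, not part of the original test page.

    import numpy as np
    from scipy.stats import norm

    def confidence_interval(sample, confidence=0.95):
        # X-hat: the sample mean
        x_hat = np.mean(sample)
        # sigma: the sample standard deviation (ddof=1 for an unbiased estimate)
        sigma = np.std(sample, ddof=1)
        n = len(sample)
        # Z: the two-sided critical value for the requested confidence level
        z = norm.ppf(1 - (1 - confidence) / 2)
        margin = z * sigma / np.sqrt(n)
        return x_hat - margin, x_hat + margin

    # Example: 95% confidence interval for a small synthetic sample
    print(confidence_interval([4.8, 5.1, 5.0, 4.9, 5.2]))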

IncludeChart-ChartJS Test

MermaidJS Test

graph LR;
    debt[Tech Debt] --> principal[Cost of Fixing Debt: Principal];
    debt --> interest[Recurring Cost: Interest];
    debt --> risk[Risk-Related Cost];
    architecture_decision[Architecture Decision] --> resources[Cloud Resources];
    microservice --> database[Cloud DB Resources];
    microservice --> development_cost[Development Cost];
    microservice --> latency[Latency];
    microservice --> bugs[Fixing bugs];
    microservice --> downtime[Downtime] --> lost_transactions[Lesser Lost Transactions];
    style microservice fill:#006f00,stroke:#000,stroke-width:2px,color:#fff
    style debt fill:#006fff,stroke:#000,stroke-width:2px,color:#fff
    style architecture_decision fill:#8f0f00,stroke:#000,stroke-width:2px,color:#fff

IncludeCodeTag Test


    # The encoder output is injected directly into a sublayer of every Decoder. To build up the chain of Decoders
    # in PyTorch so that the full stack can live inside a Sequential block, we inject the encoder output into the
    # root Decoder and have each Decoder return the encoder output (together with its own output), so that the
    # next Decoder in the stack can consume both the Encoder and Decoder outputs.
    def forward(self, input):
        encoder_output, previous_stage_output = input
        masked_mh_output = self.masked_multiheaded_attention_layer(
            self.masked_qkv_source.forward(previous_stage_output))
        input_qkv = self.unmasked_qkv_source.forward((encoder_output, masked_mh_output))
        mh_output = self.multiheaded_attention_layer(input_qkv)
        # Adds the residual connection to the output of the attention layer
        layer_normed_multihead_output = self.layer_norm(mh_output + previous_stage_output)
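        # Applies the position-wise feedforward network to each attention output vector, then stacks the results back into a tensor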
        ffnn_outputs = torch.stack(
            list(map(lambda attention_vector: self.feedforward_layer(attention_vector), layer_normed_multihead_output)))
        layer_normed_ffnn_output = self.layer_norm(ffnn_outputs + layer_normed_multihead_output)
        return (encoder_output, layer_normed_ffnn_output)
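
To show how this (encoder_output, previous_stage_output) contract lets the Decoders compose inside a Sequential block, here is a minimal sketch; the PassthroughDecoder stand-in and the tensor shapes are assumptions for illustration only, while the real Decoder applies the attention and feedforward sublayers shown above.

    import torch
    from torch import nn

    class PassthroughDecoder(nn.Module):
        # Stand-in with the same contract as the real Decoder above: it receives
        # (encoder_output, previous_stage_output) and returns (encoder_output, new_output),
        # so the encoder output is threaded through the whole stack unchanged.
        def forward(self, input):
            encoder_output, previous_stage_output = input
            return (encoder_output, previous_stage_output)

    # Because every stage re-emits the encoder output, the chain composes cleanly in nn.Sequential.
    decoder_stack = nn.Sequential(PassthroughDecoder(), PassthroughDecoder(), PassthroughDecoder())

    encoder_output = torch.randn(10, 512)       # illustrative shapes
    target_embeddings = torch.randn(10, 512)
    _, decoded = decoder_stack((encoder_output, target_embeddings))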