Split Learning—Bank Marketing#

The following code is a demo only. It is NOT for production due to system security concerns; please DO NOT use it directly in production.

In this tutorial, we use a bank marketing model as an example to show how to accomplish split learning for vertical scenarios under the SecretFlow framework. SecretFlow provides a user-friendly API that makes it easy to apply your Keras or PyTorch model to split learning and complete joint modeling tasks for vertical scenarios.

In this tutorial we will show you how to turn your existing Keras model into a split learning model under SecretFlow to complete federated multi-party modeling tasks.

What is Split Learning?#

The core idea of split learning is to split the network structure. Each device (silo) retains only part of the network, and the sub-networks of all devices are combined to form a complete model. During training, each device (silo) performs only the forward or backward computation of its local sub-network and transfers the results to the next device. The devices jointly train the model in this way until convergence.

split_learning_tutorial.png

Alice: has data_alice and model_base_alice
Bob: has data_bob, model_base_bob, and model_fuse
  1. Alice uses her data to compute hidden_0 through model_base_alice and sends it to Bob.

  2. Bob computes hidden_1 from his data through model_base_bob.

  3. hidden_0 and hidden_1 are fed into the AggLayer for aggregation, and the aggregated hidden_merge is the output.

  4. Bob feeds hidden_merge into model_fuse, computes the gradient against the label, and sends it back.

  5. The gradient is split by the AggLayer into two parts, g_0 and g_1, which are sent to Alice and Bob respectively.

  6. Alice and Bob then update their local base models with g_0 and g_1 (see the local sketch after this list).
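
To make this flow concrete, here is a minimal, single-process sketch of one training step in plain TensorFlow/Keras. The toy data, variable names, shapes, and layer sizes are illustrative assumptions, a plain concatenation stands in for the AggLayer, and nothing here is privacy-preserving; it only mirrors the forward/backward flow of the six steps above.

[ ]:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-ins for the two parties' local data (hypothetical shapes).
x_alice = np.random.rand(8, 4).astype(np.float32)             # Alice's 4 features
x_bob = np.random.rand(8, 12).astype(np.float32)              # Bob's 12 features
y = np.random.randint(0, 2, size=(8, 1)).astype(np.float32)   # labels on the label side

base_alice = keras.Sequential([layers.Dense(64, activation='relu', input_shape=(4,))])
base_bob = keras.Sequential([layers.Dense(64, activation='relu', input_shape=(12,))])
fuse = keras.Sequential([layers.Dense(1, activation='sigmoid', input_shape=(128,))])
optimizer = keras.optimizers.Adam()
loss_fn = keras.losses.BinaryCrossentropy()

with tf.GradientTape() as tape:
    hidden_0 = base_alice(x_alice)                      # step 1: Alice's base forward pass
    hidden_1 = base_bob(x_bob)                          # step 2: Bob's base forward pass
    hidden_merge = tf.concat([hidden_0, hidden_1], 1)   # step 3: aggregation (plain concat here)
    pred = fuse(hidden_merge)                           # step 4: fuse model on the label side
    loss = loss_fn(y, pred)

# Steps 4-6: in real split learning the gradients w.r.t. hidden_0/hidden_1 are sent
# back and each party updates only its own base model; here we update everything locally.
variables = base_alice.trainable_variables + base_bob.trainable_variables + fuse.trainable_variables
grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(grads, variables))
print(float(loss))
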

Task#

Marketing is the overall operation and sales activity through which the banking industry, in an ever-changing market environment, meets customer needs and achieves its business objectives. In today's big-data environment, data analysis provides a more effective means of analysis for the banking industry: analyzing customer demand and understanding target market trends can provide the basis and direction for broader market strategies.

The dataset comes from Kaggle and is a classic bank marketing dataset, collected from the telephone direct-marketing campaigns of a Portuguese banking institution. The target variable is whether the customer subscribes to a deposit product.

Data#

  1. The dataset used in this tutorial contains 4521 samples in total; after an 80/20 split, 3616 samples are used for training and 905 for testing.

  2. The feature dimension is 16 and the task is binary classification.

  3. We have split the data in advance: Alice holds the 4-dimensional basic attribute features, Bob holds the 12-dimensional bank transaction features, and only Alice holds the corresponding label.

Let's start by looking at what the bank marketing data looks like.

The original data is divided between Bank Alice and Bank Bob and stored with Alice and Bob respectively. The CSV here is the raw data that has only been split without any preprocessing; we will use SecretFlow's preprocessing utilities to preprocess the federated data.

[1]:
%load_ext autoreload
%autoreload 2

import secretflow as sf

sf.init(['alice', 'bob'], num_cpus=8, log_to_driver=True)
alice, bob = sf.PYU('alice'), sf.PYU('bob')

Prepare Data#

[2]:
import pandas as pd
from secretflow.utils.simulation.datasets import dataset

df = pd.read_csv(dataset('bank_marketing'), sep=';')

We assume that Alice is a new bank: it only has the users' basic information and the label of whether the user purchased the financial product, obtained from the other bank.

[3]:
alice_data = df[["age", "job", "marital", "education", "y"]]
alice_data
[3]:
age job marital education y
0 30 unemployed married primary no
1 33 services married secondary no
2 35 management single tertiary no
3 30 management married tertiary no
4 59 blue-collar married secondary no
... ... ... ... ... ...
4516 33 services married secondary no
4517 57 self-employed married tertiary no
4518 57 technician married secondary no
4519 28 blue-collar married secondary no
4520 44 entrepreneur single tertiary no

4521 rows × 5 columns

Bob is an established bank; it has the user's account balance, housing, loan, and recent marketing feedback.

[4]:
bob_data = df[["default", "balance", "housing", "loan", "contact",
             "day","month","duration","campaign","pdays","previous","poutcome"]]
bob_data
[4]:
default balance housing loan contact day month duration campaign pdays previous poutcome
0 no 1787 no no cellular 19 oct 79 1 -1 0 unknown
1 no 4789 yes yes cellular 11 may 220 1 339 4 failure
2 no 1350 yes no cellular 16 apr 185 1 330 1 failure
3 no 1476 yes yes unknown 3 jun 199 4 -1 0 unknown
4 no 0 yes no unknown 5 may 226 1 -1 0 unknown
... ... ... ... ... ... ... ... ... ... ... ... ...
4516 no -333 yes no cellular 30 jul 329 5 -1 0 unknown
4517 yes -3313 yes yes unknown 9 may 153 1 -1 0 unknown
4518 no 295 no no cellular 19 aug 151 11 -1 0 unknown
4519 no 1137 no no cellular 6 feb 129 4 211 3 other
4520 no 1136 yes yes cellular 3 apr 345 2 249 7 other

4521 rows × 12 columns

Create SecretFlow Environment#

Create two entities, alice and bob, in the SecretFlow environment, where alice and bob are two PYU devices. Once you have constructed these two objects, you can happily start split learning.

Import Dependencies#

[5]:
from secretflow.data.split import train_test_split
from secretflow.ml.nn import SLModel

Prepare Data#

Build Federated Table

A federated table is a virtual concept that spans multiple parties. We define VDataFrame for the vertical setting.

  1. The data of each party in a federated table is stored locally and is not allowed to leave that party's domain.

  2. No one can access the data storage except the party that owns the data.

  3. Any operation on the federated table is scheduled by the driver to each worker, and the execution instructions are delivered layer by layer until they reach the Python runtime of the specific worker. The framework ensures that data can only be operated on when worker.device and Object.device are the same.

  4. Federated tables are designed to manage and manipulate multi-party data from a central perspective.

  5. The interfaces of the federated table are aligned with pandas.DataFrame to reduce the cost of multi-party data operations.

  6. The SecretFlow framework provides plaintext & ciphertext hybrid programming capabilities. Vertical federated tables are built using SPU, and MPC-PSI is used to securely compute the intersection and align the data of all parties.

vdataframe.png

VDataFrame provides a read_csv interface similar to pandas, except that secretflow's read_csv receives a dictionary that defines the data path for each party. We can use secretflow.vertical.read_csv to build the VDataFrame; a hedged usage sketch follows the SPU creation below.

read_csv(file_dict, delimiter, ppu, keys, drop_key)
    file_dict: Path of each participant's file. The path can be a relative or absolute path to a local file
    ppu: PPU device used for PSI; if this parameter is not specified, the data must be pre-aligned
    keys: Key(s) used for the intersection

Create an SPU object

[6]:
spu = sf.SPU(sf.utils.testing.cluster_def(['alice', 'bob']))
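
For reference, here is a hedged sketch (not executed in this tutorial) of building a VDataFrame from two party-local CSV files with the spu object created above. The file paths and the 'uid' key are hypothetical, and the keyword names (spu, keys, drop_keys) follow newer SecretFlow releases, which differ slightly from the legacy names in the signature listed earlier. This tutorial instead loads a pre-split built-in dataset below.

[ ]:
from secretflow.data.vertical import read_csv as v_read_csv

# Hedged sketch only: the per-party file paths and the 'uid' key are hypothetical.
vdf = v_read_csv(
    {alice: '/path/to/bank_alice.csv', bob: '/path/to/bank_bob.csv'},
    spu=spu,           # SPU device used for PSI; omit if the data is already aligned
    keys='uid',        # intersection key present in both files
    drop_keys='uid',   # drop the key column after alignment
)
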
[7]:
from secretflow.utils.simulation.datasets import load_bank_marketing

# Alice has the first four features,
# while Bob has the remaining features.
data = load_bank_marketing(parts={alice: (0, 4), bob: (4, 16)}, axis=1)
# Alice holds the label.
label = load_bank_marketing(parts={alice: (16, 17)}, axis=1)

data is a vertically federated table. Globally it holds only the schema of all the data.

Let’s take a closer look at VDF data management

As can be seen from the example, the age field belongs to Alice, so the corresponding column can be obtained from Alice's partition, but Bob will get a KeyError when trying to access age.
Here we also meet the concept of a Partition, which is a data fragment we define. Each Partition has an owning device, and only that device can operate on the data.
[8]:
data['age'].partitions[alice].data
<secretflow.device.device.pyu.PYUObject object at 0x7f4b61e8c5e0>
[ ]:
# You can uncomment this and you will get a KeyError.
# data['age'].partitions[bob]
We then do data preprocessing on the VDataFrame.
Here we take LabelEncoder and MinMaxScaler as examples. These two preprocessing functions have counterparts in sklearn, and their usage is similar to that in sklearn.
[9]:
from secretflow.preprocessing.scaler import MinMaxScaler
from secretflow.preprocessing.encoder import LabelEncoder
[10]:
encoder = LabelEncoder()
data['job'] = encoder.fit_transform(data['job'])
data['marital'] = encoder.fit_transform(data['marital'])
data['education'] = encoder.fit_transform(data['education'])
data['default'] = encoder.fit_transform(data['default'])
data['housing'] = encoder.fit_transform(data['housing'])
data['loan'] = encoder.fit_transform(data['loan'])
data['contact'] = encoder.fit_transform(data['contact'])
data['poutcome'] = encoder.fit_transform(data['poutcome'])
data['month'] = encoder.fit_transform(data['month'])
label = encoder.fit_transform(label)
[11]:
print(f"label= {type(label)},\ndata = {type(data)}")
label= <class 'secretflow.data.vertical.dataframe.VDataFrame'>,
data = <class 'secretflow.data.vertical.dataframe.VDataFrame'>

Normalize the data via MinMaxScaler

[12]:
scaler = MinMaxScaler()

data = scaler.fit_transform(data)


Next we split the dataset into a training set and a test set.

[13]:
from secretflow.data.split import train_test_split
random_state = 1234
train_data,test_data = train_test_split(data, train_size=0.8, random_state=random_state)
train_label,test_label = train_test_split(label, train_size=0.8, random_state=random_state)

Summary: At this point, we have completed the definition of the federated table, data preprocessing, and the training/test split. The SecretFlow framework defines a set of operations on the federated table (whose logical counterpart is pandas.DataFrame) and a set of preprocessing operations (whose logical counterpart is sklearn). Refer to our documentation and API introduction to learn more about other features.
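
Before moving on, you can sanity-check the vertical split with the partition_shape() call that also appears in the DP setup later. This is a hedged sketch; given the 80/20 split of 4521 samples, we expect shapes of roughly (3616, 4)/(3616, 12) for training and (905, 4)/(905, 12) for testing.

[ ]:
# Per-party shapes of the vertically split data (the keys are the PYU devices).
print(train_data.values.partition_shape())
print(test_data.values.partition_shape())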

Introduce Model#

Local version: For this task, a basic DNN is sufficient; it takes the 16-dimensional features as input and, through a DNN, outputs the probabilities of the positive and negative classes.

Federated version:

  • Alice:

    - base_net: takes the 4-dimensional features as input and passes them through a DNN to produce the hidden output.

    - fuse_net: receives the hidden features computed by Alice and Bob, fuses them in the fuse net, and completes the forward and backward passes.

  • Bob:

    - base_net: takes the 12-dimensional features as input, produces the hidden output through a DNN, and sends it to Alice for the subsequent computation.

Define Model#

Next we create the federated model. We define SLTFModel and SLTorchModel (WIP), which are used to build split learning for vertical scenarios. We provide a simple, easy-to-use, and extensible interface that makes it easy to turn your existing model into an SF model and then do vertical federated modeling.

Split learning breaks a model apart so that one part runs locally where the data is and the other part runs on the label side. First let's define the locally executed model – base_model.

[14]:
def create_base_model(input_dim, output_dim,  name='base_model'):
    # Create model
    def create_model():
        from tensorflow import keras
        from tensorflow.keras import layers
        import tensorflow as tf
        model = keras.Sequential(
            [
                keras.Input(shape=input_dim),
                layers.Dense(100,activation ="relu" ),
                layers.Dense(output_dim, activation="relu"),
            ]
        )
        # Compile model
        model.summary()
        model.compile(loss='binary_crossentropy',
                      optimizer='adam',
                      metrics=["accuracy",tf.keras.metrics.AUC()])
        return model
    return create_model

We use create_base_model to create the base models for Alice and Bob, respectively.

[15]:
# prepare model
hidden_size = 64

model_base_alice = create_base_model(4, hidden_size)
model_base_bob = create_base_model(12, hidden_size)
[16]:
model_base_alice()
model_base_bob()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense (Dense)               (None, 100)               500

 dense_1 (Dense)             (None, 64)                6464

=================================================================
Total params: 6,964
Trainable params: 6,964
Non-trainable params: 0
_________________________________________________________________
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense_2 (Dense)             (None, 100)               1300

 dense_3 (Dense)             (None, 64)                6464

=================================================================
Total params: 7,764
Trainable params: 7,764
Non-trainable params: 0
_________________________________________________________________
[16]:
<keras.engine.sequential.Sequential at 0x7f4b61e31100>

Next we define the model on the side that holds the label, i.e., the server-side model – fuse_model. In the definition of fuse_model, we need to correctly define the loss, optimizer, and metrics. This is compatible with all configurations of your existing Keras model.

[17]:
def create_fuse_model(input_dim, output_dim, party_nums, name='fuse_model'):
    def create_model():
        from tensorflow import keras
        from tensorflow.keras import layers
        import tensorflow as tf
        # input
        input_layers = []
        for i in range(party_nums):
            input_layers.append(keras.Input(input_dim,))

        merged_layer = layers.concatenate(input_layers)
        fuse_layer = layers.Dense(64, activation='relu')(merged_layer)
        output = layers.Dense(output_dim, activation='sigmoid')(fuse_layer)

        model = keras.Model(inputs=input_layers, outputs=output)
        model.summary()

        model.compile(loss='binary_crossentropy',
                      optimizer='adam',
                      metrics=["accuracy",tf.keras.metrics.AUC()])
        return model
    return create_model
[18]:
model_fuse = create_fuse_model(
    input_dim=hidden_size, party_nums=2, output_dim=1)
[19]:
model_fuse()
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to
==================================================================================================
 input_3 (InputLayer)           [(None, 64)]         0           []

 input_4 (InputLayer)           [(None, 64)]         0           []

 concatenate (Concatenate)      (None, 128)          0           ['input_3[0][0]',
                                                                  'input_4[0][0]']

 dense_4 (Dense)                (None, 64)           8256        ['concatenate[0][0]']

 dense_5 (Dense)                (None, 1)            65          ['dense_4[0][0]']

==================================================================================================
Total params: 8,321
Trainable params: 8,321
Non-trainable params: 0
__________________________________________________________________________________________________
[19]:
<keras.engine.functional.Functional at 0x7f4a84633460>

Create Split Learning Model#

SecretFlow provides the split learning model SLModel. Initializing an SLModel only requires 3 parameters:

  • base_model_dict: A dictionary mapping every client participating in training to its base_model.

  • device_y: The PYU device that holds the label.

  • model_fuse: The fusion model.

Define base_model_dict

base_model_dict: Dict[PYU, model_fn]
[20]:
base_model_dict = {
    alice: model_base_alice,
    bob:   model_base_bob
}
[21]:
from secretflow.security.privacy import DPStrategy, GaussianEmbeddingDP, LabelDP

# Define DP operations.
train_batch_size = 128

# Gaussian noise added to the embedding (hidden output) of Alice's base model.
gaussian_embedding_dp = GaussianEmbeddingDP(
    noise_multiplier=0.5,
    l2_norm_clip=1.0,
    batch_size=train_batch_size,
    num_samples=train_data.values.partition_shape()[alice][0],
    is_secure_generator=False,
)
dp_strategy_alice = DPStrategy(embedding_dp=gaussian_embedding_dp)
# Label differential privacy with privacy budget eps=64.0.
label_dp = LabelDP(eps=64.0)
dp_strategy_bob = DPStrategy(label_dp=label_dp)
dp_strategy_dict = {alice: dp_strategy_alice, bob: dp_strategy_bob}
# How often (in training steps) to report the spent privacy budget.
dp_spent_step_freq = 10
[22]:
sl_model = SLModel(
    base_model_dict=base_model_dict,
    device_y=alice,
    model_fuse=model_fuse,
    dp_strategy_dict=dp_strategy_dict,)
[23]:
# Reveal Alice's partitions of the test data and test label for inspection (demo only).
sf.reveal(test_data.partitions[alice].data), sf.reveal(test_label.partitions[alice].data)
[23]:
(           age       job  marital  education
 1426  0.279412  0.181818      0.5   0.333333
 416   0.176471  0.636364      1.0   0.333333
 3977  0.264706  0.000000      0.5   0.666667
 2291  0.338235  0.000000      0.5   0.333333
 257   0.132353  0.909091      1.0   0.333333
 ...        ...       ...      ...        ...
 1508  0.264706  0.818182      1.0   0.333333
 979   0.544118  0.090909      0.0   0.000000
 3494  0.455882  0.090909      0.5   0.000000
 42    0.485294  0.090909      0.5   0.333333
 1386  0.455882  0.636364      0.5   0.333333

 [905 rows x 4 columns],
       y
 1426  0
 416   0
 3977  0
 2291  0
 257   0
 ...  ..
 1508  0
 979   0
 3494  0
 42    0
 1386  0

 [905 rows x 1 columns])
[24]:
# Reveal Alice's partitions of the training data and training label for inspection (demo only).
sf.reveal(train_data.partitions[alice].data), sf.reveal(train_label.partitions[alice].data)
[24]:
(           age       job  marital  education
 1106  0.235294  0.090909      0.5   0.333333
 1309  0.176471  0.363636      0.5   0.333333
 2140  0.411765  0.272727      1.0   0.666667
 2134  0.573529  0.454545      0.5   0.333333
 960   0.485294  0.818182      0.5   0.333333
 ...        ...       ...      ...        ...
 664   0.397059  0.090909      1.0   0.333333
 3276  0.235294  0.181818      0.5   0.666667
 1318  0.220588  0.818182      0.5   0.333333
 723   0.220588  0.636364      0.5   0.333333
 2863  0.176471  0.363636      1.0   0.666667

 [3616 rows x 4 columns],
       y
 1106  0
 1309  0
 2140  1
 2134  0
 960   0
 ...  ..
 664   0
 3276  0
 1318  0
 723   0
 2863  0

 [3616 rows x 1 columns])
(PYUSLTFModel pid=128496) Model: "sequential"
(PYUSLTFModel pid=128496) _________________________________________________________________
(PYUSLTFModel pid=128496)  Layer (type)                Output Shape              Param #
(PYUSLTFModel pid=128496) =================================================================
(PYUSLTFModel pid=128496)  dense (Dense)               (None, 100)               1300
(PYUSLTFModel pid=128496)
(PYUSLTFModel pid=128496)  dense_1 (Dense)             (None, 64)                6464
(PYUSLTFModel pid=128496)
(PYUSLTFModel pid=128496) =================================================================
(PYUSLTFModel pid=128496) Total params: 7,764
(PYUSLTFModel pid=128496) Trainable params: 7,764
(PYUSLTFModel pid=128496) Non-trainable params: 0
(PYUSLTFModel pid=128496) _________________________________________________________________
(PYUSLTFModel pid=128494) Model: "sequential"
(PYUSLTFModel pid=128494) _________________________________________________________________
(PYUSLTFModel pid=128494)  Layer (type)                Output Shape              Param #
(PYUSLTFModel pid=128494) =================================================================
(PYUSLTFModel pid=128494)  dense (Dense)               (None, 100)               500
(PYUSLTFModel pid=128494)
(PYUSLTFModel pid=128494)  dense_1 (Dense)             (None, 64)                6464
(PYUSLTFModel pid=128494)
(PYUSLTFModel pid=128494) =================================================================
(PYUSLTFModel pid=128494) Total params: 6,964
(PYUSLTFModel pid=128494) Trainable params: 6,964
(PYUSLTFModel pid=128494) Non-trainable params: 0
(PYUSLTFModel pid=128494) _________________________________________________________________
(PYUSLTFModel pid=128494) Model: "model"
(PYUSLTFModel pid=128494) __________________________________________________________________________________________________
(PYUSLTFModel pid=128494)  Layer (type)                   Output Shape         Param #     Connected to
(PYUSLTFModel pid=128494) ==================================================================================================
(PYUSLTFModel pid=128494)  input_2 (InputLayer)           [(None, 64)]         0           []
(PYUSLTFModel pid=128494)
(PYUSLTFModel pid=128494)  input_3 (InputLayer)           [(None, 64)]         0           []
(PYUSLTFModel pid=128494)
(PYUSLTFModel pid=128494)  concatenate (Concatenate)      (None, 128)          0           ['input_2[0][0]',
(PYUSLTFModel pid=128494)                                                                   'input_3[0][0]']
(PYUSLTFModel pid=128494)
(PYUSLTFModel pid=128494)  dense_2 (Dense)                (None, 64)           8256        ['concatenate[0][0]']
(PYUSLTFModel pid=128494)
(PYUSLTFModel pid=128494)  dense_3 (Dense)                (None, 1)            65          ['dense_2[0][0]']
(PYUSLTFModel pid=128494)
(PYUSLTFModel pid=128494) ==================================================================================================
(PYUSLTFModel pid=128494) Total params: 8,321
(PYUSLTFModel pid=128494) Trainable params: 8,321
(PYUSLTFModel pid=128494) Non-trainable params: 0
(PYUSLTFModel pid=128494) __________________________________________________________________________________________________
[25]:
sl_model.fit(train_data,
             train_label,
             validation_data=(test_data,test_label),
             epochs=10,
             batch_size=train_batch_size,
             shuffle=True,
             verbose=1,
             validation_freq=1,
             dp_spent_step_freq=dp_spent_step_freq,)
(PYUSLTFModel pid=3681860) Model: "sequential"
(PYUSLTFModel pid=3681860) _________________________________________________________________
(PYUSLTFModel pid=3681860)  Layer (type)                Output Shape              Param #
(PYUSLTFModel pid=3681860) =================================================================
(PYUSLTFModel pid=3681860)  dense (Dense)               (None, 100)               1300
(PYUSLTFModel pid=3681860)
(PYUSLTFModel pid=3681860)  dense_1 (Dense)             (None, 64)                6464
(PYUSLTFModel pid=3681860)
(PYUSLTFModel pid=3681860) =================================================================
(PYUSLTFModel pid=3681860) Total params: 7,764
(PYUSLTFModel pid=3681860) Trainable params: 7,764
(PYUSLTFModel pid=3681860) Non-trainable params: 0
(PYUSLTFModel pid=3681860) _________________________________________________________________
(PYUSLTFModel pid=3681860) Model: "sequential_1"
(PYUSLTFModel pid=3681860) _________________________________________________________________
(PYUSLTFModel pid=3681860)  Layer (type)                Output Shape              Param #
(PYUSLTFModel pid=3681860) =================================================================
(PYUSLTFModel pid=3681860)  dense_2 (Dense)             (None, 100)               1300
(PYUSLTFModel pid=3681860)
(PYUSLTFModel pid=3681860)  dense_3 (Dense)             (None, 64)                6464
(PYUSLTFModel pid=3681860)
(PYUSLTFModel pid=3681860) =================================================================
(PYUSLTFModel pid=3681860) Total params: 7,764
(PYUSLTFModel pid=3681860) Trainable params: 7,764
(PYUSLTFModel pid=3681860) Non-trainable params: 0
(PYUSLTFModel pid=3681860) _________________________________________________________________
(PYUSLTFModel pid=3681862) Model: "sequential"
(PYUSLTFModel pid=3681862) _________________________________________________________________
(PYUSLTFModel pid=3681862)  Layer (type)                Output Shape              Param #
(PYUSLTFModel pid=3681862) =================================================================
(PYUSLTFModel pid=3681862)  dense (Dense)               (None, 100)               500
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862)  dense_1 (Dense)             (None, 64)                6464
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862) =================================================================
(PYUSLTFModel pid=3681862) Total params: 6,964
(PYUSLTFModel pid=3681862) Trainable params: 6,964
(PYUSLTFModel pid=3681862) Non-trainable params: 0
(PYUSLTFModel pid=3681862) _________________________________________________________________
(PYUSLTFModel pid=3681862) Model: "sequential_1"
(PYUSLTFModel pid=3681862) _________________________________________________________________
(PYUSLTFModel pid=3681862)  Layer (type)                Output Shape              Param #
(PYUSLTFModel pid=3681862) =================================================================
(PYUSLTFModel pid=3681862)  dense_2 (Dense)             (None, 100)               500
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862)  dense_3 (Dense)             (None, 64)                6464
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862) =================================================================
(PYUSLTFModel pid=3681862) Total params: 6,964
(PYUSLTFModel pid=3681862) Trainable params: 6,964
(PYUSLTFModel pid=3681862) Non-trainable params: 0
(PYUSLTFModel pid=3681862) _________________________________________________________________
(PYUSLTFModel pid=3681862) Model: "model"
(PYUSLTFModel pid=3681862) __________________________________________________________________________________________________
(PYUSLTFModel pid=3681862)  Layer (type)                   Output Shape         Param #     Connected to
(PYUSLTFModel pid=3681862) ==================================================================================================
(PYUSLTFModel pid=3681862)  input_3 (InputLayer)           [(None, 64)]         0           []
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862)  input_4 (InputLayer)           [(None, 64)]         0           []
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862)  concatenate (Concatenate)      (None, 128)          0           ['input_3[0][0]',
(PYUSLTFModel pid=3681862)                                                                   'input_4[0][0]']
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862)  dense_4 (Dense)                (None, 64)           8256        ['concatenate[0][0]']
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862)  dense_5 (Dense)                (None, 1)            65          ['dense_4[0][0]']
(PYUSLTFModel pid=3681862)
(PYUSLTFModel pid=3681862) ==================================================================================================
(PYUSLTFModel pid=3681862) Total params: 8,321
(PYUSLTFModel pid=3681862) Trainable params: 8,321
(PYUSLTFModel pid=3681862) Non-trainable params: 0
(PYUSLTFModel pid=3681862) __________________________________________________________________________________________________
100%|██████████| 29/29 [00:03<00:00,  8.82it/s, epoch: 0/10 -  train_loss:0.42108118534088135  train_accuracy:0.8505681753158569  train_auc_2:0.546562671661377  val_loss:0.3594009280204773  val_accuracy:0.8877212405204773  val_auc_2:0.6065167188644409 ]
100%|██████████| 29/29 [00:01<00:00, 20.75it/s, epoch: 1/10 -  train_loss:0.3594178855419159  train_accuracy:0.8831877708435059  train_auc_2:0.6193124055862427  val_loss:0.3354310989379883  val_accuracy:0.8877212405204773  val_auc_2:0.6627764701843262 ]
100%|██████████| 29/29 [00:01<00:00, 20.43it/s, epoch: 2/10 -  train_loss:0.33101484179496765  train_accuracy:0.8888274431228638  train_auc_2:0.6729121208190918  val_loss:0.3168722987174988  val_accuracy:0.8877212405204773  val_auc_2:0.7386461496353149 ]
100%|██████████| 29/29 [00:01<00:00, 22.91it/s, epoch: 3/10 -  train_loss:0.30363929271698  train_accuracy:0.8918694853782654  train_auc_2:0.7538092732429504  val_loss:0.29769349098205566  val_accuracy:0.8877212405204773  val_auc_2:0.792555570602417 ]
100%|██████████| 29/29 [00:01<00:00, 23.11it/s, epoch: 4/10 -  train_loss:0.2974799871444702  train_accuracy:0.88606196641922  train_auc_2:0.7942224740982056  val_loss:0.27211683988571167  val_accuracy:0.8874446749687195  val_auc_2:0.8455085754394531 ]
100%|██████████| 29/29 [00:01<00:00, 23.23it/s, epoch: 5/10 -  train_loss:0.2701774835586548  train_accuracy:0.8856502175331116  train_auc_2:0.8522427082061768  val_loss:0.2566554844379425  val_accuracy:0.8918694853782654  val_auc_2:0.862573504447937 ]
100%|██████████| 29/29 [00:01<00:00, 23.72it/s, epoch: 6/10 -  train_loss:0.25473421812057495  train_accuracy:0.8919214010238647  train_auc_2:0.8635954856872559  val_loss:0.24478046596050262  val_accuracy:0.8979535102844238  val_auc_2:0.8796356916427612 ]
100%|██████████| 29/29 [00:01<00:00, 23.65it/s, epoch: 7/10 -  train_loss:0.23962831497192383  train_accuracy:0.8991539478302002  train_auc_2:0.8786008954048157  val_loss:0.24174632132053375  val_accuracy:0.8960176706314087  val_auc_2:0.8820918202400208 ]
100%|██████████| 29/29 [00:01<00:00, 24.32it/s, epoch: 8/10 -  train_loss:0.24154597520828247  train_accuracy:0.8996636867523193  train_auc_2:0.880363941192627  val_loss:0.28965064883232117  val_accuracy:0.8938053250312805  val_auc_2:0.864720344543457 ]
100%|██████████| 29/29 [00:01<00:00, 22.25it/s, epoch: 9/10 -  train_loss:0.2731249928474426  train_accuracy:0.8942201137542725  train_auc_2:0.8479944467544556  val_loss:0.24363426864147186  val_accuracy:0.8982300758361816  val_auc_2:0.8842920660972595 ]
[25]:
{'train_loss': [0.4210812,
  0.3594179,
  0.33101484,
  0.3036393,
  0.29748,
  0.27017748,
  0.25473422,
  0.23962831,
  0.24154598,
  0.273125],
 'train_accuracy': [0.8505682,
  0.8831878,
  0.88882744,
  0.8918695,
  0.88606197,
  0.8856502,
  0.8919214,
  0.89915395,
  0.8996637,
  0.8942201],
 'train_auc_2': [0.5465627,
  0.6193124,
  0.6729121,
  0.7538093,
  0.7942225,
  0.8522427,
  0.8635955,
  0.8786009,
  0.88036394,
  0.84799445],
 'val_loss': [0.35940093,
  0.3354311,
  0.3168723,
  0.2976935,
  0.27211684,
  0.25665548,
  0.24478047,
  0.24174632,
  0.28965065,
  0.24363427],
 'val_accuracy': [0.88772124,
  0.88772124,
  0.88772124,
  0.88772124,
  0.8874447,
  0.8918695,
  0.8979535,
  0.8960177,
  0.8938053,
  0.8982301],
 'val_auc_2': [0.6065167,
  0.66277647,
  0.73864615,
  0.7925556,
  0.8455086,
  0.8625735,
  0.8796357,
  0.8820918,
  0.86472034,
  0.88429207]}

Let’s call the evaluation function

[26]:
global_metric = sl_model.evaluate(test_data, test_label, batch_size=128)
print(global_metric)
Evaluate Processing:: 100%|██████████| 29/29 [00:00<00:00, 72.20it/s, loss:0.24379125237464905 accuracy:0.8979535102844238 auc_2:0.8850194811820984]
{'loss': 0.24379125, 'accuracy': 0.8979535, 'auc_2': 0.8850195}
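
Besides evaluate, SLModel also exposes a predict interface. The following is a hedged sketch rather than part of the original run; the exact return structure of predict may vary across SecretFlow versions.

[ ]:
# Predictions are produced on the label holder's device (alice); we reveal them
# here only because this is a local, plaintext demo.
y_pred = sl_model.predict(test_data, batch_size=128)
print(sf.reveal(y_pred))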

Comparison with the local model#

The model structure is consistent with the split learning model above, but here only Alice's part of the model is used. The model definition is shown in the code below.

Data#

The data is still the bank marketing data from Kaggle; here we only use Alice's data from the new bank.

[27]:
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow as tf
from sklearn.model_selection import train_test_split

def create_model():

    model = keras.Sequential(
        [
            keras.Input(shape=4),
            layers.Dense(100,activation ="relu" ),
            layers.Dense(64, activation='relu'),
            layers.Dense(64, activation='relu'),
            layers.Dense(1, activation='sigmoid')
        ]
    )
    model.compile(loss='binary_crossentropy',
                      optimizer='adam',
                      metrics=["accuracy",tf.keras.metrics.AUC()])
    return model

single_model = create_model()

Data processing

[28]:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder

encoder = LabelEncoder()
# Work on a copy to avoid pandas' SettingWithCopyWarning, since alice_data is a slice of df.
alice_data = alice_data.copy()
alice_data.loc[:, 'job'] = encoder.fit_transform(alice_data['job'])
alice_data.loc[:, 'marital'] = encoder.fit_transform(alice_data['marital'])
alice_data.loc[:, 'education'] = encoder.fit_transform(alice_data['education'])
alice_data.loc[:, 'y'] = encoder.fit_transform(alice_data['y'])
[29]:
y = alice_data['y']
alice_data = alice_data.drop(columns=['y'],inplace=False)
[30]:
scaler = MinMaxScaler()
alice_data = scaler.fit_transform(alice_data)
[31]:
train_data,test_data = train_test_split(alice_data,train_size=0.8,random_state=random_state)
train_label,test_label = train_test_split(y,train_size=0.8,random_state=random_state)
[32]:
test_data.shape
[32]:
(905, 4)
[33]:
single_model.fit(train_data,train_label,validation_data=(test_data,test_label),batch_size=128,epochs=10,shuffle=False)
Epoch 1/10
29/29 [==============================] - 1s 13ms/step - loss: 0.5276 - accuracy: 0.8603 - auc_3: 0.4496 - val_loss: 0.4029 - val_accuracy: 0.8729 - val_auc_3: 0.4166
Epoch 2/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3715 - accuracy: 0.8877 - auc_3: 0.4557 - val_loss: 0.3980 - val_accuracy: 0.8729 - val_auc_3: 0.4094
Epoch 3/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3652 - accuracy: 0.8877 - auc_3: 0.4300 - val_loss: 0.3924 - val_accuracy: 0.8729 - val_auc_3: 0.4060
Epoch 4/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3598 - accuracy: 0.8877 - auc_3: 0.4403 - val_loss: 0.3892 - val_accuracy: 0.8729 - val_auc_3: 0.4206
Epoch 5/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3584 - accuracy: 0.8877 - auc_3: 0.4546 - val_loss: 0.3871 - val_accuracy: 0.8729 - val_auc_3: 0.4446
Epoch 6/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3572 - accuracy: 0.8877 - auc_3: 0.4705 - val_loss: 0.3853 - val_accuracy: 0.8729 - val_auc_3: 0.4678
Epoch 7/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3559 - accuracy: 0.8877 - auc_3: 0.4822 - val_loss: 0.3837 - val_accuracy: 0.8729 - val_auc_3: 0.4922
Epoch 8/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3548 - accuracy: 0.8877 - auc_3: 0.4977 - val_loss: 0.3823 - val_accuracy: 0.8729 - val_auc_3: 0.5142
Epoch 9/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3538 - accuracy: 0.8877 - auc_3: 0.5101 - val_loss: 0.3811 - val_accuracy: 0.8729 - val_auc_3: 0.5290
Epoch 10/10
29/29 [==============================] - 0s 4ms/step - loss: 0.3529 - accuracy: 0.8877 - auc_3: 0.5220 - val_loss: 0.3800 - val_accuracy: 0.8729 - val_auc_3: 0.5447
[33]:
<keras.callbacks.History at 0x7f1a74578610>

Summary#

The above two experiments simulate a typical vertical-scenario training problem. Alice and Bob share the same sample group, but each party holds only part of the features. If Alice trains a model using only her own data, she obtains an accuracy of about 0.873 and an AUC of about 0.545. However, if Bob's data is combined, a model with an accuracy of about 0.898 and an AUC of about 0.885 can be obtained.

Conclusion#

  • This tutorial introduces what split learning is and how to do it in SecretFlow.

  • As the experimental results show, split learning can significantly expand the feature dimension and improve model performance through joint multi-party training.

  • This tutorial uses plaintext aggregation for demonstration and does not address leakage of the hidden layer. SecretFlow provides AggLayer to avoid leaking the hidden layer through plaintext transmission by using MPC, TEE, HE, and DP. If you are interested, please refer to the relevant documentation.

  • Next, you may want to try different datasets; you need to shard the data vertically first and then follow the flow of this tutorial (a minimal sharding sketch follows).
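
For example, here is a minimal sketch of vertically sharding a local dataset into per-party CSV files before following this tutorial; the file names and column groups are hypothetical.

[ ]:
import pandas as pd

# Hypothetical local dataset; split its columns between the two parties.
my_df = pd.read_csv('my_dataset.csv')
alice_cols = ['age', 'job', 'marital', 'education', 'y']      # the label holder keeps the label column
bob_cols = [c for c in my_df.columns if c not in alice_cols]  # the remaining features go to the other party

my_df[alice_cols].to_csv('alice_part.csv', index=False)
my_df[bob_cols].to_csv('bob_part.csv', index=False)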