alpha-mind · Commits · 6d06e29d

Commit 6d06e29d, authored Aug 28, 2017 by Dr.李
parent 7b4198be

update models

Showing 2 changed files with 5 additions and 4 deletions (+5 −4):

  alphamind/examples/model_training.py   +3 −2
  alphamind/model/data_preparing.py      +2 −2
alphamind/examples/model_training.py  (view file @ 6d06e29d)

@@ -31,14 +31,14 @@ training - every 4 week
 engine = SqlEngine('postgresql+psycopg2://postgres:A12345678!@10.63.6.220/alpha')
 universe = Universe('zz500', ['zz500'])
-neutralize_risk = industry_styles
+neutralize_risk = ['SIZE'] + industry_styles
 alpha_factors = ['RVOL', 'EPS', 'CFinc1', 'BDTO', 'VAL', 'GREV',
                  'ROEDiluted']  # ['BDTO', 'RVOL', 'CHV', 'VAL', 'CFinc1'] # risk_styles
 benchmark = 905
 n_bins = 5
 frequency = '1w'
 batch = 4
-start_date = '2012-01-01'
+start_date = '2011-01-05'
 end_date = '2017-08-31'
 '''
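Adding 'SIZE' to neutralize_risk means the alpha factors are stripped of their linear exposure to size as well as to the industry dummies. A minimal sketch of that idea, not alpha-mind's implementation (all data below is synthetic and the exposure layout is an assumption):

```python
import numpy as np

def neutralize(factor: np.ndarray, risk_exposures: np.ndarray) -> np.ndarray:
    """Return OLS residuals of `factor` regressed on `risk_exposures`
    for one cross-section of stocks."""
    beta, *_ = np.linalg.lstsq(risk_exposures, factor, rcond=None)
    return factor - risk_exposures @ beta

rng = np.random.default_rng(0)
size = rng.normal(size=500)                 # hypothetical SIZE exposures
industries = rng.integers(0, 10, size=500)  # hypothetical industry labels
dummies = np.eye(10)[industries]            # industry dummy exposures
X = np.column_stack([size, dummies])        # ['SIZE'] + industry_styles analogue

raw = 0.5 * size + rng.normal(size=500)     # a factor contaminated by SIZE
clean = neutralize(raw, X)
# The residual carries no linear SIZE exposure:
print(abs(np.corrcoef(clean, size)[0, 1]) < 1e-6)
```

OLS residuals are orthogonal to every regressor, so the neutralized factor's correlation with SIZE (and each industry dummy) vanishes up to floating-point noise.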
@@ -146,6 +146,7 @@ for i, predict_date in enumerate(dates):
                                      is_tradable=is_tradable)
     final_res[i] = analysis['er']['total'] / benchmark_w.sum()
+    print('trade_date: {0} predicting finished'.format(train_date))
 last_date = advanceDateByCalendar('china.sse', dates[-1], frequency)
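The loop above normalizes each date's total excess return by the sum of benchmark weights, so dates whose weights are not already normalized to 1 remain comparable. A toy sketch of that bookkeeping (all names and numbers here are hypothetical stand-ins, not alpha-mind's analysis output):

```python
import numpy as np

dates = ['2017-08-01', '2017-08-08']
final_res = np.zeros(len(dates))

for i, predict_date in enumerate(dates):
    benchmark_w = np.array([0.3, 0.3, 0.4])  # hypothetical benchmark weights
    er = np.array([0.01, -0.02, 0.03])       # hypothetical per-stock excess returns
    er_total = er @ benchmark_w              # stand-in for analysis['er']['total']
    # Excess return per unit of benchmark exposure:
    final_res[i] = er_total / benchmark_w.sum()
    print('trade_date: {0} predicting finished'.format(predict_date))
```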
alphamind/model/data_preparing.py  (view file @ 6d06e29d)

@@ -139,8 +139,6 @@ def fetch_data_package(engine: SqlEngine,
                               benchmark,
                               warm_start)
-    alpha_logger.info("Loading data is finished")
-
     if neutralized_risk:
         risk_df = engine.fetch_risk_model_range(universe, dates=dates, risk_model=risk_model)[1]
         used_neutralized_risk = list(set(neutralized_risk).difference(transformer.names))
@@ -168,6 +166,8 @@ def fetch_data_package(engine: SqlEngine,
     return_df['industry_code'] = train_x['industry_code']
     return_df['isOpen'] = train_x['isOpen']

+    alpha_logger.info("Loading data is finished")
+
     train_x_buckets, train_y_buckets, predict_x_buckets = batch_processing(x_values,
                                                                           y_values,
                                                                           dates,
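With frequency = '1w' and batch = 4 in the example script, batch_processing groups the weekly dates into training and prediction buckets. A toy sketch of the trailing-window idea, under the assumption that each prediction date trains on the `batch` preceding dates (this is not alpha-mind's actual batch_processing):

```python
def make_buckets(dates, batch):
    """Map each prediction date to its trailing window of `batch` training dates."""
    train_buckets, predict_buckets = {}, {}
    for i in range(batch, len(dates)):
        predict_date = dates[i]
        train_buckets[predict_date] = dates[i - batch:i]  # trailing `batch` dates
        predict_buckets[predict_date] = [predict_date]
    return train_buckets, predict_buckets

dates = ['w1', 'w2', 'w3', 'w4', 'w5', 'w6']
train, predict = make_buckets(dates, batch=4)
print(train['w5'])    # ['w1', 'w2', 'w3', 'w4']
print(predict['w6'])  # ['w6']
```

The first `batch` dates produce no prediction bucket because they lack a full training window, which matches the "training - every 4 week" comment in the example script's docstring.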