Feb 9 05:23:39.570711 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 05:23:39.570723 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 05:23:39.570730 kernel: BIOS-provided physical RAM map:
Feb 9 05:23:39.570734 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 05:23:39.570738 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 05:23:39.570741 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 05:23:39.570746 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 05:23:39.570750 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 05:23:39.570754 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000820dcfff] usable
Feb 9 05:23:39.570758 kernel: BIOS-e820: [mem 0x00000000820dd000-0x00000000820ddfff] ACPI NVS
Feb 9 05:23:39.570762 kernel: BIOS-e820: [mem 0x00000000820de000-0x00000000820defff] reserved
Feb 9 05:23:39.570766 kernel: BIOS-e820: [mem 0x00000000820df000-0x000000008afccfff] usable
Feb 9 05:23:39.570770 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 9 05:23:39.570774 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 9 05:23:39.570779 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 9 05:23:39.570784 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 9 05:23:39.570788 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 9 05:23:39.570792 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 9 05:23:39.570796 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 05:23:39.570800 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 05:23:39.570804 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 05:23:39.570809 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 05:23:39.570813 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 05:23:39.570817 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 9 05:23:39.570821 kernel: NX (Execute Disable) protection: active
Feb 9 05:23:39.570825 kernel: SMBIOS 3.2.1 present.
Feb 9 05:23:39.570830 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022
Feb 9 05:23:39.570834 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 05:23:39.570838 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 05:23:39.570843 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 05:23:39.570847 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 05:23:39.570852 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 9 05:23:39.570856 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 05:23:39.570860 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 9 05:23:39.570864 kernel: Using GB pages for direct mapping
Feb 9 05:23:39.570869 kernel: ACPI: Early table checksum verification disabled
Feb 9 05:23:39.570874 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 05:23:39.570878 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 05:23:39.570882 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 9 05:23:39.570887 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 05:23:39.570893 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 9 05:23:39.570898 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 9 05:23:39.570903 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 9 05:23:39.570908 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 05:23:39.570912 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 05:23:39.570917 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 05:23:39.570922 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 05:23:39.570926 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 05:23:39.570931 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 05:23:39.570936 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 05:23:39.570941 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 05:23:39.570946 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 05:23:39.570950 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 05:23:39.570955 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 05:23:39.570959 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 05:23:39.570964 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 05:23:39.570969 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 05:23:39.570973 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 05:23:39.570979 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 05:23:39.570983 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 9 05:23:39.570988 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 05:23:39.570993 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 05:23:39.570997 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 05:23:39.571002 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 9 05:23:39.571006 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 05:23:39.571011 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 05:23:39.571016 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 05:23:39.571021 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 05:23:39.571026 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 05:23:39.571030 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 9 05:23:39.571035 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 9 05:23:39.571040 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 9 05:23:39.571044 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 9 05:23:39.571049 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 9 05:23:39.571053 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 9 05:23:39.571058 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 9 05:23:39.571063 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 9 05:23:39.571068 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 9 05:23:39.571073 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 9 05:23:39.571077 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 9 05:23:39.571082 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 9 05:23:39.571086 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 9 05:23:39.571091 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 9 05:23:39.571096 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 9 05:23:39.571100 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 9 05:23:39.571106 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 9 05:23:39.571110 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 9 05:23:39.571115 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 9 05:23:39.571119 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 9 05:23:39.571124 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 9 05:23:39.571129 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 9 05:23:39.571133 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 9 05:23:39.571138 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 9 05:23:39.571143 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 9 05:23:39.571148 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 9 05:23:39.571152 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 9 05:23:39.571157 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 9 05:23:39.571162 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 9 05:23:39.571166 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 9 05:23:39.571171 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 9 05:23:39.571176 kernel: No NUMA configuration found
Feb 9 05:23:39.571180 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 9 05:23:39.571185 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 9 05:23:39.571190 kernel: Zone ranges:
Feb 9 05:23:39.571195 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 05:23:39.571200 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 05:23:39.571204 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 9 05:23:39.571209 kernel: Movable zone start for each node
Feb 9 05:23:39.571213 kernel: Early memory node ranges
Feb 9 05:23:39.571218 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 05:23:39.571222 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 05:23:39.571227 kernel: node 0: [mem 0x0000000040400000-0x00000000820dcfff]
Feb 9 05:23:39.571232 kernel: node 0: [mem 0x00000000820df000-0x000000008afccfff]
Feb 9 05:23:39.571237 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 9 05:23:39.571242 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 9 05:23:39.571246 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 9 05:23:39.571251 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 9 05:23:39.571256 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 05:23:39.571264 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 05:23:39.571269 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 05:23:39.571274 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 05:23:39.571279 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 9 05:23:39.571285 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 9 05:23:39.571290 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 9 05:23:39.571295 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 9 05:23:39.571300 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 05:23:39.571305 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 05:23:39.571310 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 05:23:39.571315 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 05:23:39.571321 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 05:23:39.571326 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 05:23:39.571331 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 05:23:39.571336 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 05:23:39.571341 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 05:23:39.571346 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 05:23:39.571351 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 05:23:39.571356 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 05:23:39.571361 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 05:23:39.571367 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 05:23:39.571372 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 05:23:39.571376 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 05:23:39.571381 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 05:23:39.571386 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 05:23:39.571391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 05:23:39.571396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 05:23:39.571401 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 05:23:39.571406 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 05:23:39.571412 kernel: TSC deadline timer available
Feb 9 05:23:39.571417 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 05:23:39.571422 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 9 05:23:39.571427 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 05:23:39.571432 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 05:23:39.571437 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 05:23:39.571442 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 05:23:39.571447 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 05:23:39.571452 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 05:23:39.571458 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 9 05:23:39.571463 kernel: Policy zone: Normal
Feb 9 05:23:39.571468 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 05:23:39.571474 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 05:23:39.571478 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 05:23:39.571483 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 05:23:39.571489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 05:23:39.571494 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 9 05:23:39.571500 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 05:23:39.571505 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 05:23:39.571510 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 05:23:39.571515 kernel: rcu: Hierarchical RCU implementation.
Feb 9 05:23:39.571520 kernel: rcu: RCU event tracing is enabled.
Feb 9 05:23:39.571525 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 05:23:39.571530 kernel: Rude variant of Tasks RCU enabled.
Feb 9 05:23:39.571535 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 05:23:39.571540 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 05:23:39.571546 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 05:23:39.571551 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 05:23:39.571556 kernel: random: crng init done
Feb 9 05:23:39.571561 kernel: Console: colour dummy device 80x25
Feb 9 05:23:39.571566 kernel: printk: console [tty0] enabled
Feb 9 05:23:39.571571 kernel: printk: console [ttyS1] enabled
Feb 9 05:23:39.571578 kernel: ACPI: Core revision 20210730
Feb 9 05:23:39.571583 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 9 05:23:39.571607 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 05:23:39.571613 kernel: DMAR: Host address width 39
Feb 9 05:23:39.571618 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 05:23:39.571623 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 05:23:39.571628 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 9 05:23:39.571648 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 9 05:23:39.571653 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 05:23:39.571658 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 05:23:39.571663 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 05:23:39.571668 kernel: x2apic enabled
Feb 9 05:23:39.571673 kernel: Switched APIC routing to cluster x2apic.
Feb 9 05:23:39.571678 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 05:23:39.571684 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 05:23:39.571689 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 05:23:39.571693 kernel: process: using mwait in idle threads
Feb 9 05:23:39.571698 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 05:23:39.571703 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 05:23:39.571708 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 05:23:39.571713 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 05:23:39.571719 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 05:23:39.571724 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 05:23:39.571729 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 05:23:39.571734 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 05:23:39.571738 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 05:23:39.571743 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 05:23:39.571748 kernel: TAA: Mitigation: TSX disabled
Feb 9 05:23:39.571753 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 05:23:39.571758 kernel: SRBDS: Mitigation: Microcode
Feb 9 05:23:39.571763 kernel: GDS: Vulnerable: No microcode
Feb 9 05:23:39.571768 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 05:23:39.571774 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 05:23:39.571779 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 05:23:39.571784 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 05:23:39.571789 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 05:23:39.571794 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 05:23:39.571798 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 05:23:39.571803 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 05:23:39.571808 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 05:23:39.571813 kernel: Freeing SMP alternatives memory: 32K
Feb 9 05:23:39.571818 kernel: pid_max: default: 32768 minimum: 301
Feb 9 05:23:39.571823 kernel: LSM: Security Framework initializing
Feb 9 05:23:39.571828 kernel: SELinux: Initializing.
Feb 9 05:23:39.571833 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 05:23:39.571839 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 05:23:39.571843 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 05:23:39.571848 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 05:23:39.571853 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 05:23:39.571858 kernel: ... version: 4
Feb 9 05:23:39.571863 kernel: ... bit width: 48
Feb 9 05:23:39.571868 kernel: ... generic registers: 4
Feb 9 05:23:39.571873 kernel: ... value mask: 0000ffffffffffff
Feb 9 05:23:39.571878 kernel: ... max period: 00007fffffffffff
Feb 9 05:23:39.571884 kernel: ... fixed-purpose events: 3
Feb 9 05:23:39.571889 kernel: ... event mask: 000000070000000f
Feb 9 05:23:39.571894 kernel: signal: max sigframe size: 2032
Feb 9 05:23:39.571899 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 05:23:39.571904 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 05:23:39.571909 kernel: smp: Bringing up secondary CPUs ...
Feb 9 05:23:39.571914 kernel: x86: Booting SMP configuration:
Feb 9 05:23:39.571919 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 05:23:39.571924 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 05:23:39.571930 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 05:23:39.571935 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 05:23:39.571940 kernel: smpboot: Max logical packages: 1
Feb 9 05:23:39.571945 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 05:23:39.571950 kernel: devtmpfs: initialized
Feb 9 05:23:39.571955 kernel: x86/mm: Memory block size: 128MB
Feb 9 05:23:39.571960 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x820dd000-0x820ddfff] (4096 bytes)
Feb 9 05:23:39.571965 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 9 05:23:39.571971 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 05:23:39.571976 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 05:23:39.571981 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 05:23:39.571986 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 05:23:39.571991 kernel: audit: initializing netlink subsys (disabled)
Feb 9 05:23:39.571996 kernel: audit: type=2000 audit(1707456214.040:1): state=initialized audit_enabled=0 res=1
Feb 9 05:23:39.572001 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 05:23:39.572006 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 05:23:39.572011 kernel: cpuidle: using governor menu
Feb 9 05:23:39.572017 kernel: ACPI: bus type PCI registered
Feb 9 05:23:39.572022 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 05:23:39.572027 kernel: dca service started, version 1.12.1
Feb 9 05:23:39.572032 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 05:23:39.572037 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 05:23:39.572042 kernel: PCI: Using configuration type 1 for base access
Feb 9 05:23:39.572047 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 05:23:39.572051 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 05:23:39.572056 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 05:23:39.572062 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 05:23:39.572067 kernel: ACPI: Added _OSI(Module Device)
Feb 9 05:23:39.572072 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 05:23:39.572077 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 05:23:39.572082 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 05:23:39.572087 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 05:23:39.572092 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 05:23:39.572097 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 05:23:39.572102 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 05:23:39.572108 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 05:23:39.572113 kernel: ACPI: SSDT 0xFFFF9FCFC0213800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 05:23:39.572118 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 05:23:39.572123 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 05:23:39.572128 kernel: ACPI: SSDT 0xFFFF9FCFC1AE1000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 05:23:39.572133 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 05:23:39.572138 kernel: ACPI: SSDT 0xFFFF9FCFC1A5C000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 05:23:39.572142 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 05:23:39.572147 kernel: ACPI: SSDT 0xFFFF9FCFC1A5C800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 05:23:39.572152 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 05:23:39.572158 kernel: ACPI: SSDT 0xFFFF9FCFC0149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 05:23:39.572163 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 05:23:39.572168 kernel: ACPI: SSDT 0xFFFF9FCFC1AE3C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 05:23:39.572173 kernel: ACPI: Interpreter enabled
Feb 9 05:23:39.572178 kernel: ACPI: PM: (supports S0 S5)
Feb 9 05:23:39.572183 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 05:23:39.572188 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 05:23:39.572193 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 05:23:39.572198 kernel: HEST: Table parsing has been initialized.
Feb 9 05:23:39.572204 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 05:23:39.572209 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 05:23:39.572214 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 05:23:39.572219 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 05:23:39.572224 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 05:23:39.572229 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 05:23:39.572234 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 05:23:39.572239 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 05:23:39.572243 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 05:23:39.572249 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 05:23:39.572254 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 05:23:39.572259 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 05:23:39.572264 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 05:23:39.572269 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 05:23:39.572274 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 05:23:39.572340 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 05:23:39.572384 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 05:23:39.572426 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 05:23:39.572434 kernel: PCI host bridge to bus 0000:00
Feb 9 05:23:39.572478 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 05:23:39.572516 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 05:23:39.572552 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 05:23:39.572591 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 9 05:23:39.572663 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 05:23:39.572701 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 05:23:39.572752 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 05:23:39.572800 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 05:23:39.572843 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 05:23:39.572888 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 05:23:39.572929 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 9 05:23:39.572975 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 05:23:39.573018 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 9 05:23:39.573064 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 05:23:39.573107 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 9 05:23:39.573148 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 05:23:39.573192 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 05:23:39.573235 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 9 05:23:39.573276 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 9 05:23:39.573322 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 05:23:39.573362 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 05:23:39.573407 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 05:23:39.573448 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 05:23:39.573493 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 05:23:39.573535 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 9 05:23:39.573578 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 05:23:39.573659 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 05:23:39.573699 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 9 05:23:39.573742 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 05:23:39.573798 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 05:23:39.573843 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 9 05:23:39.573885 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 05:23:39.573931 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 05:23:39.573974 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 9 05:23:39.574015 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 9 05:23:39.574057 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 9 05:23:39.574097 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 9 05:23:39.574146 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 9 05:23:39.574190 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 9 05:23:39.574232 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 05:23:39.574280 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 05:23:39.574324 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 05:23:39.574371 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 05:23:39.574414 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 05:23:39.574463 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 05:23:39.574507 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 05:23:39.574553 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 05:23:39.574600 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 05:23:39.574646 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 9 05:23:39.574691 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 9 05:23:39.574738 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 05:23:39.574781 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 05:23:39.574829 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 05:23:39.574879 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 05:23:39.574922 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 9 05:23:39.574965 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 05:23:39.575011 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 05:23:39.575054 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 05:23:39.575102 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 05:23:39.575148 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 05:23:39.575195 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 9 05:23:39.575238 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 9 05:23:39.575283 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 05:23:39.575326 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 05:23:39.575375 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 05:23:39.575419 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 05:23:39.575465 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 9 05:23:39.575508 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 9 05:23:39.575553 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 05:23:39.575599 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 05:23:39.575644 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 05:23:39.575686 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 9 05:23:39.575729 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 05:23:39.575771 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 9 05:23:39.575821 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Feb 9 05:23:39.575867 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Feb 9 05:23:39.575910 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 9 05:23:39.575954 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Feb 9 05:23:39.575997 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Feb 9 05:23:39.576040 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 9 05:23:39.576082 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 05:23:39.576125 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 9 05:23:39.576174 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 9 05:23:39.576219 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Feb 9 05:23:39.576263 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 9 05:23:39.576307 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Feb 9 05:23:39.576351 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 9 05:23:39.576395 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 9 05:23:39.576439 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 9 05:23:39.576483 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 9 05:23:39.576526 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 9 05:23:39.576579 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Feb 9 05:23:39.576626 kernel: pci 0000:06:00.0: enabling Extended Tags
Feb 9 05:23:39.576671 kernel: pci 0000:06:00.0: supports D1 D2
Feb 9 05:23:39.576716 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 05:23:39.576760 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 9 05:23:39.576803 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 9 05:23:39.576850 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 9 05:23:39.576898 kernel: pci_bus 0000:07: extended config space not accessible
Feb 9 05:23:39.576949 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Feb 9 05:23:39.576996 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Feb 9 05:23:39.577042 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Feb 9 05:23:39.577088 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Feb 9 05:23:39.577136 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 05:23:39.577183 kernel: pci 0000:07:00.0: supports D1 D2
Feb 9 05:23:39.577230 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 05:23:39.577274 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 9 05:23:39.577318 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 9 05:23:39.577362 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 9 05:23:39.577371 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Feb 9 05:23:39.577376 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Feb 9 05:23:39.577383 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Feb 9 05:23:39.577389 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Feb 9 05:23:39.577395 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Feb 9 05:23:39.577400 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Feb 9 05:23:39.577406 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Feb 9 05:23:39.577411 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Feb 9 05:23:39.577417 kernel: iommu: Default domain type: Translated
Feb 9 05:23:39.577423 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 05:23:39.577469 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Feb 9 05:23:39.577516 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 05:23:39.577623 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Feb 9 05:23:39.577631 kernel: vgaarb: loaded
Feb 9 05:23:39.577637 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 05:23:39.577643 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 05:23:39.577649 kernel: PTP clock support registered
Feb 9 05:23:39.577654 kernel: PCI: Using ACPI for IRQ routing
Feb 9 05:23:39.577660 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 05:23:39.577665 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Feb 9 05:23:39.577672 kernel: e820: reserve RAM buffer [mem 0x820dd000-0x83ffffff]
Feb 9 05:23:39.577678 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Feb 9 05:23:39.577683 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Feb 9 05:23:39.577689 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Feb 9 05:23:39.577694 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Feb 9 05:23:39.577699 kernel: clocksource: Switched to clocksource tsc-early
Feb 9 05:23:39.577705 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 05:23:39.577711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 05:23:39.577716 kernel: pnp: PnP ACPI init
Feb 9 05:23:39.577762 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Feb 9 05:23:39.577805 kernel: pnp 00:02: [dma 0 disabled]
Feb 9 05:23:39.577849 kernel: pnp 00:03: [dma 0 disabled]
Feb 9 05:23:39.577893 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Feb 9 05:23:39.577932 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Feb 9 05:23:39.577974 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Feb 9 05:23:39.578018 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Feb 9 05:23:39.578057 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Feb 9 05:23:39.578096 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Feb 9 05:23:39.578134 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Feb 9 05:23:39.578172 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Feb 9 05:23:39.578209 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Feb 9 05:23:39.578247 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Feb 9 05:23:39.578287 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Feb 9 05:23:39.578328 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Feb 9 05:23:39.578367 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Feb 9 05:23:39.578406 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Feb 9 05:23:39.578445 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Feb 9 05:23:39.578483 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Feb 9 05:23:39.578521 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Feb 9 05:23:39.578561 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Feb 9 05:23:39.578604 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Feb 9 05:23:39.578612 kernel: pnp: PnP ACPI: found 10 devices
Feb 9 05:23:39.578618 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 05:23:39.578624 kernel: NET: Registered PF_INET protocol family
Feb 9 05:23:39.578629 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 05:23:39.578635 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 05:23:39.578640 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 05:23:39.578648 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 05:23:39.578653 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 05:23:39.578659 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Feb 9 05:23:39.578665 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 9 05:23:39.578670 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 9 05:23:39.578676 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 05:23:39.578682 kernel: NET: Registered PF_XDP protocol family
Feb 9 05:23:39.578726 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Feb 9 05:23:39.578770 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Feb 9 05:23:39.578814 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Feb 9 05:23:39.578859 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 9 05:23:39.578905 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 9 05:23:39.578950 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 9 05:23:39.578994 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 9 05:23:39.579037 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 05:23:39.579080 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 9 05:23:39.579125 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 05:23:39.579167 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 9 05:23:39.579210 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 9 05:23:39.579252 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 05:23:39.579296 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 9 05:23:39.579340 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 9 05:23:39.579383 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 9 05:23:39.579425 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 9 05:23:39.579467 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 9 05:23:39.579511 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 9 05:23:39.579555 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 9 05:23:39.579601 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 9 05:23:39.579643 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 9 05:23:39.579686 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 9 05:23:39.579731 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 9 05:23:39.579770 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 9 05:23:39.579807 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 05:23:39.579844 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 05:23:39.579881 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 05:23:39.579918 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Feb 9 05:23:39.579956 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Feb 9 05:23:39.580001 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Feb 9 05:23:39.580043 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 05:23:39.580088 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Feb 9 05:23:39.580128 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Feb 9 05:23:39.580171 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 9 05:23:39.580210 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Feb 9 05:23:39.580254 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Feb 9 05:23:39.580294 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Feb 9 05:23:39.580337 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Feb 9 05:23:39.580379 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Feb 9 05:23:39.580388 kernel: PCI: CLS 64 bytes, default 64
Feb 9 05:23:39.580394 kernel: DMAR: No ATSR found
Feb 9 05:23:39.580399 kernel: DMAR: No SATC found
Feb 9 05:23:39.580405 kernel: DMAR: dmar0: Using Queued invalidation
Feb 9 05:23:39.580448 kernel: pci 0000:00:00.0: Adding to iommu group 0
Feb 9 05:23:39.580494 kernel: pci 0000:00:01.0: Adding to iommu group 1
Feb 9 05:23:39.580538 kernel: pci 0000:00:08.0: Adding to iommu group 2
Feb 9 05:23:39.580583 kernel: pci 0000:00:12.0: Adding to iommu group 3
Feb 9 05:23:39.580626 kernel: pci 0000:00:14.0: Adding to iommu group 4
Feb 9 05:23:39.580670 kernel: pci 0000:00:14.2: Adding to iommu group 4
Feb 9 05:23:39.580713 kernel: pci 0000:00:15.0: Adding to iommu group 5
Feb 9 05:23:39.580755 kernel: pci 0000:00:15.1: Adding to iommu group 5
Feb 9 05:23:39.580798 kernel: pci 0000:00:16.0: Adding to iommu group 6
Feb 9 05:23:39.580842 kernel: pci 0000:00:16.1: Adding to iommu group 6
Feb 9 05:23:39.580885 kernel: pci 0000:00:16.4: Adding to iommu group 6
Feb 9 05:23:39.580927 kernel: pci 0000:00:17.0: Adding to iommu group 7
Feb 9 05:23:39.580971 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Feb 9 05:23:39.581013 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Feb 9 05:23:39.581057 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Feb 9 05:23:39.581099 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Feb 9 05:23:39.581141 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Feb 9 05:23:39.581185 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Feb 9 05:23:39.581228 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Feb 9 05:23:39.581271 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Feb 9 05:23:39.581314 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Feb 9 05:23:39.581358 kernel: pci 0000:01:00.0: Adding to iommu group 1
Feb 9 05:23:39.581402 kernel: pci 0000:01:00.1: Adding to iommu group 1
Feb 9 05:23:39.581447 kernel: pci 0000:03:00.0: Adding to iommu group 15
Feb 9 05:23:39.581491 kernel: pci 0000:04:00.0: Adding to iommu group 16
Feb 9 05:23:39.581538 kernel: pci 0000:06:00.0: Adding to iommu group 17
Feb 9 05:23:39.581587 kernel: pci 0000:07:00.0: Adding to iommu group 17
Feb 9 05:23:39.581595 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Feb 9 05:23:39.581601 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 05:23:39.581607 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Feb 9 05:23:39.581613 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Feb 9 05:23:39.581618 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Feb 9 05:23:39.581624 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Feb 9 05:23:39.581631 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Feb 9 05:23:39.581678 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Feb 9 05:23:39.581686 kernel: Initialise system trusted keyrings
Feb 9 05:23:39.581692 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Feb 9 05:23:39.581697 kernel: Key type asymmetric registered
Feb 9 05:23:39.581703 kernel: Asymmetric key parser 'x509' registered
Feb 9 05:23:39.581708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 05:23:39.581714 kernel: io scheduler mq-deadline registered
Feb 9 05:23:39.581721 kernel: io scheduler kyber registered
Feb 9 05:23:39.581726 kernel: io scheduler bfq registered
Feb 9 05:23:39.581769 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Feb 9 05:23:39.581812 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Feb 9 05:23:39.581856 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Feb 9 05:23:39.581899 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Feb 9 05:23:39.581942 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Feb 9 05:23:39.581985 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Feb 9 05:23:39.582037 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Feb 9 05:23:39.582046 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Feb 9 05:23:39.582052 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Feb 9 05:23:39.582058 kernel: pstore: Registered erst as persistent store backend
Feb 9 05:23:39.582063 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 05:23:39.582069 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 05:23:39.582074 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 05:23:39.582080 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 05:23:39.582087 kernel: hpet_acpi_add: no address or irqs in _CRS
Feb 9 05:23:39.582130 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Feb 9 05:23:39.582138 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 05:23:39.582175 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Feb 9 05:23:39.582216 kernel: rtc_cmos rtc_cmos: registered as rtc0
Feb 9 05:23:39.582255 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T05:23:38 UTC (1707456218)
Feb 9 05:23:39.582294 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Feb 9 05:23:39.582301 kernel: fail to initialize ptp_kvm
Feb 9 05:23:39.582308 kernel: intel_pstate: Intel P-state driver initializing
Feb 9 05:23:39.582314 kernel: intel_pstate: Disabling energy efficiency optimization
Feb 9 05:23:39.582320 kernel: intel_pstate: HWP enabled
Feb 9 05:23:39.582325 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Feb 9 05:23:39.582331 kernel: vesafb: scrolling: redraw
Feb 9 05:23:39.582336 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Feb 9 05:23:39.582342 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000039fe2b58, using 768k, total 768k
Feb 9 05:23:39.582348 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 05:23:39.582353 kernel: fb0: VESA VGA frame buffer device
Feb 9 05:23:39.582360 kernel: NET: Registered PF_INET6 protocol family
Feb 9 05:23:39.582365 kernel: Segment Routing with IPv6
Feb 9 05:23:39.582371 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 05:23:39.582377 kernel: NET: Registered PF_PACKET protocol family
Feb 9 05:23:39.582382 kernel: Key type dns_resolver registered
Feb 9 05:23:39.582388 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Feb 9 05:23:39.582393 kernel: microcode: Microcode Update Driver: v2.2.
Feb 9 05:23:39.582399 kernel: IPI shorthand broadcast: enabled
Feb 9 05:23:39.582404 kernel: sched_clock: Marking stable (1675538644, 1334072934)->(4429876491, -1420264913)
Feb 9 05:23:39.582411 kernel: registered taskstats version 1
Feb 9 05:23:39.582417 kernel: Loading compiled-in X.509 certificates
Feb 9 05:23:39.582422 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 9 05:23:39.582428 kernel: Key type .fscrypt registered
Feb 9 05:23:39.582433 kernel: Key type fscrypt-provisioning registered
Feb 9 05:23:39.582439 kernel: pstore: Using crash dump compression: deflate
Feb 9 05:23:39.582444 kernel: ima: Allocated hash algorithm: sha1
Feb 9 05:23:39.582450 kernel: ima: No architecture policies found
Feb 9 05:23:39.582456 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 05:23:39.582462 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 05:23:39.582468 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 05:23:39.582473 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 05:23:39.582479 kernel: Run /init as init process
Feb 9 05:23:39.582485 kernel: with arguments:
Feb 9 05:23:39.582491 kernel: /init
Feb 9 05:23:39.582496 kernel: with environment:
Feb 9 05:23:39.582502 kernel: HOME=/
Feb 9 05:23:39.582507 kernel: TERM=linux
Feb 9 05:23:39.582513 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 05:23:39.582520 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 05:23:39.582527 systemd[1]: Detected architecture x86-64.
Feb 9 05:23:39.582533 systemd[1]: Running in initrd.
Feb 9 05:23:39.582539 systemd[1]: No hostname configured, using default hostname.
Feb 9 05:23:39.582544 systemd[1]: Hostname set to <localhost>.
Feb 9 05:23:39.582549 systemd[1]: Initializing machine ID from random generator.
Feb 9 05:23:39.582556 systemd[1]: Queued start job for default target initrd.target.
Feb 9 05:23:39.582562 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 05:23:39.582568 systemd[1]: Reached target cryptsetup.target.
Feb 9 05:23:39.582574 systemd[1]: Reached target ignition-diskful-subsequent.target.
Feb 9 05:23:39.582582 systemd[1]: Reached target paths.target.
Feb 9 05:23:39.582587 systemd[1]: Reached target slices.target.
Feb 9 05:23:39.582593 systemd[1]: Reached target swap.target.
Feb 9 05:23:39.582598 systemd[1]: Reached target timers.target.
Feb 9 05:23:39.582606 systemd[1]: Listening on iscsid.socket.
Feb 9 05:23:39.582611 systemd[1]: Listening on iscsiuio.socket.
Feb 9 05:23:39.582617 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 05:23:39.582623 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 05:23:39.582628 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Feb 9 05:23:39.582634 systemd[1]: Listening on systemd-journald.socket.
Feb 9 05:23:39.582640 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Feb 9 05:23:39.582646 kernel: clocksource: Switched to clocksource tsc
Feb 9 05:23:39.582652 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 05:23:39.582658 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 05:23:39.582664 systemd[1]: Reached target sockets.target.
Feb 9 05:23:39.582669 systemd[1]: Starting iscsiuio.service...
Feb 9 05:23:39.582675 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 05:23:39.582681 kernel: SCSI subsystem initialized
Feb 9 05:23:39.582686 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 05:23:39.582692 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 05:23:39.582697 systemd[1]: Starting systemd-journald.service...
Feb 9 05:23:39.582704 systemd[1]: Starting systemd-modules-load.service...
Feb 9 05:23:39.582712 systemd-journald[269]: Journal started
Feb 9 05:23:39.582740 systemd-journald[269]: Runtime Journal (/run/log/journal/cb72b5f0acec45a78b454be0454abbd2) is 8.0M, max 640.1M, 632.1M free.
Feb 9 05:23:39.586207 systemd-modules-load[270]: Inserted module 'overlay'
Feb 9 05:23:39.630664 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 05:23:39.630676 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 05:23:39.661612 kernel: Bridge firewalling registered
Feb 9 05:23:39.661627 systemd[1]: Started iscsiuio.service.
Feb 9 05:23:39.675046 systemd-modules-load[270]: Inserted module 'br_netfilter'
Feb 9 05:23:39.799847 kernel: audit: type=1130 audit(1707456219.681:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.799859 systemd[1]: Started systemd-journald.service.
Feb 9 05:23:39.799867 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 05:23:39.799874 kernel: device-mapper: uevent: version 1.0.3
Feb 9 05:23:39.799880 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 05:23:39.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.790631 systemd-modules-load[270]: Inserted module 'dm_multipath'
Feb 9 05:23:39.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.824926 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 05:23:39.918320 kernel: audit: type=1130 audit(1707456219.824:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.918334 kernel: audit: type=1130 audit(1707456219.875:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.875946 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 05:23:39.969643 kernel: audit: type=1130 audit(1707456219.926:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.926858 systemd[1]: Finished systemd-modules-load.service.
Feb 9 05:23:40.036834 kernel: audit: type=1130 audit(1707456219.983:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:39.983874 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 05:23:40.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.046302 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 05:23:40.092628 kernel: audit: type=1130 audit(1707456220.045:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.092643 systemd[1]: Starting systemd-sysctl.service...
Feb 9 05:23:40.092946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 05:23:40.096456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 05:23:40.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.097077 systemd[1]: Finished systemd-sysctl.service.
Feb 9 05:23:40.216861 kernel: audit: type=1130 audit(1707456220.095:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.216879 kernel: audit: type=1130 audit(1707456220.160:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.160918 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 05:23:40.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.226365 systemd[1]: Starting dracut-cmdline.service...
Feb 9 05:23:40.306693 kernel: audit: type=1130 audit(1707456220.225:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.306707 kernel: iscsi: registered transport (tcp)
Feb 9 05:23:40.306714 dracut-cmdline[292]: dracut-dracut-053
Feb 9 05:23:40.306714 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Feb 9 05:23:40.306714 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 05:23:40.379874 kernel: iscsi: registered transport (qla4xxx)
Feb 9 05:23:40.379889 kernel: QLogic iSCSI HBA Driver
Feb 9 05:23:40.366932 systemd[1]: Finished dracut-cmdline.service.
Feb 9 05:23:40.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.406434 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 05:23:40.420957 systemd[1]: Starting iscsid.service...
Feb 9 05:23:40.435753 systemd[1]: Started iscsid.service.
Feb 9 05:23:40.473646 kernel: raid6: avx2x4 gen() 42257 MB/s
Feb 9 05:23:40.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 05:23:40.473682 iscsid[451]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 05:23:40.473682 iscsid[451]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName.
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 05:23:40.473682 iscsid[451]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 05:23:40.473682 iscsid[451]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 05:23:40.473682 iscsid[451]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 05:23:40.473682 iscsid[451]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 05:23:40.473682 iscsid[451]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 05:23:40.641690 kernel: raid6: avx2x4 xor() 21718 MB/s Feb 9 05:23:40.641702 kernel: raid6: avx2x2 gen() 53550 MB/s Feb 9 05:23:40.641710 kernel: raid6: avx2x2 xor() 32096 MB/s Feb 9 05:23:40.641716 kernel: raid6: avx2x1 gen() 45270 MB/s Feb 9 05:23:40.641723 kernel: raid6: avx2x1 xor() 27775 MB/s Feb 9 05:23:40.683617 kernel: raid6: sse2x4 gen() 21348 MB/s Feb 9 05:23:40.718653 kernel: raid6: sse2x4 xor() 11972 MB/s Feb 9 05:23:40.752583 kernel: raid6: sse2x2 gen() 21843 MB/s Feb 9 05:23:40.786582 kernel: raid6: sse2x2 xor() 13417 MB/s Feb 9 05:23:40.820582 kernel: raid6: sse2x1 gen() 18357 MB/s Feb 9 05:23:40.873087 kernel: raid6: sse2x1 xor() 8934 MB/s Feb 9 05:23:40.873102 kernel: raid6: using algorithm avx2x2 gen() 53550 MB/s Feb 9 05:23:40.873110 kernel: raid6: .... xor() 32096 MB/s, rmw enabled Feb 9 05:23:40.891406 kernel: raid6: using avx2x2 recovery algorithm Feb 9 05:23:40.937634 kernel: xor: automatically using best checksumming function avx Feb 9 05:23:41.016590 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 05:23:41.021934 systemd[1]: Finished dracut-pre-udev.service. Feb 9 05:23:41.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:41.030000 audit: BPF prog-id=6 op=LOAD Feb 9 05:23:41.030000 audit: BPF prog-id=7 op=LOAD Feb 9 05:23:41.031511 systemd[1]: Starting systemd-udevd.service... Feb 9 05:23:41.039617 systemd-udevd[470]: Using default interface naming scheme 'v252'. Feb 9 05:23:41.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:41.044715 systemd[1]: Started systemd-udevd.service. Feb 9 05:23:41.086709 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Feb 9 05:23:41.061183 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 05:23:41.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:41.088403 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 05:23:41.104975 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 05:23:41.183635 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 05:23:41.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:41.184319 systemd[1]: Starting dracut-initqueue.service.
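The iscsid warning above spells out its own fix: create /etc/iscsi/initiatorname.iscsi containing a single InitiatorName line in IQN form. A minimal sketch with hypothetical values (the domain and identifier below are placeholders; the log's own example is iqn.2001-04.com.redhat:fc6):

    # /etc/iscsi/initiatorname.iscsi -- hypothetical initiator name
    InitiatorName=iqn.2024-02.com.example:node1

The subsequent "can't open InitiatorAlias" line shows iscsid reads an optional InitiatorAlias= entry from this same file, so a human-readable alias line can sit alongside the InitiatorName line.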
Feb 9 05:23:41.227654 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 05:23:41.227674 kernel: ACPI: bus type USB registered Feb 9 05:23:41.249477 kernel: usbcore: registered new interface driver usbfs Feb 9 05:23:41.249503 kernel: usbcore: registered new interface driver hub Feb 9 05:23:41.267697 kernel: usbcore: registered new device driver usb Feb 9 05:23:41.295587 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 05:23:41.295621 kernel: libata version 3.00 loaded. Feb 9 05:23:41.295629 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 9 05:23:41.295710 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 05:23:41.348581 kernel: AES CTR mode by8 optimization enabled Feb 9 05:23:41.365581 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 05:23:41.399998 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 9 05:23:41.404583 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 05:23:41.440592 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 05:23:41.440850 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 9 05:23:41.441046 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 05:23:41.441239 kernel: pps pps0: new PPS source ptp0 Feb 9 05:23:41.441466 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 9 05:23:41.441672 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 05:23:41.441732 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0a:d0 Feb 9 05:23:41.441791 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 9 05:23:41.441848 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 05:23:41.441906 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 05:23:41.475580 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 05:23:41.490643 kernel: pps pps1: new PPS source ptp1 Feb 9 05:23:41.493581 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 05:23:41.493667 kernel: scsi host0: ahci Feb 9 05:23:41.493731 kernel: scsi host1: ahci Feb 9 05:23:41.493795 kernel: scsi host2: ahci Feb 9 05:23:41.493856 kernel: scsi host3: ahci Feb 9 05:23:41.493912 kernel: scsi host4: ahci Feb 9 05:23:41.493971 kernel: scsi host5: ahci Feb 9 05:23:41.494032 kernel: scsi host6: ahci Feb 9 05:23:41.494082 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Feb 9 05:23:41.494090 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Feb 9 05:23:41.494100 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Feb 9 05:23:41.494108 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Feb 9 05:23:41.494115 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Feb 9 05:23:41.494121 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Feb 9 05:23:41.494129 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Feb 9 05:23:41.525087 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 9 05:23:41.525158 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 05:23:41.554581 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 05:23:41.554654 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 05:23:41.570622 kernel: xhci_hcd 0000:00:14.0: 
Host supports USB 3.1 Enhanced SuperSpeed Feb 9 05:23:41.570698 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 05:23:41.594084 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0a:d1 Feb 9 05:23:41.594160 kernel: hub 1-0:1.0: USB hub found Feb 9 05:23:41.620985 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 9 05:23:41.621057 kernel: hub 1-0:1.0: 16 ports detected Feb 9 05:23:41.632202 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 05:23:41.653637 kernel: hub 2-0:1.0: USB hub found Feb 9 05:23:41.765601 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 05:23:41.780664 kernel: hub 2-0:1.0: 10 ports detected Feb 9 05:23:41.794580 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 9 05:23:41.802582 kernel: usb: port power management may be unreliable Feb 9 05:23:41.802597 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 05:23:41.802604 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 05:23:41.803618 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 05:23:41.803633 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 05:23:41.803641 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 05:23:41.803647 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 05:23:41.803656 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 05:23:41.803662 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 05:23:41.804641 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 05:23:41.807641 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 05:23:41.807712 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 05:23:41.807720 kernel: ata2.00: Features: NCQ-prio Feb 9 05:23:41.808651 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 05:23:41.808667 kernel: ata1.00: Features: NCQ-prio Feb 9 05:23:41.812644 kernel: ata2.00: configured for UDMA/133 Feb 9 05:23:41.813582 kernel: ata1.00: configured for UDMA/133 Feb 9 05:23:41.813597 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 05:23:41.813673 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 05:23:41.867579 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 9 05:23:42.010584 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 05:23:42.154605 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 05:23:42.283626 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 05:23:42.288609 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 9 05:23:42.288713 kernel: port_module: 9 callbacks suppressed Feb 9 05:23:42.288727 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 9 05:23:42.296472 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 05:23:42.296580 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 05:23:42.296663 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 05:23:42.296734 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 9 05:23:42.296797 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 9 05:23:42.296882 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 05:23:42.296953 kernel: sd 1:0:0:0: [sda] Write cache: enabled, 
read cache: enabled, doesn't support DPO or FUA Feb 9 05:23:42.297015 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 05:23:42.297024 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 05:23:42.297031 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 9 05:23:42.449648 kernel: hub 1-14:1.0: USB hub found Feb 9 05:23:42.449733 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Feb 9 05:23:42.449792 kernel: sd 0:0:0:0: [sdb] Write Protect is off Feb 9 05:23:42.483202 kernel: hub 1-14:1.0: 4 ports detected Feb 9 05:23:42.483278 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 05:23:42.599662 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 05:23:42.613831 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 05:23:42.613848 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 05:23:42.631641 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 05:23:42.660414 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 05:23:42.660431 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Feb 9 05:23:42.723805 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 05:23:42.751745 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (522) Feb 9 05:23:42.730881 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 05:23:42.782893 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 05:23:42.813753 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 05:23:42.806875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 05:23:42.847617 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 05:23:42.831660 systemd[1]: Reached target initrd-root-device.target. Feb 9 05:23:42.875815 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 9 05:23:42.862040 systemd[1]: Starting disk-uuid.service... Feb 9 05:23:42.903665 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 9 05:23:42.887067 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 05:23:43.014705 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 05:23:43.014721 kernel: audit: type=1130 audit(1707456222.915:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.014730 kernel: audit: type=1131 audit(1707456222.915:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.014737 kernel: usbcore: registered new interface driver usbhid Feb 9 05:23:42.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:42.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:42.887108 systemd[1]: Finished disk-uuid.service. Feb 9 05:23:43.083810 kernel: usbhid: USB HID core driver Feb 9 05:23:43.083828 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 05:23:42.916901 systemd[1]: Reached target local-fs-pre.target. 
Feb 9 05:23:43.054678 systemd[1]: Reached target local-fs.target. Feb 9 05:23:43.233019 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 05:23:43.233034 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 05:23:43.233119 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 05:23:43.233128 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 05:23:43.071527 systemd[1]: Reached target sysinit.target. Feb 9 05:23:43.091789 systemd[1]: Reached target basic.target. Feb 9 05:23:43.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.092359 systemd[1]: Starting verity-setup.service... Feb 9 05:23:43.326794 kernel: audit: type=1130 audit(1707456223.256:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.158104 systemd[1]: Found device dev-mapper-usr.device. Feb 9 05:23:43.240791 systemd[1]: Finished dracut-initqueue.service. Feb 9 05:23:43.257083 systemd[1]: Reached target remote-fs-pre.target. Feb 9 05:23:43.311657 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 05:23:43.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.311689 systemd[1]: Reached target remote-fs.target. Feb 9 05:23:43.514772 kernel: audit: type=1130 audit(1707456223.381:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.514789 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 05:23:43.514797 kernel: audit: type=1130 audit(1707456223.443:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.336114 systemd[1]: Mounting sysusr-usr.mount... Feb 9 05:23:43.351033 systemd[1]: Starting dracut-pre-mount.service... Feb 9 05:23:43.365829 systemd[1]: Finished verity-setup.service. Feb 9 05:23:43.381781 systemd[1]: Finished dracut-pre-mount.service. Feb 9 05:23:43.444876 systemd[1]: Starting systemd-fsck-root.service... Feb 9 05:23:43.522096 systemd[1]: Mounted sysusr-usr.mount. Feb 9 05:23:43.532915 systemd-fsck[721]: ROOT: clean, 638/553520 files, 134952/553472 blocks Feb 9 05:23:43.569254 systemd[1]: Finished systemd-fsck-root.service. Feb 9 05:23:43.654233 kernel: audit: type=1130 audit(1707456223.577:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.654247 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). 
Quota mode: none. Feb 9 05:23:43.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.578457 systemd[1]: Mounting sysroot.mount... Feb 9 05:23:43.661226 systemd[1]: Mounted sysroot.mount. Feb 9 05:23:43.674849 systemd[1]: Reached target initrd-root-fs.target. Feb 9 05:23:43.682588 systemd[1]: Mounting sysroot-usr.mount... Feb 9 05:23:43.703417 systemd[1]: Mounted sysroot-usr.mount. Feb 9 05:23:43.719493 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 05:23:43.825867 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 05:23:43.825884 kernel: BTRFS info (device sdb6): using free space tree Feb 9 05:23:43.825892 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 05:23:43.825898 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 05:23:43.737624 systemd[1]: Starting initrd-setup-root.service... Feb 9 05:23:43.833883 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 05:23:43.850883 systemd[1]: Finished initrd-setup-root.service. Feb 9 05:23:43.935821 kernel: audit: type=1130 audit(1707456223.867:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.868234 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 05:23:44.000589 kernel: audit: type=1130 audit(1707456223.945:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:43.928903 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 05:23:44.017839 initrd-setup-root-after-ignition[801]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 05:23:43.945835 systemd[1]: Reached target ignition-subsequent.target. Feb 9 05:23:44.101613 kernel: audit: type=1130 audit(1707456224.046:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.009096 systemd[1]: Starting initrd-parse-etc.service... Feb 9 05:23:44.030427 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 05:23:44.030475 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 05:23:44.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.046813 systemd[1]: Reached target initrd-fs.target. Feb 9 05:23:44.109815 systemd[1]: Reached target initrd.target. Feb 9 05:23:44.109871 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 05:23:44.110208 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 05:23:44.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.130941 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 05:23:44.148191 systemd[1]: Starting initrd-cleanup.service... Feb 9 05:23:44.165554 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 05:23:44.176860 systemd[1]: Stopped target timers.target. Feb 9 05:23:44.190974 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 05:23:44.191213 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 05:23:44.210413 systemd[1]: Stopped target initrd.target. Feb 9 05:23:44.225139 systemd[1]: Stopped target basic.target. Feb 9 05:23:44.239253 systemd[1]: Stopped target ignition-subsequent.target. Feb 9 05:23:44.255131 systemd[1]: Stopped target ignition-diskful-subsequent.target. Feb 9 05:23:44.274136 systemd[1]: Stopped target initrd-root-device.target. Feb 9 05:23:44.290151 systemd[1]: Stopped target paths.target. Feb 9 05:23:44.306144 systemd[1]: Stopped target remote-fs.target. Feb 9 05:23:44.323251 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 05:23:44.339122 systemd[1]: Stopped target slices.target. Feb 9 05:23:44.353129 systemd[1]: Stopped target sockets.target. Feb 9 05:23:44.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.370133 systemd[1]: Stopped target sysinit.target. Feb 9 05:23:44.388133 systemd[1]: Stopped target local-fs.target. Feb 9 05:23:44.405137 systemd[1]: Stopped target local-fs-pre.target. Feb 9 05:23:44.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.420110 systemd[1]: Stopped target swap.target. Feb 9 05:23:44.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.434098 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 05:23:44.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.434329 systemd[1]: Closed iscsid.socket. Feb 9 05:23:44.449147 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 05:23:44.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.449460 systemd[1]: Stopped dracut-pre-mount.service. 
Feb 9 05:23:44.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.465344 systemd[1]: Stopped target cryptsetup.target. Feb 9 05:23:44.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.481028 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 05:23:44.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.485840 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 05:23:44.496010 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 05:23:44.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.496338 systemd[1]: Stopped dracut-initqueue.service. Feb 9 05:23:44.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.512257 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 05:23:44.512600 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 05:23:44.529216 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 05:23:44.529523 systemd[1]: Stopped initrd-setup-root.service. Feb 9 05:23:44.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.545554 systemd[1]: Stopping iscsiuio.service... Feb 9 05:23:44.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.559781 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 05:23:44.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.560114 systemd[1]: Stopped systemd-sysctl.service. Feb 9 05:23:44.575227 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 05:23:44.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.575527 systemd[1]: Stopped systemd-modules-load.service. Feb 9 05:23:44.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.591222 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 05:23:44.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 05:23:44.591529 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 05:23:44.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.607333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 05:23:44.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:44.607667 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 05:23:44.622618 systemd[1]: Stopping systemd-udevd.service... Feb 9 05:23:44.641207 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 05:23:44.641665 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 05:23:44.641714 systemd[1]: Stopped iscsiuio.service. Feb 9 05:23:44.657072 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 05:23:44.657154 systemd[1]: Stopped systemd-udevd.service. Feb 9 05:23:44.675196 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 05:23:44.980855 iscsid[451]: iscsid shutting down. Feb 9 05:23:44.675263 systemd[1]: Closed iscsiuio.socket. Feb 9 05:23:44.687823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 05:23:44.687937 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 05:23:44.705878 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 05:23:44.705966 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 05:23:44.721847 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 05:23:44.721959 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 05:23:44.739970 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 05:23:44.740100 systemd[1]: Stopped dracut-cmdline.service. Feb 9 05:23:44.755967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 05:23:44.756098 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 05:23:44.773541 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 05:23:44.788768 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 05:23:44.788797 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 05:23:44.802889 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 05:23:44.802931 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 05:23:44.822797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 05:23:44.822856 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 05:23:44.842185 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 05:23:44.843483 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 05:23:44.843692 systemd[1]: Finished initrd-cleanup.service. Feb 9 05:23:44.858385 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 9 05:23:44.858575 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 05:23:44.875662 systemd[1]: Reached target initrd-switch-root.target. Feb 9 05:23:44.893509 systemd[1]: Starting initrd-switch-root.service... Feb 9 05:23:44.927001 systemd[1]: Switching root. Feb 9 05:23:44.981371 systemd-journald[269]: Journal stopped Feb 9 05:23:48.996940 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Feb 9 05:23:48.996954 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 05:23:48.996963 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 05:23:48.996969 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 05:23:48.996974 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 05:23:48.996979 kernel: SELinux: policy capability open_perms=1 Feb 9 05:23:48.996984 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 05:23:48.996990 kernel: SELinux: policy capability always_check_network=0 Feb 9 05:23:48.996995 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 05:23:48.997001 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 05:23:48.997006 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 05:23:48.997011 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 05:23:48.997017 systemd[1]: Successfully loaded SELinux policy in 304.928ms. Feb 9 05:23:48.997023 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.944ms. Feb 9 05:23:48.997031 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 05:23:48.997038 systemd[1]: Detected architecture x86-64. Feb 9 05:23:48.997043 systemd[1]: Detected first boot. Feb 9 05:23:48.997049 systemd[1]: Hostname set to <ci-3510.3.2-a-8a9497f9cf>. Feb 9 05:23:48.997055 systemd[1]: Initializing machine ID from random generator. Feb 9 05:23:48.997061 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 05:23:48.997067 systemd[1]: Populated /etc with preset unit settings. Feb 9 05:23:48.997074 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 05:23:48.997080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 05:23:48.997087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 05:23:48.997093 kernel: kauditd_printk_skb: 40 callbacks suppressed Feb 9 05:23:48.997098 kernel: audit: type=1334 audit(1707456227.302:61): prog-id=10 op=LOAD Feb 9 05:23:48.997104 kernel: audit: type=1334 audit(1707456227.302:62): prog-id=3 op=UNLOAD Feb 9 05:23:48.997111 kernel: audit: type=1334 audit(1707456227.344:63): prog-id=11 op=LOAD Feb 9 05:23:48.997116 kernel: audit: type=1334 audit(1707456227.387:64): prog-id=12 op=LOAD Feb 9 05:23:48.997122 systemd[1]: iscsid.service: Deactivated successfully. 
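The two unit-file deprecation warnings above (for locksmithd.service lines 8-9 and docker.socket line 8) each name their replacement directive. A sketch of the corresponding edits, with illustrative values since the units' real settings are not shown in the log; note that CPUShares=1024 and CPUWeight=100 are the matching systemd defaults:

    # /usr/lib/systemd/system/locksmithd.service, [Service] section
    CPUWeight=100      # replaces the deprecated CPUShares= (line 8)
    MemoryMax=512M     # replaces the deprecated MemoryLimit= (line 9); 512M is a placeholder

    # /run/systemd/system/docker.socket, [Socket] section
    ListenStream=/run/docker.sock   # was /var/run/docker.sock; /var/run is a legacy alias for /run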
Feb 9 05:23:48.997128 kernel: audit: type=1334 audit(1707456227.387:65): prog-id=4 op=UNLOAD Feb 9 05:23:48.997133 systemd[1]: Stopped iscsid.service. Feb 9 05:23:48.997139 kernel: audit: type=1334 audit(1707456227.387:66): prog-id=5 op=UNLOAD Feb 9 05:23:48.997145 kernel: audit: type=1131 audit(1707456227.388:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:48.997150 kernel: audit: type=1334 audit(1707456227.537:68): prog-id=10 op=UNLOAD Feb 9 05:23:48.997156 kernel: audit: type=1131 audit(1707456227.548:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:48.997163 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 05:23:48.997169 systemd[1]: Stopped initrd-switch-root.service. Feb 9 05:23:48.997175 kernel: audit: type=1130 audit(1707456227.656:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:48.997181 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 05:23:48.997189 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 05:23:48.997195 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 05:23:48.997202 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 05:23:48.997209 systemd[1]: Created slice system-getty.slice. Feb 9 05:23:48.997215 systemd[1]: Created slice system-modprobe.slice. Feb 9 05:23:48.997221 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 05:23:48.997227 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 05:23:48.997233 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 05:23:48.997240 systemd[1]: Created slice user.slice. Feb 9 05:23:48.997246 systemd[1]: Started systemd-ask-password-console.path. Feb 9 05:23:48.997252 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 05:23:48.997259 systemd[1]: Set up automount boot.automount. Feb 9 05:23:48.997265 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 05:23:48.997272 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 05:23:48.997278 systemd[1]: Stopped target initrd-fs.target. Feb 9 05:23:48.997284 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 05:23:48.997290 systemd[1]: Reached target integritysetup.target. Feb 9 05:23:48.997296 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 05:23:48.997302 systemd[1]: Reached target remote-fs.target. Feb 9 05:23:48.997309 systemd[1]: Reached target slices.target. Feb 9 05:23:48.997316 systemd[1]: Reached target swap.target. Feb 9 05:23:48.997322 systemd[1]: Reached target torcx.target. Feb 9 05:23:48.997328 systemd[1]: Reached target veritysetup.target. Feb 9 05:23:48.997334 systemd[1]: Listening on systemd-coredump.socket. Feb 9 05:23:48.997341 systemd[1]: Listening on systemd-initctl.socket. Feb 9 05:23:48.997348 systemd[1]: Listening on systemd-networkd.socket. Feb 9 05:23:48.997355 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 05:23:48.997361 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 05:23:48.997367 systemd[1]: Listening on systemd-userdbd.socket. 
Feb 9 05:23:48.997374 systemd[1]: Mounting dev-hugepages.mount... Feb 9 05:23:48.997380 systemd[1]: Mounting dev-mqueue.mount... Feb 9 05:23:48.997386 systemd[1]: Mounting media.mount... Feb 9 05:23:48.997393 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 05:23:48.997400 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 05:23:48.997406 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 05:23:48.997413 systemd[1]: Mounting tmp.mount... Feb 9 05:23:48.997419 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 05:23:48.997426 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 05:23:48.997432 systemd[1]: Starting kmod-static-nodes.service... Feb 9 05:23:48.997438 systemd[1]: Starting modprobe@configfs.service... Feb 9 05:23:48.997444 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 05:23:48.997451 systemd[1]: Starting modprobe@drm.service... Feb 9 05:23:48.997458 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 05:23:48.997464 systemd[1]: Starting modprobe@fuse.service... Feb 9 05:23:48.997470 kernel: fuse: init (API version 7.34) Feb 9 05:23:48.997476 systemd[1]: Starting modprobe@loop.service... Feb 9 05:23:48.997482 kernel: loop: module loaded Feb 9 05:23:48.997488 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 05:23:48.997495 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 05:23:48.997501 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 05:23:48.997508 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 05:23:48.997515 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 05:23:48.997521 systemd[1]: Stopped systemd-journald.service. Feb 9 05:23:48.997527 systemd[1]: Starting systemd-journald.service... Feb 9 05:23:48.997534 systemd[1]: Starting systemd-modules-load.service... Feb 9 05:23:48.997542 systemd-journald[943]: Journal started Feb 9 05:23:48.997567 systemd-journald[943]: Runtime Journal (/run/log/journal/ac3eb69a92304cd0a10b2a7091309e90) is 8.0M, max 640.1M, 632.1M free. 
Feb 9 05:23:45.432000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 05:23:45.697000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 05:23:45.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 05:23:45.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 05:23:45.699000 audit: BPF prog-id=8 op=LOAD Feb 9 05:23:45.699000 audit: BPF prog-id=8 op=UNLOAD Feb 9 05:23:45.700000 audit: BPF prog-id=9 op=LOAD Feb 9 05:23:45.700000 audit: BPF prog-id=9 op=UNLOAD Feb 9 05:23:45.767000 audit[835]: AVC avc: denied { associate } for pid=835 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 05:23:45.767000 audit[835]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=818 pid=835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 05:23:45.767000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 05:23:45.792000 audit[835]: AVC avc: denied { associate } for pid=835 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 05:23:45.792000 audit[835]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59b9 a2=1ed a3=0 items=2 ppid=818 pid=835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 05:23:45.792000 audit: CWD cwd="/" Feb 9 05:23:45.792000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:45.792000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:45.792000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 05:23:47.302000 audit: BPF prog-id=10 op=LOAD Feb 9 05:23:47.302000 audit: BPF prog-id=3 op=UNLOAD Feb 9 05:23:47.344000 audit: BPF prog-id=11 op=LOAD Feb 9 05:23:47.387000 audit: BPF prog-id=12 op=LOAD Feb 9 05:23:47.387000 audit: 
BPF prog-id=4 op=UNLOAD Feb 9 05:23:47.387000 audit: BPF prog-id=5 op=UNLOAD Feb 9 05:23:47.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:47.537000 audit: BPF prog-id=10 op=UNLOAD Feb 9 05:23:47.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:47.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:47.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:48.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:48.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:48.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:48.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:48.970000 audit: BPF prog-id=13 op=LOAD Feb 9 05:23:48.970000 audit: BPF prog-id=14 op=LOAD Feb 9 05:23:48.970000 audit: BPF prog-id=15 op=LOAD Feb 9 05:23:48.970000 audit: BPF prog-id=11 op=UNLOAD Feb 9 05:23:48.970000 audit: BPF prog-id=12 op=UNLOAD Feb 9 05:23:48.994000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 05:23:48.994000 audit[943]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe749d4860 a2=4000 a3=7ffe749d48fc items=0 ppid=1 pid=943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 05:23:48.994000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 05:23:45.765864 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 05:23:47.301057 systemd[1]: Queued start job for default target multi-user.target. 
Feb 9 05:23:45.766312 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 05:23:47.301064 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Feb 9 05:23:45.766326 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 05:23:47.388645 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 05:23:45.766349 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 05:23:45.766356 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 05:23:45.766377 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 05:23:45.766386 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 05:23:45.766761 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 05:23:45.766790 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 05:23:45.766799 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 05:23:45.767237 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 05:23:45.767261 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 05:23:45.767274 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 05:23:45.767284 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 05:23:45.767295 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 05:23:45.767305 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 05:23:46.956758 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:46Z" level=debug msg="image unpacked" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 05:23:46.956904 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:46Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 05:23:46.956959 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:46Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 05:23:46.957051 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:46Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 05:23:46.957081 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:46Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 05:23:46.957116 /usr/lib/systemd/system-generators/torcx-generator[835]: time="2024-02-09T05:23:46Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 05:23:49.028768 systemd[1]: Starting systemd-network-generator.service... Feb 9 05:23:49.050626 systemd[1]: Starting systemd-remount-fs.service... Feb 9 05:23:49.071619 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 05:23:49.105075 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 05:23:49.105096 systemd[1]: Stopped verity-setup.service. Feb 9 05:23:49.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.139625 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 05:23:49.153624 systemd[1]: Started systemd-journald.service. Feb 9 05:23:49.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.162095 systemd[1]: Mounted dev-hugepages.mount. Feb 9 05:23:49.169843 systemd[1]: Mounted dev-mqueue.mount. Feb 9 05:23:49.176842 systemd[1]: Mounted media.mount. Feb 9 05:23:49.183841 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 05:23:49.192818 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 05:23:49.201802 systemd[1]: Mounted tmp.mount. Feb 9 05:23:49.208884 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 05:23:49.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 05:23:49.216913 systemd[1]: Finished kmod-static-nodes.service. Feb 9 05:23:49.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.225940 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 05:23:49.226057 systemd[1]: Finished modprobe@configfs.service. Feb 9 05:23:49.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.235183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 05:23:49.235358 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 05:23:49.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.245286 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 05:23:49.245533 systemd[1]: Finished modprobe@drm.service. Feb 9 05:23:49.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.254428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 05:23:49.254748 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 05:23:49.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.264440 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 05:23:49.264755 systemd[1]: Finished modprobe@fuse.service. Feb 9 05:23:49.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.273389 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 9 05:23:49.273704 systemd[1]: Finished modprobe@loop.service. Feb 9 05:23:49.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.282412 systemd[1]: Finished systemd-modules-load.service. Feb 9 05:23:49.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.291385 systemd[1]: Finished systemd-network-generator.service. Feb 9 05:23:49.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.300356 systemd[1]: Finished systemd-remount-fs.service. Feb 9 05:23:49.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.309370 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 05:23:49.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.318957 systemd[1]: Reached target network-pre.target. Feb 9 05:23:49.330434 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 05:23:49.341238 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 05:23:49.347827 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 05:23:49.351068 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 05:23:49.360074 systemd[1]: Starting systemd-journal-flush.service... Feb 9 05:23:49.368858 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 05:23:49.371195 systemd[1]: Starting systemd-random-seed.service... Feb 9 05:23:49.371921 systemd-journald[943]: Time spent on flushing to /var/log/journal/ac3eb69a92304cd0a10b2a7091309e90 is 10.988ms for 1266 entries. Feb 9 05:23:49.371921 systemd-journald[943]: System Journal (/var/log/journal/ac3eb69a92304cd0a10b2a7091309e90) is 8.0M, max 195.6M, 187.6M free. Feb 9 05:23:49.402516 systemd-journald[943]: Received client request to flush runtime journal. Feb 9 05:23:49.386735 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 05:23:49.387200 systemd[1]: Starting systemd-sysctl.service... Feb 9 05:23:49.397243 systemd[1]: Starting systemd-sysusers.service... Feb 9 05:23:49.405200 systemd[1]: Starting systemd-udev-settle.service... Feb 9 05:23:49.412717 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 05:23:49.421734 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 05:23:49.429775 systemd[1]: Finished systemd-journal-flush.service. 
Feb 9 05:23:49.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.437776 systemd[1]: Finished systemd-random-seed.service. Feb 9 05:23:49.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.445780 systemd[1]: Finished systemd-sysctl.service. Feb 9 05:23:49.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.453779 systemd[1]: Finished systemd-sysusers.service. Feb 9 05:23:49.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.462710 systemd[1]: Reached target first-boot-complete.target. Feb 9 05:23:49.471295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 05:23:49.480785 udevadm[960]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 05:23:49.490797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 05:23:49.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.673860 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 05:23:49.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.683000 audit: BPF prog-id=16 op=LOAD Feb 9 05:23:49.684000 audit: BPF prog-id=17 op=LOAD Feb 9 05:23:49.684000 audit: BPF prog-id=6 op=UNLOAD Feb 9 05:23:49.684000 audit: BPF prog-id=7 op=UNLOAD Feb 9 05:23:49.684911 systemd[1]: Starting systemd-udevd.service... Feb 9 05:23:49.696169 systemd-udevd[963]: Using default interface naming scheme 'v252'. Feb 9 05:23:49.715911 systemd[1]: Started systemd-udevd.service. Feb 9 05:23:49.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.726149 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 9 05:23:49.726000 audit: BPF prog-id=18 op=LOAD Feb 9 05:23:49.727537 systemd[1]: Starting systemd-networkd.service... 
Feb 9 05:23:49.761723 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 05:23:49.761794 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 05:23:49.779123 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 05:23:49.778000 audit: BPF prog-id=19 op=LOAD Feb 9 05:23:49.778000 audit: BPF prog-id=20 op=LOAD Feb 9 05:23:49.778000 audit: BPF prog-id=21 op=LOAD Feb 9 05:23:49.779583 kernel: IPMI message handler: version 39.2 Feb 9 05:23:49.779706 systemd[1]: Starting systemd-userdbd.service... Feb 9 05:23:49.789584 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 05:23:49.789626 kernel: ACPI: button: Power Button [PWRF] Feb 9 05:23:49.768000 audit[977]: AVC avc: denied { confidentiality } for pid=977 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 05:23:49.832451 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 05:23:49.847871 systemd[1]: Started systemd-userdbd.service. Feb 9 05:23:49.859586 kernel: ipmi device interface Feb 9 05:23:49.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:49.768000 audit[977]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c7afec7240 a1=4d8bc a2=7f7994e9cbc5 a3=5 items=42 ppid=963 pid=977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 05:23:49.768000 audit: CWD cwd="/" Feb 9 05:23:49.768000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=1 name=(null) inode=22294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=2 name=(null) inode=22294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=3 name=(null) inode=22295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=4 name=(null) inode=22294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=5 name=(null) inode=22296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=6 name=(null) inode=22294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=7 name=(null) inode=22297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=8 name=(null) 
inode=22297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=9 name=(null) inode=22298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=10 name=(null) inode=22297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=11 name=(null) inode=22299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=12 name=(null) inode=22297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=13 name=(null) inode=22300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=14 name=(null) inode=22297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=15 name=(null) inode=22301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=16 name=(null) inode=22297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=17 name=(null) inode=22302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=18 name=(null) inode=22294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=19 name=(null) inode=22303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=20 name=(null) inode=22303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=21 name=(null) inode=22304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=22 name=(null) inode=22303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=23 name=(null) inode=22305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=24 name=(null) inode=22303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=25 name=(null) inode=22306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=26 name=(null) inode=22303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=27 name=(null) inode=22307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=28 name=(null) inode=22303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=29 name=(null) inode=22308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=30 name=(null) inode=22294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=31 name=(null) inode=22309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=32 name=(null) inode=22309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=33 name=(null) inode=22310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=34 name=(null) inode=22309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=35 name=(null) inode=22311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=36 name=(null) inode=22309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=37 name=(null) inode=22312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=38 name=(null) inode=22309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=39 name=(null) inode=22313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=40 name=(null) inode=22309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 05:23:49.768000 audit: PATH item=41 name=(null) inode=22314 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 05:23:49.768000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 05:23:49.894229 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 05:23:49.894394 kernel: ipmi_si: IPMI System Interface driver Feb 9 05:23:49.922851 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 05:23:49.923020 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 05:23:49.954382 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 05:23:49.954473 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 05:23:49.987544 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 05:23:50.003416 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 05:23:50.003518 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 05:23:50.038581 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 05:23:50.038678 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 05:23:50.089088 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 05:23:50.089116 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 05:23:50.089189 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 05:23:50.104629 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 05:23:50.105817 systemd-networkd[1007]: bond0: netdev ready Feb 9 05:23:50.107856 systemd-networkd[1007]: lo: Link UP Feb 9 05:23:50.107859 systemd-networkd[1007]: lo: Gained carrier Feb 9 05:23:50.108161 systemd-networkd[1007]: Enumeration completed Feb 9 05:23:50.108249 systemd[1]: Started systemd-networkd.service. Feb 9 05:23:50.108440 systemd-networkd[1007]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 9 05:23:50.109290 systemd-networkd[1007]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:74:e9.network. Feb 9 05:23:50.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:50.166507 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 9 05:23:50.166608 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 9 05:23:50.166666 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 05:23:50.243262 kernel: intel_rapl_common: Found RAPL domain package Feb 9 05:23:50.243309 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 9 05:23:50.243391 kernel: intel_rapl_common: Found RAPL domain core Feb 9 05:23:50.275935 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 05:23:50.275959 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 05:23:50.314597 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 9 05:23:50.314830 systemd-networkd[1007]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:74:e8.network. 
Feb 9 05:23:50.336583 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 05:23:50.336616 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 05:23:50.371586 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 05:23:50.468644 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 05:23:50.538638 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 05:23:50.563613 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 9 05:23:50.584652 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 05:23:50.592942 systemd-networkd[1007]: bond0: Link UP Feb 9 05:23:50.593231 systemd-networkd[1007]: enp1s0f1np1: Link UP Feb 9 05:23:50.593439 systemd-networkd[1007]: enp1s0f0np0: Link UP Feb 9 05:23:50.593613 systemd-networkd[1007]: enp1s0f1np1: Gained carrier Feb 9 05:23:50.594922 systemd-networkd[1007]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:74:e8.network. Feb 9 05:23:50.634753 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 05:23:50.634776 kernel: bond0: active interface up! Feb 9 05:23:50.662580 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 9 05:23:50.699633 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 05:23:50.701819 systemd[1]: Finished systemd-udev-settle.service. Feb 9 05:23:50.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:50.710323 systemd[1]: Starting lvm2-activation-early.service... Feb 9 05:23:50.726243 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 05:23:50.757005 systemd[1]: Finished lvm2-activation-early.service. Feb 9 05:23:50.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:50.773352 systemd[1]: Reached target cryptsetup.target. Feb 9 05:23:50.785610 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.803309 systemd[1]: Starting lvm2-activation.service... Feb 9 05:23:50.805201 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 05:23:50.807611 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.829652 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.852632 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.874641 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.896612 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.897992 systemd[1]: Finished lvm2-activation.service. Feb 9 05:23:50.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:50.914713 systemd[1]: Reached target local-fs-pre.target. 
Feb 9 05:23:50.919620 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.935684 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 05:23:50.935698 systemd[1]: Reached target local-fs.target. Feb 9 05:23:50.942630 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.958655 systemd[1]: Reached target machines.target. Feb 9 05:23:50.964636 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:50.981302 systemd[1]: Starting ldconfig.service... Feb 9 05:23:50.987629 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.003596 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 05:23:51.003619 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 05:23:51.004223 systemd[1]: Starting systemd-boot-update.service... Feb 9 05:23:51.010581 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.026278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 05:23:51.033649 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.052181 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 05:23:51.054086 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 05:23:51.054106 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 05:23:51.054582 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.054612 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 05:23:51.054826 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Feb 9 05:23:51.055690 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 05:23:51.065491 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 05:23:51.067554 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 05:23:51.069698 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 05:23:51.075585 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.092020 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 05:23:51.095584 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:51.096185 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 05:23:51.096461 systemd[1]: Finished systemd-machine-id-commit.service. 
Feb 9 05:23:51.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:51.116581 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.136627 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.137542 systemd-networkd[1007]: bond0: Gained carrier Feb 9 05:23:51.137655 systemd-networkd[1007]: enp1s0f0np0: Gained carrier Feb 9 05:23:51.160195 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) Feb 9 05:23:51.160195 systemd-fsck[1079]: /dev/sdb1: 789 files, 115332/258078 clusters Feb 9 05:23:51.162275 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 05:23:51.170248 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 05:23:51.170279 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Feb 9 05:23:51.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:51.179924 systemd-networkd[1007]: enp1s0f1np1: Link DOWN Feb 9 05:23:51.179927 systemd-networkd[1007]: enp1s0f1np1: Lost carrier Feb 9 05:23:51.181374 systemd[1]: Mounting boot.mount... Feb 9 05:23:51.235537 systemd[1]: Mounted boot.mount. Feb 9 05:23:51.260084 systemd[1]: Finished systemd-boot-update.service. Feb 9 05:23:51.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:51.295382 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 05:23:51.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 05:23:51.304459 systemd[1]: Starting audit-rules.service... Feb 9 05:23:51.311186 systemd[1]: Starting clean-ca-certificates.service... Feb 9 05:23:51.313057 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 05:23:51.321962 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 05:23:51.325000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 05:23:51.325000 audit[1104]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb1b459a0 a2=420 a3=0 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 05:23:51.325000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 05:23:51.326684 augenrules[1104]: No rules Feb 9 05:23:51.330627 systemd[1]: Starting systemd-resolved.service... Feb 9 05:23:51.338795 systemd[1]: Starting systemd-timesyncd.service... Feb 9 05:23:51.347354 systemd[1]: Starting systemd-update-utmp.service... Feb 9 05:23:51.355034 systemd[1]: Finished ldconfig.service. 
Feb 9 05:23:51.362936 systemd[1]: Finished audit-rules.service. Feb 9 05:23:51.374057 systemd[1]: Finished clean-ca-certificates.service. Feb 9 05:23:51.382622 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 05:23:51.397982 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 05:23:51.399630 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Feb 9 05:23:51.399704 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Feb 9 05:23:51.414588 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Feb 9 05:23:51.415523 systemd-networkd[1007]: enp1s0f1np1: Link UP Feb 9 05:23:51.415756 systemd-networkd[1007]: enp1s0f1np1: Gained carrier Feb 9 05:23:51.447643 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 05:23:51.449439 systemd[1]: Starting systemd-update-done.service... Feb 9 05:23:51.456663 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 05:23:51.457045 systemd[1]: Finished systemd-update-done.service. Feb 9 05:23:51.466317 systemd[1]: Finished systemd-update-utmp.service. Feb 9 05:23:51.474991 systemd[1]: Started systemd-timesyncd.service. Feb 9 05:23:51.476309 systemd-resolved[1109]: Positive Trust Anchors: Feb 9 05:23:51.476314 systemd-resolved[1109]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 05:23:51.476332 systemd-resolved[1109]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 05:23:51.482865 systemd[1]: Reached target time-set.target. Feb 9 05:23:51.495284 systemd-resolved[1109]: Using system hostname 'ci-3510.3.2-a-8a9497f9cf'. Feb 9 05:23:51.496362 systemd[1]: Started systemd-resolved.service. Feb 9 05:23:51.504690 systemd[1]: Reached target network.target. Feb 9 05:23:51.513660 systemd[1]: Reached target nss-lookup.target. Feb 9 05:23:51.522674 systemd[1]: Reached target sysinit.target. Feb 9 05:23:51.530701 systemd[1]: Started motdgen.path. Feb 9 05:23:51.537664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 05:23:51.547730 systemd[1]: Started logrotate.timer. Feb 9 05:23:51.554689 systemd[1]: Started mdadm.timer. Feb 9 05:23:51.561658 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 05:23:51.569655 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 05:23:51.569671 systemd[1]: Reached target paths.target. Feb 9 05:23:51.576652 systemd[1]: Reached target timers.target. Feb 9 05:23:51.583764 systemd[1]: Listening on dbus.socket. Feb 9 05:23:51.591191 systemd[1]: Starting docker.socket... Feb 9 05:23:51.599137 systemd[1]: Listening on sshd.socket. Feb 9 05:23:51.605719 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 05:23:51.605933 systemd[1]: Listening on docker.socket. Feb 9 05:23:51.612709 systemd[1]: Reached target sockets.target. 
Feb 9 05:23:51.620653 systemd[1]: Reached target basic.target. Feb 9 05:23:51.627675 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 05:23:51.627689 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 05:23:51.628129 systemd[1]: Starting containerd.service... Feb 9 05:23:51.635068 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 05:23:51.644141 systemd[1]: Starting coreos-metadata.service... Feb 9 05:23:51.651203 systemd[1]: Starting dbus.service... Feb 9 05:23:51.657108 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 05:23:51.663002 jq[1122]: false Feb 9 05:23:51.664468 systemd[1]: Starting extend-filesystems.service... Feb 9 05:23:51.668624 systemd-networkd[1007]: bond0: Gained IPv6LL Feb 9 05:23:51.668832 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 9 05:23:51.671653 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 05:23:51.672423 systemd[1]: Starting motdgen.service... Feb 9 05:23:51.673232 extend-filesystems[1125]: Found sda Feb 9 05:23:51.692174 extend-filesystems[1125]: Found sdb Feb 9 05:23:51.692174 extend-filesystems[1125]: Found sdb1 Feb 9 05:23:51.692174 extend-filesystems[1125]: Found sdb2 Feb 9 05:23:51.692174 extend-filesystems[1125]: Found sdb3 Feb 9 05:23:51.692174 extend-filesystems[1125]: Found usr Feb 9 05:23:51.692174 extend-filesystems[1125]: Found sdb4 Feb 9 05:23:51.692174 extend-filesystems[1125]: Found sdb6 Feb 9 05:23:51.692174 extend-filesystems[1125]: Found sdb7 Feb 9 05:23:51.692174 extend-filesystems[1125]: Found sdb9 Feb 9 05:23:51.692174 extend-filesystems[1125]: Checking size of /dev/sdb9 Feb 9 05:23:51.692174 extend-filesystems[1125]: Resized partition /dev/sdb9 Feb 9 05:23:51.821686 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 9 05:23:51.673726 dbus-daemon[1121]: [system] SELinux support is enabled Feb 9 05:23:51.679236 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 05:23:51.821876 extend-filesystems[1139]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 05:23:51.836657 coreos-metadata[1117]: Feb 09 05:23:51.702 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 05:23:51.836770 coreos-metadata[1118]: Feb 09 05:23:51.702 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 05:23:51.705721 systemd[1]: Starting prepare-critools.service... Feb 9 05:23:51.721368 systemd[1]: Starting prepare-helm.service... Feb 9 05:23:51.740267 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 05:23:51.760206 systemd[1]: Starting sshd-keygen.service... Feb 9 05:23:51.787905 systemd[1]: Starting systemd-logind.service... Feb 9 05:23:51.800617 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 05:23:51.801201 systemd[1]: Starting tcsd.service... Feb 9 05:23:51.808469 systemd-logind[1154]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 05:23:51.837422 jq[1157]: true Feb 9 05:23:51.808479 systemd-logind[1154]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 05:23:51.808488 systemd-logind[1154]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 05:23:51.808585 systemd-logind[1154]: New seat seat0. 
Feb 9 05:23:51.814089 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 05:23:51.814432 systemd[1]: Starting update-engine.service... Feb 9 05:23:51.829238 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 05:23:51.844968 systemd[1]: Started dbus.service. Feb 9 05:23:51.853325 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 05:23:51.853410 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 05:23:51.853585 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 05:23:51.853662 systemd[1]: Finished motdgen.service. Feb 9 05:23:51.856672 update_engine[1156]: I0209 05:23:51.856297 1156 main.cc:92] Flatcar Update Engine starting Feb 9 05:23:51.859320 update_engine[1156]: I0209 05:23:51.859309 1156 update_check_scheduler.cc:74] Next update check in 4m18s Feb 9 05:23:51.861723 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 05:23:51.861807 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 05:23:51.867880 tar[1159]: ./ Feb 9 05:23:51.867880 tar[1159]: ./loopback Feb 9 05:23:51.872217 tar[1161]: linux-amd64/helm Feb 9 05:23:51.872360 jq[1165]: false Feb 9 05:23:51.872535 dbus-daemon[1121]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 05:23:51.873321 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Feb 9 05:23:51.873402 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped. Feb 9 05:23:51.873564 tar[1160]: crictl Feb 9 05:23:51.877632 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 05:23:51.877731 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 05:23:51.877801 systemd[1]: Started systemd-logind.service. Feb 9 05:23:51.882890 env[1166]: time="2024-02-09T05:23:51.882862245Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 05:23:51.889475 systemd[1]: Started update-engine.service. Feb 9 05:23:51.891021 tar[1159]: ./bandwidth Feb 9 05:23:51.896556 env[1166]: time="2024-02-09T05:23:51.896529598Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 05:23:51.896693 env[1166]: time="2024-02-09T05:23:51.896647932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897358 env[1166]: time="2024-02-09T05:23:51.897317230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897358 env[1166]: time="2024-02-09T05:23:51.897332277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897458 env[1166]: time="2024-02-09T05:23:51.897445023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897484 env[1166]: time="2024-02-09T05:23:51.897459766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897484 env[1166]: time="2024-02-09T05:23:51.897470320Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 05:23:51.897484 env[1166]: time="2024-02-09T05:23:51.897476146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897529 env[1166]: time="2024-02-09T05:23:51.897519439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897833 env[1166]: time="2024-02-09T05:23:51.897716749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897833 env[1166]: time="2024-02-09T05:23:51.897789700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 05:23:51.897833 env[1166]: time="2024-02-09T05:23:51.897799763Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 05:23:51.897833 env[1166]: time="2024-02-09T05:23:51.897827319Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 05:23:51.897903 env[1166]: time="2024-02-09T05:23:51.897838888Z" level=info msg="metadata content store policy set" policy=shared Feb 9 05:23:51.899339 systemd[1]: Started locksmithd.service. Feb 9 05:23:51.906701 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 05:23:51.906788 systemd[1]: Reached target system-config.target. Feb 9 05:23:51.915663 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 05:23:51.915762 systemd[1]: Reached target user-config.target. Feb 9 05:23:51.919163 env[1166]: time="2024-02-09T05:23:51.919114001Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 05:23:51.919163 env[1166]: time="2024-02-09T05:23:51.919137289Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 05:23:51.919163 env[1166]: time="2024-02-09T05:23:51.919145913Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919167157Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919176706Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919184309Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919191893Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919199938Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919207425Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919214925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919221586Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 05:23:51.919238 env[1166]: time="2024-02-09T05:23:51.919228734Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 05:23:51.919363 env[1166]: time="2024-02-09T05:23:51.919291914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 05:23:51.919363 env[1166]: time="2024-02-09T05:23:51.919340477Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 05:23:51.919484 env[1166]: time="2024-02-09T05:23:51.919476599Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 05:23:51.919503 env[1166]: time="2024-02-09T05:23:51.919492418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919503 env[1166]: time="2024-02-09T05:23:51.919500187Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 05:23:51.919534 env[1166]: time="2024-02-09T05:23:51.919525083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919550 env[1166]: time="2024-02-09T05:23:51.919532908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919550 env[1166]: time="2024-02-09T05:23:51.919540164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919550 env[1166]: time="2024-02-09T05:23:51.919546307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919596 env[1166]: time="2024-02-09T05:23:51.919552914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919596 env[1166]: time="2024-02-09T05:23:51.919559369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919596 env[1166]: time="2024-02-09T05:23:51.919565890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919596 env[1166]: time="2024-02-09T05:23:51.919572393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919596 env[1166]: time="2024-02-09T05:23:51.919584177Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 05:23:51.919673 env[1166]: time="2024-02-09T05:23:51.919648309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919673 env[1166]: time="2024-02-09T05:23:51.919659582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 9 05:23:51.919673 env[1166]: time="2024-02-09T05:23:51.919666690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919719 env[1166]: time="2024-02-09T05:23:51.919673305Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 05:23:51.919719 env[1166]: time="2024-02-09T05:23:51.919681730Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 05:23:51.919719 env[1166]: time="2024-02-09T05:23:51.919688355Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 05:23:51.919719 env[1166]: time="2024-02-09T05:23:51.919697911Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 05:23:51.919782 env[1166]: time="2024-02-09T05:23:51.919719118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 05:23:51.919906 env[1166]: time="2024-02-09T05:23:51.919847787Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 05:23:51.919906 env[1166]: time="2024-02-09T05:23:51.919880575Z" level=info msg="Connect containerd service" Feb 9 05:23:51.919906 env[1166]: time="2024-02-09T05:23:51.919900244Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 
05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920180837Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920274409Z" level=info msg="Start subscribing containerd event" Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920305503Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920315132Z" level=info msg="Start recovering state" Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920326718Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920348863Z" level=info msg="containerd successfully booted in 0.037933s" Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920357589Z" level=info msg="Start event monitor" Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920365776Z" level=info msg="Start snapshots syncer" Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920372845Z" level=info msg="Start cni network conf syncer for default" Feb 9 05:23:51.923214 env[1166]: time="2024-02-09T05:23:51.920379584Z" level=info msg="Start streaming server" Feb 9 05:23:51.925466 systemd[1]: Started containerd.service. Feb 9 05:23:51.925725 tar[1159]: ./ptp Feb 9 05:23:51.955900 tar[1159]: ./vlan Feb 9 05:23:51.985723 tar[1159]: ./host-device Feb 9 05:23:51.993706 locksmithd[1188]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 05:23:52.011732 tar[1159]: ./tuning Feb 9 05:23:52.035246 tar[1159]: ./vrf Feb 9 05:23:52.060616 tar[1159]: ./sbr Feb 9 05:23:52.084243 tar[1159]: ./tap Feb 9 05:23:52.110977 tar[1159]: ./dhcp Feb 9 05:23:52.163261 tar[1161]: linux-amd64/LICENSE Feb 9 05:23:52.163261 tar[1161]: linux-amd64/README.md Feb 9 05:23:52.165156 systemd[1]: Finished prepare-helm.service. Feb 9 05:23:52.181343 tar[1159]: ./static Feb 9 05:23:52.193213 systemd[1]: Finished prepare-critools.service. Feb 9 05:23:52.200042 tar[1159]: ./firewall Feb 9 05:23:52.230856 tar[1159]: ./macvlan Feb 9 05:23:52.257714 tar[1159]: ./dummy Feb 9 05:23:52.284290 tar[1159]: ./bridge Feb 9 05:23:52.312563 tar[1159]: ./ipvlan Feb 9 05:23:52.338175 tar[1159]: ./portmap Feb 9 05:23:52.363773 tar[1159]: ./host-local Feb 9 05:23:52.389114 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 05:23:52.464634 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 9 05:23:52.492637 extend-filesystems[1139]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 9 05:23:52.492637 extend-filesystems[1139]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 05:23:52.492637 extend-filesystems[1139]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 9 05:23:52.530678 extend-filesystems[1125]: Resized filesystem in /dev/sdb9 Feb 9 05:23:52.545664 sshd_keygen[1153]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 05:23:52.493015 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 05:23:52.493090 systemd[1]: Finished extend-filesystems.service. Feb 9 05:23:52.530326 systemd[1]: Finished sshd-keygen.service. Feb 9 05:23:52.531468 systemd[1]: Starting issuegen.service... Feb 9 05:23:52.553962 systemd[1]: issuegen.service: Deactivated successfully. 
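The "no network config found in /etc/cni/net.d" error above is expected at this point in boot: prepare-cni-plugins.service has only just unpacked the plugin binaries (bridge, host-local, portmap, ...), nothing has written a network definition yet, and the CRI plugin's "cni network conf syncer" keeps watching until one appears. As a rough sketch of what would satisfy the loader, assuming the stock bridge/host-local/portmap plugins from the tarball above (the network name, file name, and 10.88.0.0/16 subnet are illustrative assumptions, not values from this host):

```python
import json
import pathlib

# Hypothetical minimal CNI network list for /etc/cni/net.d. containerd's
# CRI plugin (NetworkPluginConfDir in the config dump above) loads the
# lexically first *.conflist it finds, so the "10-" prefix orders it early.
conflist = {
    "cniVersion": "0.4.0",
    "name": "mynet",                       # assumed name
    "plugins": [
        {
            "type": "bridge",              # binary unpacked by prepare-cni-plugins
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],   # assumed subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-mynet.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
```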
Feb 9 05:23:52.554044 systemd[1]: Finished issuegen.service. Feb 9 05:23:52.562464 systemd[1]: Starting systemd-user-sessions.service... Feb 9 05:23:52.571836 systemd[1]: Finished systemd-user-sessions.service. Feb 9 05:23:52.581359 systemd[1]: Started getty@tty1.service. Feb 9 05:23:52.590260 systemd[1]: Started serial-getty@ttyS1.service. Feb 9 05:23:52.598836 systemd[1]: Reached target getty.target. Feb 9 05:23:52.693833 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 9 05:23:52.694297 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 9 05:23:52.754810 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 05:23:52.839837 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:1 Feb 9 05:23:53.683636 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 9 05:23:57.626492 login[1216]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 05:23:57.627920 login[1217]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 05:23:57.634859 systemd[1]: Created slice user-500.slice. Feb 9 05:23:57.635378 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 05:23:57.636366 systemd-logind[1154]: New session 1 of user core. Feb 9 05:23:57.640332 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 05:23:57.640955 systemd[1]: Starting user@500.service... Feb 9 05:23:57.642968 (systemd)[1221]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:23:57.712473 systemd[1221]: Queued start job for default target default.target. Feb 9 05:23:57.712712 systemd[1221]: Reached target paths.target. Feb 9 05:23:57.712724 systemd[1221]: Reached target sockets.target. Feb 9 05:23:57.712731 systemd[1221]: Reached target timers.target. Feb 9 05:23:57.712738 systemd[1221]: Reached target basic.target. Feb 9 05:23:57.712758 systemd[1221]: Reached target default.target. Feb 9 05:23:57.712772 systemd[1221]: Startup finished in 66ms. Feb 9 05:23:57.712818 systemd[1]: Started user@500.service. Feb 9 05:23:57.713347 systemd[1]: Started session-1.scope. Feb 9 05:23:57.878185 coreos-metadata[1118]: Feb 09 05:23:57.877 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 05:23:57.878936 coreos-metadata[1117]: Feb 09 05:23:57.877 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 05:23:58.627403 login[1216]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 05:23:58.638274 systemd-logind[1154]: New session 2 of user core. Feb 9 05:23:58.640728 systemd[1]: Started session-2.scope. 
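The pam_lastlog message above is the tty1 login blocking on the write lock that the concurrent ttyS1 login holds on /var/log/lastlog; the blocked session only completes once the lock is released. lastlog itself is a sparse file of fixed-size records indexed by UID. A small reader, assuming glibc's x86_64 record layout (int32 time + 32-byte line + 256-byte host = 292 bytes; the layout is an assumption about the target libc):

```python
import pwd
import struct

# Assumed glibc lastlog layout on x86_64: int32 ll_time + char[32] ll_line
# + char[256] ll_host = 292 bytes per record; the file is indexed by UID.
RECORD = struct.Struct("<i32s256s")

def last_login(user: str):
    uid = pwd.getpwnam(user).pw_uid
    with open("/var/log/lastlog", "rb") as f:
        f.seek(uid * RECORD.size)
        data = f.read(RECORD.size)
    if len(data) < RECORD.size:          # no record yet for this UID
        return None
    when, line, host = RECORD.unpack(data)
    return when, line.rstrip(b"\0").decode(), host.rstrip(b"\0").decode()

print(last_login("core"))                # e.g. (1707456237, 'tty1', '')
```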
Feb 9 05:23:58.674227 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 9 05:23:58.674339 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 9 05:23:58.878401 coreos-metadata[1118]: Feb 09 05:23:58.878 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 05:23:58.879137 coreos-metadata[1117]: Feb 09 05:23:58.878 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 05:23:58.907575 coreos-metadata[1117]: Feb 09 05:23:58.907 INFO Fetch successful Feb 9 05:23:58.907665 coreos-metadata[1118]: Feb 09 05:23:58.907 INFO Fetch successful Feb 9 05:23:58.929317 systemd[1]: Finished coreos-metadata.service. Feb 9 05:23:58.930236 systemd[1]: Started packet-phone-home.service. Feb 9 05:23:58.930393 unknown[1117]: wrote ssh authorized keys file for user: core Feb 9 05:23:58.937252 curl[1243]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 05:23:58.937388 curl[1243]: Dload Upload Total Spent Left Speed Feb 9 05:23:58.956037 systemd[1]: Created slice system-sshd.slice. Feb 9 05:23:58.958346 update-ssh-keys[1244]: Updated "/home/core/.ssh/authorized_keys" Feb 9 05:23:58.959365 systemd[1]: Started sshd@0-147.75.90.151:22-147.75.109.163:36638.service. Feb 9 05:23:58.961906 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 05:23:58.963531 systemd[1]: Reached target multi-user.target. Feb 9 05:23:58.967036 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 05:23:58.981752 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 05:23:58.981823 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 05:23:58.981952 systemd[1]: Startup finished in 1.842s (kernel) + 6.272s (initrd) + 13.880s (userspace) = 21.996s. Feb 9 05:23:59.020886 sshd[1246]: Accepted publickey for core from 147.75.109.163 port 36638 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:23:59.021659 sshd[1246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:23:59.024236 systemd-logind[1154]: New session 3 of user core. Feb 9 05:23:59.024671 systemd[1]: Started session-3.scope. Feb 9 05:23:59.076655 systemd[1]: Started sshd@1-147.75.90.151:22-147.75.109.163:36650.service. Feb 9 05:23:59.116144 curl[1243]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 05:23:59.116669 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 05:23:59.135351 sshd[1253]: Accepted publickey for core from 147.75.109.163 port 36650 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:23:59.136170 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:23:59.138501 systemd-logind[1154]: New session 4 of user core. Feb 9 05:23:59.138975 systemd[1]: Started session-4.scope. Feb 9 05:23:59.188509 sshd[1253]: pam_unix(sshd:session): session closed for user core Feb 9 05:23:59.189994 systemd[1]: sshd@1-147.75.90.151:22-147.75.109.163:36650.service: Deactivated successfully. Feb 9 05:23:59.190328 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 05:23:59.190672 systemd-logind[1154]: Session 4 logged out. Waiting for processes to exit. Feb 9 05:23:59.191116 systemd[1]: Started sshd@2-147.75.90.151:22-147.75.109.163:36658.service. Feb 9 05:23:59.191491 systemd-logind[1154]: Removed session 4.
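Both coreos-metadata instances failed their first fetch of https://metadata.packet.net/metadata at 05:23:57 because DNS was not up yet, then succeed on Attempt #2 a second later. The pattern is a plain retry-on-transient-error loop; a sketch (the one-second delay and ten-attempt cap are assumptions, not the daemon's real parameters):

```python
import json
import time
import urllib.request

URL = "https://metadata.packet.net/metadata"

def fetch_metadata(attempts: int = 10, delay: float = 1.0) -> dict:
    """Retry the metadata endpoint until DNS and routing come up."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return json.load(resp)
        except OSError as err:   # urllib.error.URLError subclasses OSError
            print(f"Fetching {URL}: Attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("metadata service unreachable")
```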
Feb 9 05:23:59.283426 sshd[1259]: Accepted publickey for core from 147.75.109.163 port 36658 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:23:59.285914 sshd[1259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:23:59.293503 systemd-logind[1154]: New session 5 of user core. Feb 9 05:23:59.295405 systemd[1]: Started session-5.scope. Feb 9 05:23:59.363010 sshd[1259]: pam_unix(sshd:session): session closed for user core Feb 9 05:23:59.369689 systemd[1]: sshd@2-147.75.90.151:22-147.75.109.163:36658.service: Deactivated successfully. Feb 9 05:23:59.371264 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 05:23:59.372835 systemd-logind[1154]: Session 5 logged out. Waiting for processes to exit. Feb 9 05:23:59.375480 systemd[1]: Started sshd@3-147.75.90.151:22-147.75.109.163:36670.service. Feb 9 05:23:59.377827 systemd-logind[1154]: Removed session 5. Feb 9 05:23:59.487147 sshd[1265]: Accepted publickey for core from 147.75.109.163 port 36670 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:23:59.489004 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:23:59.494628 systemd-logind[1154]: New session 6 of user core. Feb 9 05:23:59.495929 systemd[1]: Started session-6.scope. Feb 9 05:23:59.555650 sshd[1265]: pam_unix(sshd:session): session closed for user core Feb 9 05:23:59.557059 systemd[1]: sshd@3-147.75.90.151:22-147.75.109.163:36670.service: Deactivated successfully. Feb 9 05:23:59.557368 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 05:23:59.557760 systemd-logind[1154]: Session 6 logged out. Waiting for processes to exit. Feb 9 05:23:59.558237 systemd[1]: Started sshd@4-147.75.90.151:22-147.75.109.163:36674.service. Feb 9 05:23:59.558571 systemd-logind[1154]: Removed session 6. Feb 9 05:23:59.595984 sshd[1271]: Accepted publickey for core from 147.75.109.163 port 36674 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:23:59.596971 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:23:59.600450 systemd-logind[1154]: New session 7 of user core. Feb 9 05:23:59.601178 systemd[1]: Started session-7.scope. Feb 9 05:23:59.682433 sudo[1275]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 05:23:59.683035 sudo[1275]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 05:24:03.721039 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 05:24:03.725318 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 05:24:03.725627 systemd[1]: Reached target network-online.target. Feb 9 05:24:03.726460 systemd[1]: Starting docker.service... 
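Each inbound SSH connection above gets its own socket-activated unit instance, and the instance name encodes the connection as sshd@<counter>-<local addr>:<port>-<peer addr>:<port>.service. A decoder for the IPv4 form seen in these entries (IPv6 instance names would need different parsing):

```python
import re

# Instance naming seen in the sshd@...service units above:
# sshd@<counter>-<localIP>:<port>-<peerIP>:<port>.service (IPv4 form).
unit = "sshd@0-147.75.90.151:22-147.75.109.163:36638.service"
m = re.fullmatch(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service", unit)
counter, local, lport, peer, pport = m.groups()
print(f"connection #{counter}: {peer}:{pport} -> {local}:{lport}")
# connection #0: 147.75.109.163:36638 -> 147.75.90.151:22
```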
Feb 9 05:24:03.750269 env[1296]: time="2024-02-09T05:24:03.750212126Z" level=info msg="Starting up" Feb 9 05:24:03.750805 env[1296]: time="2024-02-09T05:24:03.750753055Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 05:24:03.750805 env[1296]: time="2024-02-09T05:24:03.750762385Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 05:24:03.750805 env[1296]: time="2024-02-09T05:24:03.750773424Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Feb 9 05:24:03.750805 env[1296]: time="2024-02-09T05:24:03.750779054Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 05:24:03.751737 env[1296]: time="2024-02-09T05:24:03.751690207Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 05:24:03.751737 env[1296]: time="2024-02-09T05:24:03.751698451Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 05:24:03.751737 env[1296]: time="2024-02-09T05:24:03.751711246Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Feb 9 05:24:03.751737 env[1296]: time="2024-02-09T05:24:03.751718452Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 05:24:03.762761 env[1296]: time="2024-02-09T05:24:03.762751482Z" level=info msg="Loading containers: start." Feb 9 05:24:03.842643 kernel: Initializing XFRM netlink socket Feb 9 05:24:03.911300 env[1296]: time="2024-02-09T05:24:03.911279465Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 05:24:03.911836 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 9 05:24:03.955031 systemd-networkd[1007]: docker0: Link UP Feb 9 05:24:03.960711 env[1296]: time="2024-02-09T05:24:03.960693699Z" level=info msg="Loading containers: done." Feb 9 05:24:03.967680 env[1296]: time="2024-02-09T05:24:03.967652299Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 05:24:03.967797 env[1296]: time="2024-02-09T05:24:03.967780490Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 05:24:03.967865 env[1296]: time="2024-02-09T05:24:03.967853849Z" level=info msg="Daemon has completed initialization" Feb 9 05:24:03.979654 systemd[1]: Started docker.service. Feb 9 05:24:03.984855 env[1296]: time="2024-02-09T05:24:03.984808104Z" level=info msg="API listen on /run/docker.sock" Feb 9 05:24:03.994453 systemd[1]: Reloading. Feb 9 05:24:04.055079 /usr/lib/systemd/system-generators/torcx-generator[1448]: time="2024-02-09T05:24:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 05:24:04.055100 /usr/lib/systemd/system-generators/torcx-generator[1448]: time="2024-02-09T05:24:04Z" level=info msg="torcx already run" Feb 9 05:24:04.103395 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Feb 9 05:24:04.103403 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 05:24:04.115599 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 05:24:04.168061 systemd[1]: Started kubelet.service. Feb 9 05:24:04.330883 systemd-timesyncd[1110]: Contacted time server [2607:b500:410:7700::1]:123 (2.flatcar.pool.ntp.org). Feb 9 05:24:04.331026 systemd-timesyncd[1110]: Initial clock synchronization to Fri 2024-02-09 05:24:03.991687 UTC. Feb 9 05:24:04.763780 kubelet[1505]: E0209 05:24:04.763461 1505 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 05:24:04.769015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 05:24:04.769083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 05:24:05.341890 env[1166]: time="2024-02-09T05:24:05.341838274Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 05:24:06.020154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976529166.mount: Deactivated successfully. Feb 9 05:24:07.826538 env[1166]: time="2024-02-09T05:24:07.826512566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:07.827241 env[1166]: time="2024-02-09T05:24:07.827230074Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:07.828118 env[1166]: time="2024-02-09T05:24:07.828107599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:07.829081 env[1166]: time="2024-02-09T05:24:07.829068557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:07.829458 env[1166]: time="2024-02-09T05:24:07.829445961Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 9 05:24:07.834766 env[1166]: time="2024-02-09T05:24:07.834751219Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 05:24:09.964546 env[1166]: time="2024-02-09T05:24:09.964518802Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:09.965134 env[1166]: time="2024-02-09T05:24:09.965123434Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:09.966383 env[1166]: time="2024-02-09T05:24:09.966369678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:09.967155 env[1166]: time="2024-02-09T05:24:09.967142865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:09.967886 env[1166]: time="2024-02-09T05:24:09.967874315Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 9 05:24:09.974095 env[1166]: time="2024-02-09T05:24:09.974049313Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 05:24:11.242506 env[1166]: time="2024-02-09T05:24:11.242481127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:11.243137 env[1166]: time="2024-02-09T05:24:11.243123439Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:11.244011 env[1166]: time="2024-02-09T05:24:11.243999337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:11.244959 env[1166]: time="2024-02-09T05:24:11.244949535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:11.245335 env[1166]: time="2024-02-09T05:24:11.245324619Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 9 05:24:11.255594 env[1166]: time="2024-02-09T05:24:11.255558123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 05:24:12.481369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1982280900.mount: Deactivated successfully. 
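Each PullImage in this stretch resolves a mutable tag (e.g. v1.28.6) to a content-addressed @sha256 digest, which is what the paired ImageCreate events record. The same set can be pre-pulled ahead of kubelet start; a sketch, assuming crictl is installed on the host and configured against /run/containerd/containerd.sock:

```python
import subprocess

# The control-plane images containerd fetches in the surrounding entries.
IMAGES = [
    "registry.k8s.io/kube-apiserver:v1.28.6",
    "registry.k8s.io/kube-controller-manager:v1.28.6",
    "registry.k8s.io/kube-scheduler:v1.28.6",
    "registry.k8s.io/kube-proxy:v1.28.6",
    "registry.k8s.io/pause:3.9",
    "registry.k8s.io/etcd:3.5.9-0",
    "registry.k8s.io/coredns/coredns:v1.10.1",
]

for image in IMAGES:
    subprocess.run(["crictl", "pull", image], check=True)
```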
Feb 9 05:24:12.816497 env[1166]: time="2024-02-09T05:24:12.816396665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:12.816919 env[1166]: time="2024-02-09T05:24:12.816905104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:12.817704 env[1166]: time="2024-02-09T05:24:12.817691181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:12.818414 env[1166]: time="2024-02-09T05:24:12.818403519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:12.818703 env[1166]: time="2024-02-09T05:24:12.818685720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 05:24:12.824978 env[1166]: time="2024-02-09T05:24:12.824918446Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 05:24:13.290600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58168188.mount: Deactivated successfully. Feb 9 05:24:13.291776 env[1166]: time="2024-02-09T05:24:13.291752473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:13.292365 env[1166]: time="2024-02-09T05:24:13.292352809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:13.293115 env[1166]: time="2024-02-09T05:24:13.293089179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:13.293944 env[1166]: time="2024-02-09T05:24:13.293932417Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:13.294573 env[1166]: time="2024-02-09T05:24:13.294560573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 05:24:13.300071 env[1166]: time="2024-02-09T05:24:13.300032209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 05:24:13.831399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702132245.mount: Deactivated successfully. Feb 9 05:24:15.019773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 05:24:15.019969 systemd[1]: Stopped kubelet.service. Feb 9 05:24:15.021232 systemd[1]: Started kubelet.service. 
Feb 9 05:24:15.119344 kubelet[1587]: E0209 05:24:15.119270 1587 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 05:24:15.121437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 05:24:15.121504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 05:24:17.002130 env[1166]: time="2024-02-09T05:24:17.002072559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:17.003706 env[1166]: time="2024-02-09T05:24:17.003668191Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:17.007999 env[1166]: time="2024-02-09T05:24:17.007928939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:17.010811 env[1166]: time="2024-02-09T05:24:17.010739252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:17.012247 env[1166]: time="2024-02-09T05:24:17.012179596Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 9 05:24:17.026034 env[1166]: time="2024-02-09T05:24:17.025977875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 05:24:17.562382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3543134586.mount: Deactivated successfully. Feb 9 05:24:18.044080 env[1166]: time="2024-02-09T05:24:18.044002585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:18.044778 env[1166]: time="2024-02-09T05:24:18.044735820Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:18.045404 env[1166]: time="2024-02-09T05:24:18.045353414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:18.046210 env[1166]: time="2024-02-09T05:24:18.046171553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:18.046588 env[1166]: time="2024-02-09T05:24:18.046549393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 05:24:19.890719 systemd[1]: Stopped kubelet.service. Feb 9 05:24:19.900686 systemd[1]: Reloading. 
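This is the second kubelet crash on the missing /var/lib/kubelet/config.yaml; the unit keeps restarting until the provisioning flow (normally kubeadm) writes the file, after which the start below succeeds. A sketch of a minimal config consistent with what the later nodeConfig dump shows (systemd cgroup driver, /etc/kubernetes/manifests static pod path, the 100Mi/10%/5%/15% hard-eviction thresholds); an illustration, not the file this host actually received:

```python
import pathlib

# Minimal KubeletConfiguration matching the values kubelet logs once it
# starts successfully below.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG)
```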
Feb 9 05:24:19.933428 /usr/lib/systemd/system-generators/torcx-generator[1751]: time="2024-02-09T05:24:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 05:24:19.933444 /usr/lib/systemd/system-generators/torcx-generator[1751]: time="2024-02-09T05:24:19Z" level=info msg="torcx already run" Feb 9 05:24:19.986142 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 05:24:19.986150 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 05:24:19.998337 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 05:24:20.053297 systemd[1]: Started kubelet.service. Feb 9 05:24:20.077188 kubelet[1809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 05:24:20.077188 kubelet[1809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 05:24:20.077188 kubelet[1809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 05:24:20.077188 kubelet[1809]: I0209 05:24:20.077181 1809 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 05:24:20.243580 kubelet[1809]: I0209 05:24:20.243489 1809 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 05:24:20.243580 kubelet[1809]: I0209 05:24:20.243503 1809 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 05:24:20.243674 kubelet[1809]: I0209 05:24:20.243663 1809 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 05:24:20.254988 kubelet[1809]: I0209 05:24:20.254949 1809 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 05:24:20.257730 kubelet[1809]: E0209 05:24:20.257694 1809 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.90.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.293288 kubelet[1809]: I0209 05:24:20.293221 1809 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 05:24:20.294993 kubelet[1809]: I0209 05:24:20.294916 1809 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 05:24:20.295340 kubelet[1809]: I0209 05:24:20.295264 1809 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 05:24:20.295340 kubelet[1809]: I0209 05:24:20.295309 1809 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 05:24:20.295340 kubelet[1809]: I0209 05:24:20.295331 1809 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 05:24:20.296052 kubelet[1809]: I0209 05:24:20.295975 1809 state_mem.go:36] "Initialized new in-memory state store" Feb 9 05:24:20.299714 kubelet[1809]: I0209 05:24:20.299647 1809 kubelet.go:393] "Attempting to sync node with API server" Feb 9 05:24:20.299714 kubelet[1809]: I0209 05:24:20.299689 1809 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 05:24:20.301115 kubelet[1809]: I0209 05:24:20.301046 1809 kubelet.go:309] "Adding apiserver pod source" Feb 9 05:24:20.301115 kubelet[1809]: W0209 05:24:20.301028 1809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.90.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-8a9497f9cf&limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.301362 kubelet[1809]: E0209 05:24:20.301150 1809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.90.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-8a9497f9cf&limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.301362 kubelet[1809]: I0209 05:24:20.301087 1809 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 05:24:20.304064 kubelet[1809]: W0209 05:24:20.303922 1809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://147.75.90.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.304279 kubelet[1809]: E0209 05:24:20.304146 1809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.90.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.308011 kubelet[1809]: I0209 05:24:20.307940 1809 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 05:24:20.310303 kubelet[1809]: W0209 05:24:20.310236 1809 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 05:24:20.312032 kubelet[1809]: I0209 05:24:20.311959 1809 server.go:1232] "Started kubelet" Feb 9 05:24:20.312208 kubelet[1809]: I0209 05:24:20.312130 1809 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 05:24:20.312370 kubelet[1809]: I0209 05:24:20.312308 1809 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 05:24:20.312753 kubelet[1809]: I0209 05:24:20.312674 1809 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 05:24:20.314045 kubelet[1809]: E0209 05:24:20.313972 1809 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 05:24:20.314045 kubelet[1809]: E0209 05:24:20.314032 1809 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 05:24:20.324089 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 05:24:20.324374 kubelet[1809]: I0209 05:24:20.324307 1809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 05:24:20.324549 kubelet[1809]: I0209 05:24:20.324478 1809 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 05:24:20.324714 kubelet[1809]: I0209 05:24:20.324628 1809 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 05:24:20.324714 kubelet[1809]: E0209 05:24:20.324647 1809 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-8a9497f9cf\" not found" Feb 9 05:24:20.324935 kubelet[1809]: E0209 05:24:20.324443 1809 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-8a9497f9cf.17b21a6c196826c2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-8a9497f9cf", UID:"ci-3510.3.2-a-8a9497f9cf", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-8a9497f9cf"}, FirstTimestamp:time.Date(2024, time.February, 9, 5, 24, 20, 311885506, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 5, 24, 20, 311885506, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-8a9497f9cf"}': 'Post "https://147.75.90.151:6443/api/v1/namespaces/default/events": dial tcp 147.75.90.151:6443: connect: connection refused'(may retry after sleeping) Feb 9 05:24:20.324935 kubelet[1809]: I0209 05:24:20.324724 1809 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 05:24:20.325287 kubelet[1809]: I0209 05:24:20.325256 1809 server.go:462] "Adding debug handlers to kubelet server" Feb 9 05:24:20.325635 kubelet[1809]: E0209 05:24:20.325600 1809 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8a9497f9cf?timeout=10s\": dial tcp 147.75.90.151:6443: connect: connection refused" interval="200ms" Feb 9 05:24:20.325635 kubelet[1809]: W0209 05:24:20.325564 1809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.90.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.325884 kubelet[1809]: E0209 05:24:20.325681 1809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.345421 kubelet[1809]: I0209 05:24:20.345348 1809 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 9 05:24:20.346879 kubelet[1809]: I0209 05:24:20.346846 1809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 05:24:20.347002 kubelet[1809]: I0209 05:24:20.346914 1809 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 05:24:20.347002 kubelet[1809]: I0209 05:24:20.346947 1809 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 05:24:20.347126 kubelet[1809]: E0209 05:24:20.347049 1809 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 05:24:20.348799 kubelet[1809]: W0209 05:24:20.348698 1809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.90.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.348939 kubelet[1809]: E0209 05:24:20.348806 1809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:20.357460 kubelet[1809]: I0209 05:24:20.357434 1809 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 05:24:20.357460 kubelet[1809]: I0209 05:24:20.357463 1809 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 05:24:20.357666 kubelet[1809]: I0209 05:24:20.357503 1809 state_mem.go:36] "Initialized new in-memory state store" Feb 9 05:24:20.358676 kubelet[1809]: I0209 05:24:20.358623 1809 policy_none.go:49] "None policy: Start" Feb 9 05:24:20.359362 kubelet[1809]: I0209 05:24:20.359335 1809 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 05:24:20.359471 kubelet[1809]: I0209 05:24:20.359390 1809 state_mem.go:35] "Initializing new in-memory state store" Feb 9 05:24:20.367620 systemd[1]: Created slice kubepods.slice. Feb 9 05:24:20.374789 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 05:24:20.379425 systemd[1]: Created slice kubepods-besteffort.slice. 
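With the systemd cgroup driver, the three slices just created are the roots of the pod cgroup hierarchy, one per QoS class (guaranteed pods sit directly under kubepods.slice). Pod-level slices are then named from the QoS class and pod UID, as in the kubepods-burstable-pod....slice units created just below; a simplified sketch of the naming (kubelet's real code does fuller systemd name escaping):

```python
# Simplified version of kubelet's systemd pod-slice naming; the static pods
# below use config-hash UIDs with no dashes, so escaping is a no-op here.
def pod_slice(qos: str, pod_uid: str) -> str:
    uid = pod_uid.replace("-", "_")      # systemd escaping, simplified
    if qos == "guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos}-pod{uid}.slice"

print(pod_slice("burstable", "b625442b46f231e72decf82d0902ef3e"))
# kubepods-burstable-podb625442b46f231e72decf82d0902ef3e.slice
```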
Feb 9 05:24:20.398373 kubelet[1809]: I0209 05:24:20.398299 1809 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 05:24:20.399462 kubelet[1809]: I0209 05:24:20.399403 1809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 05:24:20.399757 kubelet[1809]: E0209 05:24:20.399717 1809 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-8a9497f9cf\" not found" Feb 9 05:24:20.429359 kubelet[1809]: I0209 05:24:20.429271 1809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.430094 kubelet[1809]: E0209 05:24:20.430007 1809 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.90.151:6443/api/v1/nodes\": dial tcp 147.75.90.151:6443: connect: connection refused" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.447319 kubelet[1809]: I0209 05:24:20.447229 1809 topology_manager.go:215] "Topology Admit Handler" podUID="b625442b46f231e72decf82d0902ef3e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.451068 kubelet[1809]: I0209 05:24:20.450986 1809 topology_manager.go:215] "Topology Admit Handler" podUID="fbf7acc4d84822a5face2885494625db" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.454823 kubelet[1809]: I0209 05:24:20.454746 1809 topology_manager.go:215] "Topology Admit Handler" podUID="bab1d80fce10b9c59e5184269ad335a1" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.467488 systemd[1]: Created slice kubepods-burstable-podb625442b46f231e72decf82d0902ef3e.slice. Feb 9 05:24:20.498130 systemd[1]: Created slice kubepods-burstable-podfbf7acc4d84822a5face2885494625db.slice. Feb 9 05:24:20.507826 systemd[1]: Created slice kubepods-burstable-podbab1d80fce10b9c59e5184269ad335a1.slice. 
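The three Topology Admit Handler entries are the control-plane static pods arriving from the /etc/kubernetes/manifests path added earlier; each podUID is a hash of its manifest. A skeletal example of such a manifest (kubelet's file source also decodes JSON, which keeps this sketch in the stdlib; real kubeadm manifests are YAML and carry many more flags, mounts, and probes):

```python
import json
import pathlib

# Hypothetical bare-bones static pod; image version matches the pull above,
# and the priority class matches the mirror-pod errors logged later.
manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "kube-scheduler", "namespace": "kube-system"},
    "spec": {
        "hostNetwork": True,
        "priorityClassName": "system-node-critical",
        "containers": [{
            "name": "kube-scheduler",
            "image": "registry.k8s.io/kube-scheduler:v1.28.6",
            "command": ["kube-scheduler",
                        "--kubeconfig=/etc/kubernetes/scheduler.conf"],
            "volumeMounts": [{"name": "kubeconfig",
                              "mountPath": "/etc/kubernetes/scheduler.conf",
                              "readOnly": True}],
        }],
        "volumes": [{"name": "kubeconfig",
                     "hostPath": {"path": "/etc/kubernetes/scheduler.conf",
                                  "type": "File"}}],
    },
}

pathlib.Path("/etc/kubernetes/manifests/kube-scheduler.json").write_text(
    json.dumps(manifest, indent=2))
```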
Feb 9 05:24:20.526546 kubelet[1809]: I0209 05:24:20.526452 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b625442b46f231e72decf82d0902ef3e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" (UID: \"b625442b46f231e72decf82d0902ef3e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.526546 kubelet[1809]: I0209 05:24:20.526547 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.526887 kubelet[1809]: E0209 05:24:20.526623 1809 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8a9497f9cf?timeout=10s\": dial tcp 147.75.90.151:6443: connect: connection refused" interval="400ms" Feb 9 05:24:20.526887 kubelet[1809]: I0209 05:24:20.526634 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.526887 kubelet[1809]: I0209 05:24:20.526780 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.526887 kubelet[1809]: I0209 05:24:20.526845 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bab1d80fce10b9c59e5184269ad335a1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-8a9497f9cf\" (UID: \"bab1d80fce10b9c59e5184269ad335a1\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.527261 kubelet[1809]: I0209 05:24:20.526965 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b625442b46f231e72decf82d0902ef3e-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" (UID: \"b625442b46f231e72decf82d0902ef3e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.527261 kubelet[1809]: I0209 05:24:20.527101 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b625442b46f231e72decf82d0902ef3e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" (UID: \"b625442b46f231e72decf82d0902ef3e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.527261 kubelet[1809]: I0209 05:24:20.527244 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.527519 kubelet[1809]: I0209 05:24:20.527356 1809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.634497 kubelet[1809]: I0209 05:24:20.634446 1809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.635244 kubelet[1809]: E0209 05:24:20.635206 1809 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.90.151:6443/api/v1/nodes\": dial tcp 147.75.90.151:6443: connect: connection refused" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:20.798335 env[1166]: time="2024-02-09T05:24:20.798139196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-8a9497f9cf,Uid:b625442b46f231e72decf82d0902ef3e,Namespace:kube-system,Attempt:0,}" Feb 9 05:24:20.804227 env[1166]: time="2024-02-09T05:24:20.804108397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-8a9497f9cf,Uid:fbf7acc4d84822a5face2885494625db,Namespace:kube-system,Attempt:0,}" Feb 9 05:24:20.813264 env[1166]: time="2024-02-09T05:24:20.813158438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-8a9497f9cf,Uid:bab1d80fce10b9c59e5184269ad335a1,Namespace:kube-system,Attempt:0,}" Feb 9 05:24:20.927463 kubelet[1809]: E0209 05:24:20.927366 1809 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8a9497f9cf?timeout=10s\": dial tcp 147.75.90.151:6443: connect: connection refused" interval="800ms" Feb 9 05:24:21.039855 kubelet[1809]: I0209 05:24:21.039809 1809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:21.040866 kubelet[1809]: E0209 05:24:21.040830 1809 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.90.151:6443/api/v1/nodes\": dial tcp 147.75.90.151:6443: connect: connection refused" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:21.289585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835269376.mount: Deactivated successfully. 
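The "Failed to ensure lease exists, will retry" entries show the client-side backoff while the API server's own sandbox is still being created: the retry interval doubles from 200ms to 400ms to 800ms. A sketch of that schedule (the 7s cap is an assumption; the real client-go backoff also adds jitter):

```python
import itertools

# Doubling retry schedule matching the logged 200ms -> 400ms -> 800ms.
def backoff(initial: float = 0.2, cap: float = 7.0):
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

for delay in itertools.islice(backoff(), 6):
    print(f"retry lease in {delay:.1f}s")   # 0.2, 0.4, 0.8, 1.6, 3.2, 6.4
```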
Feb 9 05:24:21.291051 env[1166]: time="2024-02-09T05:24:21.291002625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.292137 env[1166]: time="2024-02-09T05:24:21.292096425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.293243 env[1166]: time="2024-02-09T05:24:21.293188427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.294272 env[1166]: time="2024-02-09T05:24:21.294236080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.295087 env[1166]: time="2024-02-09T05:24:21.295041089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.295486 env[1166]: time="2024-02-09T05:24:21.295442207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.295934 env[1166]: time="2024-02-09T05:24:21.295894953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.297192 env[1166]: time="2024-02-09T05:24:21.297146682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.298262 kubelet[1809]: W0209 05:24:21.298197 1809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.90.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:21.298262 kubelet[1809]: E0209 05:24:21.298232 1809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:21.298912 env[1166]: time="2024-02-09T05:24:21.298877003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.299853 env[1166]: time="2024-02-09T05:24:21.299817458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.300281 env[1166]: time="2024-02-09T05:24:21.300246349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.300738 env[1166]: 
time="2024-02-09T05:24:21.300703815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:21.305422 env[1166]: time="2024-02-09T05:24:21.305379261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:24:21.305422 env[1166]: time="2024-02-09T05:24:21.305405355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:24:21.305422 env[1166]: time="2024-02-09T05:24:21.305412307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:24:21.305602 env[1166]: time="2024-02-09T05:24:21.305557017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee06047fa5b134aa70cdd035917ceb1ebaf8933639944587432815b9b0604e28 pid=1860 runtime=io.containerd.runc.v2 Feb 9 05:24:21.306455 env[1166]: time="2024-02-09T05:24:21.306428213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:24:21.306455 env[1166]: time="2024-02-09T05:24:21.306449085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:24:21.306517 env[1166]: time="2024-02-09T05:24:21.306456058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:24:21.306539 env[1166]: time="2024-02-09T05:24:21.306516868Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f83e81c5c605f3711cd2048ad5de7271dc4faa787d8f7cc6b17ba41feb370458 pid=1872 runtime=io.containerd.runc.v2 Feb 9 05:24:21.307952 env[1166]: time="2024-02-09T05:24:21.307895499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:24:21.307952 env[1166]: time="2024-02-09T05:24:21.307915494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:24:21.307952 env[1166]: time="2024-02-09T05:24:21.307922733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:24:21.308077 env[1166]: time="2024-02-09T05:24:21.308030237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf2d5b24f7e169d5c003d4220309c3ddabff424606acc1848d43092aba8c35e2 pid=1894 runtime=io.containerd.runc.v2 Feb 9 05:24:21.324512 systemd[1]: Started cri-containerd-ee06047fa5b134aa70cdd035917ceb1ebaf8933639944587432815b9b0604e28.scope. Feb 9 05:24:21.325275 systemd[1]: Started cri-containerd-f83e81c5c605f3711cd2048ad5de7271dc4faa787d8f7cc6b17ba41feb370458.scope. Feb 9 05:24:21.326832 systemd[1]: Started cri-containerd-cf2d5b24f7e169d5c003d4220309c3ddabff424606acc1848d43092aba8c35e2.scope. 
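Each RunPodSandbox becomes a containerd-shim-runc-v2 "signal loop" process plus a matching cri-containerd-<sandbox id>.scope unit, so the scope names above can be mapped back to pods. A sketch using the CRI inspector (assumes crictl is present and pointed at /run/containerd/containerd.sock):

```python
import json
import subprocess

# Sandbox ID taken from one of the cri-containerd-*.scope units above.
sandbox = "cf2d5b24f7e169d5c003d4220309c3ddabff424606acc1848d43092aba8c35e2"

out = subprocess.run(["crictl", "inspectp", sandbox],
                     check=True, capture_output=True, text=True).stdout
meta = json.loads(out)["status"]["metadata"]
print(meta["namespace"], meta["name"])   # kube-system kube-scheduler-ci-...
```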
Feb 9 05:24:21.348658 kubelet[1809]: W0209 05:24:21.348628 1809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.90.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:21.348658 kubelet[1809]: E0209 05:24:21.348660 1809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.151:6443: connect: connection refused Feb 9 05:24:21.360129 env[1166]: time="2024-02-09T05:24:21.360097007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-8a9497f9cf,Uid:fbf7acc4d84822a5face2885494625db,Namespace:kube-system,Attempt:0,} returns sandbox id \"f83e81c5c605f3711cd2048ad5de7271dc4faa787d8f7cc6b17ba41feb370458\"" Feb 9 05:24:21.360192 env[1166]: time="2024-02-09T05:24:21.360097235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-8a9497f9cf,Uid:b625442b46f231e72decf82d0902ef3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee06047fa5b134aa70cdd035917ceb1ebaf8933639944587432815b9b0604e28\"" Feb 9 05:24:21.360584 env[1166]: time="2024-02-09T05:24:21.360564772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-8a9497f9cf,Uid:bab1d80fce10b9c59e5184269ad335a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf2d5b24f7e169d5c003d4220309c3ddabff424606acc1848d43092aba8c35e2\"" Feb 9 05:24:21.362624 env[1166]: time="2024-02-09T05:24:21.362608320Z" level=info msg="CreateContainer within sandbox \"cf2d5b24f7e169d5c003d4220309c3ddabff424606acc1848d43092aba8c35e2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 05:24:21.362624 env[1166]: time="2024-02-09T05:24:21.362610188Z" level=info msg="CreateContainer within sandbox \"f83e81c5c605f3711cd2048ad5de7271dc4faa787d8f7cc6b17ba41feb370458\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 05:24:21.362690 env[1166]: time="2024-02-09T05:24:21.362631616Z" level=info msg="CreateContainer within sandbox \"ee06047fa5b134aa70cdd035917ceb1ebaf8933639944587432815b9b0604e28\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 05:24:21.369148 env[1166]: time="2024-02-09T05:24:21.369103641Z" level=info msg="CreateContainer within sandbox \"cf2d5b24f7e169d5c003d4220309c3ddabff424606acc1848d43092aba8c35e2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b2a123f80d25a1afd9af00c2297be1741e21d9ddad10cfffcd34be783f61fa2d\"" Feb 9 05:24:21.369426 env[1166]: time="2024-02-09T05:24:21.369384604Z" level=info msg="StartContainer for \"b2a123f80d25a1afd9af00c2297be1741e21d9ddad10cfffcd34be783f61fa2d\"" Feb 9 05:24:21.370045 env[1166]: time="2024-02-09T05:24:21.370014742Z" level=info msg="CreateContainer within sandbox \"f83e81c5c605f3711cd2048ad5de7271dc4faa787d8f7cc6b17ba41feb370458\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a62b4c32040452c4bda1862d14be56b47f54d47bcff9450321bdf01f2cb7c0fb\"" Feb 9 05:24:21.370272 env[1166]: time="2024-02-09T05:24:21.370255132Z" level=info msg="StartContainer for \"a62b4c32040452c4bda1862d14be56b47f54d47bcff9450321bdf01f2cb7c0fb\"" Feb 9 05:24:21.371083 env[1166]: time="2024-02-09T05:24:21.371066117Z" level=info 
msg="CreateContainer within sandbox \"ee06047fa5b134aa70cdd035917ceb1ebaf8933639944587432815b9b0604e28\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"463c7d1a3c391ffcfb6d28ce5ddd3ffda5340d53228a3dd4c3b7bf7767ffd2c5\"" Feb 9 05:24:21.371284 env[1166]: time="2024-02-09T05:24:21.371258745Z" level=info msg="StartContainer for \"463c7d1a3c391ffcfb6d28ce5ddd3ffda5340d53228a3dd4c3b7bf7767ffd2c5\"" Feb 9 05:24:21.379124 systemd[1]: Started cri-containerd-463c7d1a3c391ffcfb6d28ce5ddd3ffda5340d53228a3dd4c3b7bf7767ffd2c5.scope. Feb 9 05:24:21.379770 systemd[1]: Started cri-containerd-a62b4c32040452c4bda1862d14be56b47f54d47bcff9450321bdf01f2cb7c0fb.scope. Feb 9 05:24:21.389109 systemd[1]: Started cri-containerd-b2a123f80d25a1afd9af00c2297be1741e21d9ddad10cfffcd34be783f61fa2d.scope. Feb 9 05:24:21.413441 env[1166]: time="2024-02-09T05:24:21.413406703Z" level=info msg="StartContainer for \"b2a123f80d25a1afd9af00c2297be1741e21d9ddad10cfffcd34be783f61fa2d\" returns successfully" Feb 9 05:24:21.415183 env[1166]: time="2024-02-09T05:24:21.415159927Z" level=info msg="StartContainer for \"a62b4c32040452c4bda1862d14be56b47f54d47bcff9450321bdf01f2cb7c0fb\" returns successfully" Feb 9 05:24:21.416284 env[1166]: time="2024-02-09T05:24:21.416259011Z" level=info msg="StartContainer for \"463c7d1a3c391ffcfb6d28ce5ddd3ffda5340d53228a3dd4c3b7bf7767ffd2c5\" returns successfully" Feb 9 05:24:21.843072 kubelet[1809]: I0209 05:24:21.843057 1809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:21.964385 kubelet[1809]: E0209 05:24:21.964361 1809 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-8a9497f9cf\" not found" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:22.060557 kubelet[1809]: I0209 05:24:22.060535 1809 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:22.302968 kubelet[1809]: I0209 05:24:22.302757 1809 apiserver.go:52] "Watching apiserver" Feb 9 05:24:22.325256 kubelet[1809]: I0209 05:24:22.325214 1809 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 05:24:22.366932 kubelet[1809]: E0209 05:24:22.366875 1809 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:22.366932 kubelet[1809]: E0209 05:24:22.366888 1809 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-8a9497f9cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:22.367280 kubelet[1809]: E0209 05:24:22.367000 1809 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:23.370741 kubelet[1809]: W0209 05:24:23.370678 1809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 05:24:23.861268 kubelet[1809]: W0209 05:24:23.861082 1809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
not contain dots] Feb 9 05:24:25.229494 systemd[1]: Reloading. Feb 9 05:24:25.294028 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2024-02-09T05:24:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 05:24:25.294057 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2024-02-09T05:24:25Z" level=info msg="torcx already run" Feb 9 05:24:25.347513 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 05:24:25.347522 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 05:24:25.361901 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 05:24:25.426414 kubelet[1809]: I0209 05:24:25.426326 1809 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 05:24:25.426377 systemd[1]: Stopping kubelet.service... Feb 9 05:24:25.447967 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 05:24:25.448071 systemd[1]: Stopped kubelet.service. Feb 9 05:24:25.448927 systemd[1]: Started kubelet.service. Feb 9 05:24:25.472563 kubelet[2203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 05:24:25.472563 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 05:24:25.472563 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 05:24:25.472862 kubelet[2203]: I0209 05:24:25.472622 2203 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 05:24:25.475249 kubelet[2203]: I0209 05:24:25.475236 2203 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 05:24:25.475249 kubelet[2203]: I0209 05:24:25.475249 2203 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 05:24:25.475362 kubelet[2203]: I0209 05:24:25.475356 2203 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 05:24:25.477259 kubelet[2203]: I0209 05:24:25.477225 2203 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 05:24:25.477820 kubelet[2203]: I0209 05:24:25.477781 2203 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 05:24:25.511069 kubelet[2203]: I0209 05:24:25.510883 2203 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 05:24:25.511387 kubelet[2203]: I0209 05:24:25.511316 2203 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 05:24:25.511937 kubelet[2203]: I0209 05:24:25.511861 2203 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 05:24:25.511937 kubelet[2203]: I0209 05:24:25.511918 2203 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 05:24:25.511937 kubelet[2203]: I0209 05:24:25.511946 2203 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 05:24:25.512753 kubelet[2203]: I0209 05:24:25.512027 2203 state_mem.go:36] "Initialized new in-memory state store" Feb 9 05:24:25.512753 kubelet[2203]: I0209 05:24:25.512198 2203 kubelet.go:393] "Attempting to sync node with API server" Feb 9 05:24:25.512753 kubelet[2203]: I0209 05:24:25.512231 2203 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 05:24:25.512753 kubelet[2203]: I0209 05:24:25.512282 2203 kubelet.go:309] "Adding apiserver pod source" Feb 9 05:24:25.512753 kubelet[2203]: I0209 05:24:25.512322 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 05:24:25.513667 kubelet[2203]: I0209 05:24:25.513556 2203 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 05:24:25.514815 kubelet[2203]: I0209 05:24:25.514777 2203 server.go:1232] "Started kubelet" Feb 9 05:24:25.515027 kubelet[2203]: I0209 05:24:25.514961 2203 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 05:24:25.515027 kubelet[2203]: I0209 05:24:25.515002 2203 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 05:24:25.515566 kubelet[2203]: I0209 05:24:25.515514 2203 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 05:24:25.515973 kubelet[2203]: E0209 05:24:25.515933 2203 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" 
mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 05:24:25.516124 kubelet[2203]: E0209 05:24:25.515985 2203 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 05:24:25.518570 kubelet[2203]: I0209 05:24:25.518483 2203 server.go:462] "Adding debug handlers to kubelet server" Feb 9 05:24:25.518961 kubelet[2203]: I0209 05:24:25.518900 2203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 05:24:25.519481 kubelet[2203]: I0209 05:24:25.519403 2203 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 05:24:25.519770 kubelet[2203]: I0209 05:24:25.519646 2203 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 05:24:25.520348 kubelet[2203]: I0209 05:24:25.520295 2203 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 05:24:25.534143 kubelet[2203]: I0209 05:24:25.534084 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 05:24:25.535512 kubelet[2203]: I0209 05:24:25.535400 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 05:24:25.535630 kubelet[2203]: I0209 05:24:25.535530 2203 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 05:24:25.535630 kubelet[2203]: I0209 05:24:25.535555 2203 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 05:24:25.535772 kubelet[2203]: E0209 05:24:25.535650 2203 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 05:24:25.558130 kubelet[2203]: I0209 05:24:25.558079 2203 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 05:24:25.558130 kubelet[2203]: I0209 05:24:25.558099 2203 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 05:24:25.558130 kubelet[2203]: I0209 05:24:25.558111 2203 state_mem.go:36] "Initialized new in-memory state store" Feb 9 05:24:25.558265 kubelet[2203]: I0209 05:24:25.558230 2203 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 05:24:25.558265 kubelet[2203]: I0209 05:24:25.558248 2203 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 05:24:25.558265 kubelet[2203]: I0209 05:24:25.558254 2203 policy_none.go:49] "None policy: Start" Feb 9 05:24:25.558599 kubelet[2203]: I0209 05:24:25.558588 2203 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 05:24:25.558644 kubelet[2203]: I0209 05:24:25.558606 2203 state_mem.go:35] "Initializing new in-memory state store" Feb 9 05:24:25.558700 kubelet[2203]: I0209 05:24:25.558693 2203 state_mem.go:75] "Updated machine memory state" Feb 9 05:24:25.560634 sudo[2245]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 05:24:25.560797 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 05:24:25.561250 kubelet[2203]: I0209 05:24:25.561241 2203 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 05:24:25.561408 kubelet[2203]: I0209 05:24:25.561400 2203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 05:24:25.621003 kubelet[2203]: I0209 05:24:25.620961 2203 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.627108 kubelet[2203]: I0209 05:24:25.627097 2203 kubelet_node_status.go:108] "Node was previously 
registered" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.627165 kubelet[2203]: I0209 05:24:25.627142 2203 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.636489 kubelet[2203]: I0209 05:24:25.636445 2203 topology_manager.go:215] "Topology Admit Handler" podUID="b625442b46f231e72decf82d0902ef3e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.636546 kubelet[2203]: I0209 05:24:25.636506 2203 topology_manager.go:215] "Topology Admit Handler" podUID="fbf7acc4d84822a5face2885494625db" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.636546 kubelet[2203]: I0209 05:24:25.636528 2203 topology_manager.go:215] "Topology Admit Handler" podUID="bab1d80fce10b9c59e5184269ad335a1" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.639454 kubelet[2203]: W0209 05:24:25.639443 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 05:24:25.641341 kubelet[2203]: W0209 05:24:25.641333 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 05:24:25.641379 kubelet[2203]: W0209 05:24:25.641358 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 05:24:25.641400 kubelet[2203]: E0209 05:24:25.641379 2203 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.641400 kubelet[2203]: E0209 05:24:25.641386 2203 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720753 kubelet[2203]: I0209 05:24:25.720704 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720753 kubelet[2203]: I0209 05:24:25.720727 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b625442b46f231e72decf82d0902ef3e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" (UID: \"b625442b46f231e72decf82d0902ef3e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720753 kubelet[2203]: I0209 05:24:25.720745 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b625442b46f231e72decf82d0902ef3e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" (UID: \"b625442b46f231e72decf82d0902ef3e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720880 kubelet[2203]: I0209 05:24:25.720783 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720880 kubelet[2203]: I0209 05:24:25.720811 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720880 kubelet[2203]: I0209 05:24:25.720846 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bab1d80fce10b9c59e5184269ad335a1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-8a9497f9cf\" (UID: \"bab1d80fce10b9c59e5184269ad335a1\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720880 kubelet[2203]: I0209 05:24:25.720867 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b625442b46f231e72decf82d0902ef3e-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" (UID: \"b625442b46f231e72decf82d0902ef3e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720880 kubelet[2203]: I0209 05:24:25.720880 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:25.720965 kubelet[2203]: I0209 05:24:25.720900 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbf7acc4d84822a5face2885494625db-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" (UID: \"fbf7acc4d84822a5face2885494625db\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:26.006325 sudo[2245]: pam_unix(sudo:session): session closed for user root Feb 9 05:24:26.512812 kubelet[2203]: I0209 05:24:26.512709 2203 apiserver.go:52] "Watching apiserver" Feb 9 05:24:26.520271 kubelet[2203]: I0209 05:24:26.520230 2203 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 05:24:26.547954 kubelet[2203]: W0209 05:24:26.547900 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 05:24:26.547954 kubelet[2203]: W0209 05:24:26.547905 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 05:24:26.547954 kubelet[2203]: E0209 05:24:26.547949 2203 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-8a9497f9cf\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:26.548085 kubelet[2203]: E0209 05:24:26.547979 2203 kubelet.go:1890] "Failed creating a mirror pod for" err="pods 
\"kube-controller-manager-ci-3510.3.2-a-8a9497f9cf\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" Feb 9 05:24:26.555743 kubelet[2203]: I0209 05:24:26.555697 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-8a9497f9cf" podStartSLOduration=1.555664353 podCreationTimestamp="2024-02-09 05:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 05:24:26.555559274 +0000 UTC m=+1.104732259" watchObservedRunningTime="2024-02-09 05:24:26.555664353 +0000 UTC m=+1.104837333" Feb 9 05:24:26.565105 kubelet[2203]: I0209 05:24:26.565088 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-8a9497f9cf" podStartSLOduration=3.565067127 podCreationTimestamp="2024-02-09 05:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 05:24:26.56044008 +0000 UTC m=+1.109613065" watchObservedRunningTime="2024-02-09 05:24:26.565067127 +0000 UTC m=+1.114240111" Feb 9 05:24:26.565178 kubelet[2203]: I0209 05:24:26.565144 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8a9497f9cf" podStartSLOduration=3.565131011 podCreationTimestamp="2024-02-09 05:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 05:24:26.565041157 +0000 UTC m=+1.114214143" watchObservedRunningTime="2024-02-09 05:24:26.565131011 +0000 UTC m=+1.114303992" Feb 9 05:24:27.248799 sudo[1275]: pam_unix(sudo:session): session closed for user root Feb 9 05:24:27.249824 sshd[1271]: pam_unix(sshd:session): session closed for user core Feb 9 05:24:27.251661 systemd[1]: sshd@4-147.75.90.151:22-147.75.109.163:36674.service: Deactivated successfully. Feb 9 05:24:27.252232 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 05:24:27.252363 systemd[1]: session-7.scope: Consumed 3.238s CPU time. Feb 9 05:24:27.252837 systemd-logind[1154]: Session 7 logged out. Waiting for processes to exit. Feb 9 05:24:27.253534 systemd-logind[1154]: Removed session 7. Feb 9 05:24:36.995683 update_engine[1156]: I0209 05:24:36.995549 1156 update_attempter.cc:509] Updating boot flags... Feb 9 05:24:40.761713 kubelet[2203]: I0209 05:24:40.761256 2203 topology_manager.go:215] "Topology Admit Handler" podUID="36606f27-8453-4f29-bb6c-21ba759f739d" podNamespace="kube-system" podName="kube-proxy-lwzzh" Feb 9 05:24:40.762055 kubelet[2203]: I0209 05:24:40.761836 2203 topology_manager.go:215] "Topology Admit Handler" podUID="2408f292-2250-4987-b43a-40c443d8686d" podNamespace="kube-system" podName="cilium-5bb7w" Feb 9 05:24:40.766010 systemd[1]: Created slice kubepods-besteffort-pod36606f27_8453_4f29_bb6c_21ba759f739d.slice. Feb 9 05:24:40.775374 systemd[1]: Created slice kubepods-burstable-pod2408f292_2250_4987_b43a_40c443d8686d.slice. Feb 9 05:24:40.793190 kubelet[2203]: I0209 05:24:40.793165 2203 topology_manager.go:215] "Topology Admit Handler" podUID="0262ec75-64c5-44dd-9532-2b872c3c571f" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-qgl5r" Feb 9 05:24:40.797125 systemd[1]: Created slice kubepods-besteffort-pod0262ec75_64c5_44dd_9532_2b872c3c571f.slice. 
Feb 9 05:24:40.813714 kubelet[2203]: I0209 05:24:40.813673 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2408f292-2250-4987-b43a-40c443d8686d-cilium-config-path\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813714 kubelet[2203]: I0209 05:24:40.813713 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpwwj\" (UniqueName: \"kubernetes.io/projected/2408f292-2250-4987-b43a-40c443d8686d-kube-api-access-wpwwj\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813827 kubelet[2203]: I0209 05:24:40.813729 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0262ec75-64c5-44dd-9532-2b872c3c571f-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-qgl5r\" (UID: \"0262ec75-64c5-44dd-9532-2b872c3c571f\") " pod="kube-system/cilium-operator-6bc8ccdb58-qgl5r" Feb 9 05:24:40.813827 kubelet[2203]: I0209 05:24:40.813746 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-xtables-lock\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813827 kubelet[2203]: I0209 05:24:40.813761 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-bpf-maps\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813827 kubelet[2203]: I0209 05:24:40.813777 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-hostproc\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813827 kubelet[2203]: I0209 05:24:40.813789 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cni-path\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813827 kubelet[2203]: I0209 05:24:40.813805 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-lib-modules\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813962 kubelet[2203]: I0209 05:24:40.813833 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-host-proc-sys-kernel\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813962 kubelet[2203]: I0209 05:24:40.813867 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2408f292-2250-4987-b43a-40c443d8686d-hubble-tls\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813962 kubelet[2203]: I0209 05:24:40.813887 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2408f292-2250-4987-b43a-40c443d8686d-clustermesh-secrets\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.813962 kubelet[2203]: I0209 05:24:40.813908 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36606f27-8453-4f29-bb6c-21ba759f739d-xtables-lock\") pod \"kube-proxy-lwzzh\" (UID: \"36606f27-8453-4f29-bb6c-21ba759f739d\") " pod="kube-system/kube-proxy-lwzzh" Feb 9 05:24:40.813962 kubelet[2203]: I0209 05:24:40.813925 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cilium-cgroup\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.814053 kubelet[2203]: I0209 05:24:40.813937 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-host-proc-sys-net\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.814053 kubelet[2203]: I0209 05:24:40.813959 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cilium-run\") pod \"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.814053 kubelet[2203]: I0209 05:24:40.813986 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszrf\" (UniqueName: \"kubernetes.io/projected/0262ec75-64c5-44dd-9532-2b872c3c571f-kube-api-access-kszrf\") pod \"cilium-operator-6bc8ccdb58-qgl5r\" (UID: \"0262ec75-64c5-44dd-9532-2b872c3c571f\") " pod="kube-system/cilium-operator-6bc8ccdb58-qgl5r" Feb 9 05:24:40.814053 kubelet[2203]: I0209 05:24:40.814009 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36606f27-8453-4f29-bb6c-21ba759f739d-kube-proxy\") pod \"kube-proxy-lwzzh\" (UID: \"36606f27-8453-4f29-bb6c-21ba759f739d\") " pod="kube-system/kube-proxy-lwzzh" Feb 9 05:24:40.814053 kubelet[2203]: I0209 05:24:40.814034 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36606f27-8453-4f29-bb6c-21ba759f739d-lib-modules\") pod \"kube-proxy-lwzzh\" (UID: \"36606f27-8453-4f29-bb6c-21ba759f739d\") " pod="kube-system/kube-proxy-lwzzh" Feb 9 05:24:40.814142 kubelet[2203]: I0209 05:24:40.814057 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-etc-cni-netd\") pod 
\"cilium-5bb7w\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " pod="kube-system/cilium-5bb7w" Feb 9 05:24:40.814142 kubelet[2203]: I0209 05:24:40.814070 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2zg\" (UniqueName: \"kubernetes.io/projected/36606f27-8453-4f29-bb6c-21ba759f739d-kube-api-access-rt2zg\") pod \"kube-proxy-lwzzh\" (UID: \"36606f27-8453-4f29-bb6c-21ba759f739d\") " pod="kube-system/kube-proxy-lwzzh" Feb 9 05:24:40.864465 kubelet[2203]: I0209 05:24:40.864361 2203 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 05:24:40.865316 env[1166]: time="2024-02-09T05:24:40.865235932Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 05:24:40.866198 kubelet[2203]: I0209 05:24:40.865730 2203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 05:24:41.075949 env[1166]: time="2024-02-09T05:24:41.075689268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwzzh,Uid:36606f27-8453-4f29-bb6c-21ba759f739d,Namespace:kube-system,Attempt:0,}" Feb 9 05:24:41.077804 env[1166]: time="2024-02-09T05:24:41.077726763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5bb7w,Uid:2408f292-2250-4987-b43a-40c443d8686d,Namespace:kube-system,Attempt:0,}" Feb 9 05:24:41.100348 env[1166]: time="2024-02-09T05:24:41.100239408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qgl5r,Uid:0262ec75-64c5-44dd-9532-2b872c3c571f,Namespace:kube-system,Attempt:0,}" Feb 9 05:24:41.105372 env[1166]: time="2024-02-09T05:24:41.105213754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:24:41.105372 env[1166]: time="2024-02-09T05:24:41.105315520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:24:41.105372 env[1166]: time="2024-02-09T05:24:41.105354380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:24:41.105875 env[1166]: time="2024-02-09T05:24:41.105711547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:24:41.105875 env[1166]: time="2024-02-09T05:24:41.105803179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:24:41.105875 env[1166]: time="2024-02-09T05:24:41.105842782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:24:41.106191 env[1166]: time="2024-02-09T05:24:41.105828285Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33 pid=2379 runtime=io.containerd.runc.v2 Feb 9 05:24:41.106306 env[1166]: time="2024-02-09T05:24:41.106182312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e616d57c04355cd1e3bfcd8d5591de3f77cf4f91da3872a0c51de5671185e565 pid=2380 runtime=io.containerd.runc.v2 Feb 9 05:24:41.123566 env[1166]: time="2024-02-09T05:24:41.123407553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:24:41.123566 env[1166]: time="2024-02-09T05:24:41.123506636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:24:41.123566 env[1166]: time="2024-02-09T05:24:41.123546712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:24:41.124078 env[1166]: time="2024-02-09T05:24:41.123890709Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675 pid=2418 runtime=io.containerd.runc.v2 Feb 9 05:24:41.135466 systemd[1]: Started cri-containerd-e616d57c04355cd1e3bfcd8d5591de3f77cf4f91da3872a0c51de5671185e565.scope. Feb 9 05:24:41.144669 systemd[1]: Started cri-containerd-bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33.scope. Feb 9 05:24:41.156166 systemd[1]: Started cri-containerd-094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675.scope. 
Feb 9 05:24:41.165690 env[1166]: time="2024-02-09T05:24:41.165649587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5bb7w,Uid:2408f292-2250-4987-b43a-40c443d8686d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\"" Feb 9 05:24:41.166715 env[1166]: time="2024-02-09T05:24:41.166694641Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 05:24:41.169932 env[1166]: time="2024-02-09T05:24:41.169902535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwzzh,Uid:36606f27-8453-4f29-bb6c-21ba759f739d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e616d57c04355cd1e3bfcd8d5591de3f77cf4f91da3872a0c51de5671185e565\"" Feb 9 05:24:41.171386 env[1166]: time="2024-02-09T05:24:41.171367629Z" level=info msg="CreateContainer within sandbox \"e616d57c04355cd1e3bfcd8d5591de3f77cf4f91da3872a0c51de5671185e565\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 05:24:41.178163 env[1166]: time="2024-02-09T05:24:41.178118509Z" level=info msg="CreateContainer within sandbox \"e616d57c04355cd1e3bfcd8d5591de3f77cf4f91da3872a0c51de5671185e565\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae80e1d21d1a43ba83a1fd2ea5cd2545739dff5af5196552fff592130afd451b\"" Feb 9 05:24:41.178486 env[1166]: time="2024-02-09T05:24:41.178438534Z" level=info msg="StartContainer for \"ae80e1d21d1a43ba83a1fd2ea5cd2545739dff5af5196552fff592130afd451b\"" Feb 9 05:24:41.187641 env[1166]: time="2024-02-09T05:24:41.187616398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qgl5r,Uid:0262ec75-64c5-44dd-9532-2b872c3c571f,Namespace:kube-system,Attempt:0,} returns sandbox id \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\"" Feb 9 05:24:41.201317 systemd[1]: Started cri-containerd-ae80e1d21d1a43ba83a1fd2ea5cd2545739dff5af5196552fff592130afd451b.scope. Feb 9 05:24:41.229770 env[1166]: time="2024-02-09T05:24:41.229705850Z" level=info msg="StartContainer for \"ae80e1d21d1a43ba83a1fd2ea5cd2545739dff5af5196552fff592130afd451b\" returns successfully" Feb 9 05:24:41.602708 kubelet[2203]: I0209 05:24:41.602564 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lwzzh" podStartSLOduration=1.602435848 podCreationTimestamp="2024-02-09 05:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 05:24:41.601927732 +0000 UTC m=+16.151100802" watchObservedRunningTime="2024-02-09 05:24:41.602435848 +0000 UTC m=+16.151608924" Feb 9 05:24:44.795402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536734844.mount: Deactivated successfully. 
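[Editor's note] At this point kube-proxy is running and the cilium image pull has begun. Assuming crictl is installed and pointed at containerd's CRI socket, the same state could be cross-checked from the node like this (a sketch; socket path per containerd defaults):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods --namespace kube-system
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps --name kube-proxy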
Feb 9 05:24:46.489938 env[1166]: time="2024-02-09T05:24:46.489891626Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:46.490468 env[1166]: time="2024-02-09T05:24:46.490453613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:46.491169 env[1166]: time="2024-02-09T05:24:46.491130099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:46.491810 env[1166]: time="2024-02-09T05:24:46.491766980Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 05:24:46.492153 env[1166]: time="2024-02-09T05:24:46.492074322Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 05:24:46.492870 env[1166]: time="2024-02-09T05:24:46.492851606Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 05:24:46.498305 env[1166]: time="2024-02-09T05:24:46.498261890Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\"" Feb 9 05:24:46.499162 env[1166]: time="2024-02-09T05:24:46.498716668Z" level=info msg="StartContainer for \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\"" Feb 9 05:24:46.526962 systemd[1]: Started cri-containerd-f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661.scope. Feb 9 05:24:46.550761 env[1166]: time="2024-02-09T05:24:46.550731491Z" level=info msg="StartContainer for \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\" returns successfully" Feb 9 05:24:46.556040 systemd[1]: cri-containerd-f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661.scope: Deactivated successfully. Feb 9 05:24:47.501880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661-rootfs.mount: Deactivated successfully. 
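[Editor's note] The PullImage return above binds the v1.12.5 tag and sha256 digest to a content-addressed image ID. One hedged way to confirm that mapping with containerd's own CLI, using the k8s.io namespace seen in the signal-loop entries:

    sudo ctr -n k8s.io images ls | grep cilium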
Feb 9 05:24:47.727765 env[1166]: time="2024-02-09T05:24:47.727663786Z" level=info msg="shim disconnected" id=f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661 Feb 9 05:24:47.728508 env[1166]: time="2024-02-09T05:24:47.727761764Z" level=warning msg="cleaning up after shim disconnected" id=f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661 namespace=k8s.io Feb 9 05:24:47.728508 env[1166]: time="2024-02-09T05:24:47.727790679Z" level=info msg="cleaning up dead shim" Feb 9 05:24:47.756786 env[1166]: time="2024-02-09T05:24:47.756567052Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:24:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2706 runtime=io.containerd.runc.v2\n" Feb 9 05:24:48.330783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505793399.mount: Deactivated successfully. Feb 9 05:24:48.598823 env[1166]: time="2024-02-09T05:24:48.598715843Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 05:24:48.603986 env[1166]: time="2024-02-09T05:24:48.603932887Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\"" Feb 9 05:24:48.604341 env[1166]: time="2024-02-09T05:24:48.604303158Z" level=info msg="StartContainer for \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\"" Feb 9 05:24:48.605963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344995642.mount: Deactivated successfully. Feb 9 05:24:48.626075 systemd[1]: Started cri-containerd-d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9.scope. Feb 9 05:24:48.650343 env[1166]: time="2024-02-09T05:24:48.650316152Z" level=info msg="StartContainer for \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\" returns successfully" Feb 9 05:24:48.655808 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 05:24:48.655929 systemd[1]: Stopped systemd-sysctl.service. Feb 9 05:24:48.656052 systemd[1]: Stopping systemd-sysctl.service... Feb 9 05:24:48.656856 systemd[1]: Starting systemd-sysctl.service... Feb 9 05:24:48.657962 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 05:24:48.658303 systemd[1]: cri-containerd-d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9.scope: Deactivated successfully. Feb 9 05:24:48.660717 systemd[1]: Finished systemd-sysctl.service. 
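[Editor's note] The repeating "shim disconnected" / "cleaning up after shim disconnected" pairs are the normal exit path for run-to-completion init containers: the container exits, the shim tears down, and systemd deactivates the transient scope. To watch these scopes appear and vanish on a comparable node (sketch):

    sudo systemctl list-units --all 'cri-containerd-*.scope'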
Feb 9 05:24:48.821889 env[1166]: time="2024-02-09T05:24:48.821776761Z" level=info msg="shim disconnected" id=d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9 Feb 9 05:24:48.821889 env[1166]: time="2024-02-09T05:24:48.821880477Z" level=warning msg="cleaning up after shim disconnected" id=d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9 namespace=k8s.io Feb 9 05:24:48.822760 env[1166]: time="2024-02-09T05:24:48.821916367Z" level=info msg="cleaning up dead shim" Feb 9 05:24:48.833342 env[1166]: time="2024-02-09T05:24:48.833288174Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:48.833912 env[1166]: time="2024-02-09T05:24:48.833873729Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:48.834463 env[1166]: time="2024-02-09T05:24:48.834422321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 05:24:48.834745 env[1166]: time="2024-02-09T05:24:48.834698741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 05:24:48.835648 env[1166]: time="2024-02-09T05:24:48.835636153Z" level=info msg="CreateContainer within sandbox \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 05:24:48.840254 env[1166]: time="2024-02-09T05:24:48.840208175Z" level=info msg="CreateContainer within sandbox \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\"" Feb 9 05:24:48.840399 env[1166]: time="2024-02-09T05:24:48.840387703Z" level=info msg="StartContainer for \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\"" Feb 9 05:24:48.841413 env[1166]: time="2024-02-09T05:24:48.841399043Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2768 runtime=io.containerd.runc.v2\n" Feb 9 05:24:48.859820 systemd[1]: Started cri-containerd-e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8.scope. Feb 9 05:24:48.885982 env[1166]: time="2024-02-09T05:24:48.885950306Z" level=info msg="StartContainer for \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\" returns successfully" Feb 9 05:24:49.608886 env[1166]: time="2024-02-09T05:24:49.608765025Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 05:24:49.610423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9-rootfs.mount: Deactivated successfully. 
Feb 9 05:24:49.627881 env[1166]: time="2024-02-09T05:24:49.627734219Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\"" Feb 9 05:24:49.628899 env[1166]: time="2024-02-09T05:24:49.628820448Z" level=info msg="StartContainer for \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\"" Feb 9 05:24:49.652287 kubelet[2203]: I0209 05:24:49.652241 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-qgl5r" podStartSLOduration=2.005416433 podCreationTimestamp="2024-02-09 05:24:40 +0000 UTC" firstStartedPulling="2024-02-09 05:24:41.188175219 +0000 UTC m=+15.737348200" lastFinishedPulling="2024-02-09 05:24:48.834904556 +0000 UTC m=+23.384077538" observedRunningTime="2024-02-09 05:24:49.651486001 +0000 UTC m=+24.200659022" watchObservedRunningTime="2024-02-09 05:24:49.652145771 +0000 UTC m=+24.201318783" Feb 9 05:24:49.677793 systemd[1]: Started cri-containerd-7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193.scope. Feb 9 05:24:49.717030 env[1166]: time="2024-02-09T05:24:49.716933178Z" level=info msg="StartContainer for \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\" returns successfully" Feb 9 05:24:49.721439 systemd[1]: cri-containerd-7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193.scope: Deactivated successfully. Feb 9 05:24:49.775993 env[1166]: time="2024-02-09T05:24:49.775859302Z" level=info msg="shim disconnected" id=7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193 Feb 9 05:24:49.775993 env[1166]: time="2024-02-09T05:24:49.775949305Z" level=warning msg="cleaning up after shim disconnected" id=7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193 namespace=k8s.io Feb 9 05:24:49.775993 env[1166]: time="2024-02-09T05:24:49.775975442Z" level=info msg="cleaning up dead shim" Feb 9 05:24:49.791756 env[1166]: time="2024-02-09T05:24:49.791648899Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:24:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2874 runtime=io.containerd.runc.v2\n" Feb 9 05:24:50.604020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193-rootfs.mount: Deactivated successfully. Feb 9 05:24:50.614619 env[1166]: time="2024-02-09T05:24:50.614550543Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 05:24:50.622850 env[1166]: time="2024-02-09T05:24:50.622762350Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\"" Feb 9 05:24:50.623370 env[1166]: time="2024-02-09T05:24:50.623300397Z" level=info msg="StartContainer for \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\"" Feb 9 05:24:50.648467 systemd[1]: Started cri-containerd-233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7.scope. Feb 9 05:24:50.672526 systemd[1]: cri-containerd-233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7.scope: Deactivated successfully. 
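[Editor's note] The mount-bpf-fs container that just finished above exists to mount the BPF filesystem the agent needs. Assuming the conventional /sys/fs/bpf mount point, its effect can be verified with:

    sudo mountpoint /sys/fs/bpf   # reports "is a mountpoint" once mount-bpf-fs has run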
Feb 9 05:24:50.672855 env[1166]: time="2024-02-09T05:24:50.672826649Z" level=info msg="StartContainer for \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\" returns successfully" Feb 9 05:24:50.684513 env[1166]: time="2024-02-09T05:24:50.684479427Z" level=info msg="shim disconnected" id=233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7 Feb 9 05:24:50.684678 env[1166]: time="2024-02-09T05:24:50.684513732Z" level=warning msg="cleaning up after shim disconnected" id=233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7 namespace=k8s.io Feb 9 05:24:50.684678 env[1166]: time="2024-02-09T05:24:50.684522532Z" level=info msg="cleaning up dead shim" Feb 9 05:24:50.690109 env[1166]: time="2024-02-09T05:24:50.690054213Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:24:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2928 runtime=io.containerd.runc.v2\n" Feb 9 05:24:51.607187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7-rootfs.mount: Deactivated successfully. Feb 9 05:24:51.613792 env[1166]: time="2024-02-09T05:24:51.613773440Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 05:24:51.620041 env[1166]: time="2024-02-09T05:24:51.620011915Z" level=info msg="CreateContainer within sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\"" Feb 9 05:24:51.620357 env[1166]: time="2024-02-09T05:24:51.620341853Z" level=info msg="StartContainer for \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\"" Feb 9 05:24:51.629787 systemd[1]: Started cri-containerd-fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2.scope. Feb 9 05:24:51.655614 env[1166]: time="2024-02-09T05:24:51.655584089Z" level=info msg="StartContainer for \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\" returns successfully" Feb 9 05:24:51.714649 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 9 05:24:51.760997 kubelet[2203]: I0209 05:24:51.760984 2203 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 05:24:51.772312 kubelet[2203]: I0209 05:24:51.772293 2203 topology_manager.go:215] "Topology Admit Handler" podUID="36002a85-45d6-4bd6-b584-82d8b7ad3f1a" podNamespace="kube-system" podName="coredns-5dd5756b68-lh45l" Feb 9 05:24:51.773071 kubelet[2203]: I0209 05:24:51.773060 2203 topology_manager.go:215] "Topology Admit Handler" podUID="97951dd9-c855-4667-a82e-7d1bcbaa55d2" podNamespace="kube-system" podName="coredns-5dd5756b68-pbxpz" Feb 9 05:24:51.775436 systemd[1]: Created slice kubepods-burstable-pod36002a85_45d6_4bd6_b584_82d8b7ad3f1a.slice. Feb 9 05:24:51.778012 systemd[1]: Created slice kubepods-burstable-pod97951dd9_c855_4667_a82e_7d1bcbaa55d2.slice. 
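[Editor's note] Taken together, the CreateContainer events for this sandbox trace the init-container chain of the Cilium DaemonSet before the long-running agent starts. A schematic of that ordering as implied by the log (an abridged sketch, not the actual Cilium 1.12 manifest):

    initContainers:
    - name: mount-cgroup              # ran at 05:24:46
    - name: apply-sysctl-overwrites   # ran at 05:24:48
    - name: mount-bpf-fs              # ran at 05:24:49
    - name: clean-cilium-state        # ran at 05:24:50
    containers:
    - name: cilium-agent              # started at 05:24:51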
Feb 9 05:24:51.791190 kubelet[2203]: I0209 05:24:51.791175 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97951dd9-c855-4667-a82e-7d1bcbaa55d2-config-volume\") pod \"coredns-5dd5756b68-pbxpz\" (UID: \"97951dd9-c855-4667-a82e-7d1bcbaa55d2\") " pod="kube-system/coredns-5dd5756b68-pbxpz" Feb 9 05:24:51.791292 kubelet[2203]: I0209 05:24:51.791196 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbdn\" (UniqueName: \"kubernetes.io/projected/97951dd9-c855-4667-a82e-7d1bcbaa55d2-kube-api-access-rpbdn\") pod \"coredns-5dd5756b68-pbxpz\" (UID: \"97951dd9-c855-4667-a82e-7d1bcbaa55d2\") " pod="kube-system/coredns-5dd5756b68-pbxpz" Feb 9 05:24:51.791292 kubelet[2203]: I0209 05:24:51.791213 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66m8h\" (UniqueName: \"kubernetes.io/projected/36002a85-45d6-4bd6-b584-82d8b7ad3f1a-kube-api-access-66m8h\") pod \"coredns-5dd5756b68-lh45l\" (UID: \"36002a85-45d6-4bd6-b584-82d8b7ad3f1a\") " pod="kube-system/coredns-5dd5756b68-lh45l" Feb 9 05:24:51.791292 kubelet[2203]: I0209 05:24:51.791226 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36002a85-45d6-4bd6-b584-82d8b7ad3f1a-config-volume\") pod \"coredns-5dd5756b68-lh45l\" (UID: \"36002a85-45d6-4bd6-b584-82d8b7ad3f1a\") " pod="kube-system/coredns-5dd5756b68-lh45l" Feb 9 05:24:51.858587 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 9 05:24:52.078204 env[1166]: time="2024-02-09T05:24:52.078072978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lh45l,Uid:36002a85-45d6-4bd6-b584-82d8b7ad3f1a,Namespace:kube-system,Attempt:0,}" Feb 9 05:24:52.080378 env[1166]: time="2024-02-09T05:24:52.080245128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pbxpz,Uid:97951dd9-c855-4667-a82e-7d1bcbaa55d2,Namespace:kube-system,Attempt:0,}" Feb 9 05:24:52.624463 kubelet[2203]: I0209 05:24:52.624442 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5bb7w" podStartSLOduration=7.298859817 podCreationTimestamp="2024-02-09 05:24:40 +0000 UTC" firstStartedPulling="2024-02-09 05:24:41.166406311 +0000 UTC m=+15.715579298" lastFinishedPulling="2024-02-09 05:24:46.491956943 +0000 UTC m=+21.041129924" observedRunningTime="2024-02-09 05:24:52.624115551 +0000 UTC m=+27.173288537" watchObservedRunningTime="2024-02-09 05:24:52.624410443 +0000 UTC m=+27.173583427" Feb 9 05:24:53.454326 systemd-networkd[1007]: cilium_host: Link UP Feb 9 05:24:53.454415 systemd-networkd[1007]: cilium_net: Link UP Feb 9 05:24:53.461458 systemd-networkd[1007]: cilium_net: Gained carrier Feb 9 05:24:53.468605 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 05:24:53.468665 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 05:24:53.468699 systemd-networkd[1007]: cilium_host: Gained carrier Feb 9 05:24:53.534232 systemd-networkd[1007]: cilium_vxlan: Link UP Feb 9 05:24:53.534238 systemd-networkd[1007]: cilium_vxlan: Gained carrier Feb 9 05:24:53.673672 kernel: NET: Registered PF_ALG protocol family Feb 9 05:24:54.005699 systemd-networkd[1007]: cilium_net: Gained IPv6LL Feb 9 
05:24:54.180938 systemd-networkd[1007]: lxc_health: Link UP Feb 9 05:24:54.204440 systemd-networkd[1007]: lxc_health: Gained carrier Feb 9 05:24:54.204601 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 05:24:54.388770 systemd-networkd[1007]: cilium_host: Gained IPv6LL Feb 9 05:24:54.630664 systemd-networkd[1007]: lxc3d357ec97c8d: Link UP Feb 9 05:24:54.672637 kernel: eth0: renamed from tmp1be4e Feb 9 05:24:54.685658 kernel: eth0: renamed from tmp0aef0 Feb 9 05:24:54.694812 systemd-networkd[1007]: lxcbd93dd435b4c: Link UP Feb 9 05:24:54.709038 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 05:24:54.709135 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3d357ec97c8d: link becomes ready Feb 9 05:24:54.709642 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbd93dd435b4c: link becomes ready Feb 9 05:24:54.717215 systemd-networkd[1007]: lxc3d357ec97c8d: Gained carrier Feb 9 05:24:54.717486 systemd-networkd[1007]: lxcbd93dd435b4c: Gained carrier Feb 9 05:24:55.156720 systemd-networkd[1007]: cilium_vxlan: Gained IPv6LL Feb 9 05:24:55.284694 systemd-networkd[1007]: lxc_health: Gained IPv6LL Feb 9 05:24:56.500710 systemd-networkd[1007]: lxc3d357ec97c8d: Gained IPv6LL Feb 9 05:24:56.756832 systemd-networkd[1007]: lxcbd93dd435b4c: Gained IPv6LL Feb 9 05:24:57.007464 env[1166]: time="2024-02-09T05:24:57.007398218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:24:57.007464 env[1166]: time="2024-02-09T05:24:57.007419573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:24:57.007464 env[1166]: time="2024-02-09T05:24:57.007401807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:24:57.007464 env[1166]: time="2024-02-09T05:24:57.007419523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:24:57.007464 env[1166]: time="2024-02-09T05:24:57.007426274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:24:57.007464 env[1166]: time="2024-02-09T05:24:57.007426283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:24:57.007771 env[1166]: time="2024-02-09T05:24:57.007487271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1be4e430dea4a0a31772405c16147a00fe8978e9497054a917b58bdb97948ef1 pid=3610 runtime=io.containerd.runc.v2 Feb 9 05:24:57.007771 env[1166]: time="2024-02-09T05:24:57.007494123Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0aef09dd7c68e0da924edd27f9856c962b070a729d7a1351d91afdbf48d274cb pid=3612 runtime=io.containerd.runc.v2 Feb 9 05:24:57.014755 systemd[1]: Started cri-containerd-0aef09dd7c68e0da924edd27f9856c962b070a729d7a1351d91afdbf48d274cb.scope. Feb 9 05:24:57.025603 systemd[1]: Started cri-containerd-1be4e430dea4a0a31772405c16147a00fe8978e9497054a917b58bdb97948ef1.scope. 
Feb 9 05:24:57.036950 env[1166]: time="2024-02-09T05:24:57.036924334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pbxpz,Uid:97951dd9-c855-4667-a82e-7d1bcbaa55d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aef09dd7c68e0da924edd27f9856c962b070a729d7a1351d91afdbf48d274cb\"" Feb 9 05:24:57.038126 env[1166]: time="2024-02-09T05:24:57.038110782Z" level=info msg="CreateContainer within sandbox \"0aef09dd7c68e0da924edd27f9856c962b070a729d7a1351d91afdbf48d274cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 05:24:57.042454 env[1166]: time="2024-02-09T05:24:57.042411867Z" level=info msg="CreateContainer within sandbox \"0aef09dd7c68e0da924edd27f9856c962b070a729d7a1351d91afdbf48d274cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"369253b2eca8f5e69f55633964b6b23abe2b013828c97695b282586ed7363506\"" Feb 9 05:24:57.042624 env[1166]: time="2024-02-09T05:24:57.042611558Z" level=info msg="StartContainer for \"369253b2eca8f5e69f55633964b6b23abe2b013828c97695b282586ed7363506\"" Feb 9 05:24:57.049632 systemd[1]: Started cri-containerd-369253b2eca8f5e69f55633964b6b23abe2b013828c97695b282586ed7363506.scope. Feb 9 05:24:57.058945 env[1166]: time="2024-02-09T05:24:57.058917040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lh45l,Uid:36002a85-45d6-4bd6-b584-82d8b7ad3f1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1be4e430dea4a0a31772405c16147a00fe8978e9497054a917b58bdb97948ef1\"" Feb 9 05:24:57.060151 env[1166]: time="2024-02-09T05:24:57.060133535Z" level=info msg="CreateContainer within sandbox \"1be4e430dea4a0a31772405c16147a00fe8978e9497054a917b58bdb97948ef1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 05:24:57.064323 env[1166]: time="2024-02-09T05:24:57.064301018Z" level=info msg="CreateContainer within sandbox \"1be4e430dea4a0a31772405c16147a00fe8978e9497054a917b58bdb97948ef1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23f818ad258b2c2ea38ee05d2c0989b7f0a501459de1dfc1d4ce0fc66209ac5a\"" Feb 9 05:24:57.064564 env[1166]: time="2024-02-09T05:24:57.064550868Z" level=info msg="StartContainer for \"23f818ad258b2c2ea38ee05d2c0989b7f0a501459de1dfc1d4ce0fc66209ac5a\"" Feb 9 05:24:57.080185 env[1166]: time="2024-02-09T05:24:57.080112351Z" level=info msg="StartContainer for \"369253b2eca8f5e69f55633964b6b23abe2b013828c97695b282586ed7363506\" returns successfully" Feb 9 05:24:57.115293 systemd[1]: Started cri-containerd-23f818ad258b2c2ea38ee05d2c0989b7f0a501459de1dfc1d4ce0fc66209ac5a.scope. 
Feb 9 05:24:57.152703 env[1166]: time="2024-02-09T05:24:57.152628017Z" level=info msg="StartContainer for \"23f818ad258b2c2ea38ee05d2c0989b7f0a501459de1dfc1d4ce0fc66209ac5a\" returns successfully" Feb 9 05:24:57.641228 kubelet[2203]: I0209 05:24:57.641204 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pbxpz" podStartSLOduration=17.641177919 podCreationTimestamp="2024-02-09 05:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 05:24:57.640952015 +0000 UTC m=+32.190125000" watchObservedRunningTime="2024-02-09 05:24:57.641177919 +0000 UTC m=+32.190350897" Feb 9 05:24:57.646660 kubelet[2203]: I0209 05:24:57.646641 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lh45l" podStartSLOduration=17.646616387999998 podCreationTimestamp="2024-02-09 05:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 05:24:57.646155155 +0000 UTC m=+32.195328139" watchObservedRunningTime="2024-02-09 05:24:57.646616388 +0000 UTC m=+32.195789369" Feb 9 05:25:05.397055 kubelet[2203]: I0209 05:25:05.396928 2203 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 9 05:28:10.086163 update_engine[1156]: I0209 05:28:10.086047 1156 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 05:28:10.086163 update_engine[1156]: I0209 05:28:10.086125 1156 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 05:28:10.087260 update_engine[1156]: I0209 05:28:10.086844 1156 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 05:28:10.087851 update_engine[1156]: I0209 05:28:10.087772 1156 omaha_request_params.cc:62] Current group set to lts Feb 9 05:28:10.088104 update_engine[1156]: I0209 05:28:10.088058 1156 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 05:28:10.088104 update_engine[1156]: I0209 05:28:10.088078 1156 update_attempter.cc:643] Scheduling an action processor start. 
Feb 9 05:28:10.088346 update_engine[1156]: I0209 05:28:10.088111 1156 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 05:28:10.088346 update_engine[1156]: I0209 05:28:10.088174 1156 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 05:28:10.088346 update_engine[1156]: I0209 05:28:10.088311 1156 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 05:28:10.088346 update_engine[1156]: I0209 05:28:10.088330 1156 omaha_request_action.cc:271] Request: <?xml version="1.0" encoding="UTF-8"?> Feb 9 05:28:10.088346 update_engine[1156]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Feb 9 05:28:10.088346 update_engine[1156]: <os version="Chateau" platform="CoreOS" sp="3510.3.2_x86_64"></os> Feb 9 05:28:10.088346 update_engine[1156]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="3510.3.2" track="lts" bootid="{d9def1ab-366b-4461-bdcf-c45f0835cf85}" oem="packet" oemversion="0.2.2" alephversion="3510.3.2" machineid="ac3eb69a92304cd0a10b2a7091309e90" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Feb 9 05:28:10.088346 update_engine[1156]: <ping active="1"></ping> Feb 9 05:28:10.088346 update_engine[1156]: <updatecheck></updatecheck> Feb 9 05:28:10.088346 update_engine[1156]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Feb 9 05:28:10.088346 update_engine[1156]: </app> Feb 9 05:28:10.088346 update_engine[1156]: </request> Feb 9 05:28:10.088346 update_engine[1156]: I0209 05:28:10.088339 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 05:28:10.089799 locksmithd[1188]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 05:28:10.091688 update_engine[1156]: I0209 05:28:10.091627 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 05:28:10.091893 update_engine[1156]: E0209 05:28:10.091861 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 05:28:10.092048 update_engine[1156]: I0209 05:28:10.092013 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 05:28:19.996162 update_engine[1156]: I0209 05:28:19.996081 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 05:28:19.996405 update_engine[1156]: I0209 05:28:19.996227 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 05:28:19.996405 update_engine[1156]: E0209 05:28:19.996282 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 05:28:19.996405 update_engine[1156]: I0209 05:28:19.996322 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 05:28:29.996571 update_engine[1156]: I0209 05:28:29.996455 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 05:28:29.997507 update_engine[1156]: I0209 05:28:29.996925 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 05:28:29.997507 update_engine[1156]: E0209 05:28:29.997118 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 05:28:29.997507 update_engine[1156]: I0209 05:28:29.997285 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 05:28:39.996520 update_engine[1156]: I0209 05:28:39.996470 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 05:28:39.996791 update_engine[1156]: I0209 
05:28:39.996613 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 05:28:39.996791 update_engine[1156]: E0209 05:28:39.996670 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 05:28:39.996791 update_engine[1156]: I0209 05:28:39.996709 1156 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 05:28:39.996791 update_engine[1156]: I0209 05:28:39.996713 1156 omaha_request_action.cc:621] Omaha request response: Feb 9 05:28:39.996791 update_engine[1156]: E0209 05:28:39.996755 1156 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 05:28:39.996791 update_engine[1156]: I0209 05:28:39.996762 1156 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 05:28:39.996791 update_engine[1156]: I0209 05:28:39.996765 1156 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 05:28:39.996791 update_engine[1156]: I0209 05:28:39.996767 1156 update_attempter.cc:306] Processing Done. Feb 9 05:28:39.996791 update_engine[1156]: E0209 05:28:39.996775 1156 update_attempter.cc:619] Update failed. Feb 9 05:28:39.996791 update_engine[1156]: I0209 05:28:39.996778 1156 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 05:28:39.996791 update_engine[1156]: I0209 05:28:39.996780 1156 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 05:28:39.996791 update_engine[1156]: I0209 05:28:39.996783 1156 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.996831 1156 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.996848 1156 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.996852 1156 omaha_request_action.cc:271] Request: <?xml version="1.0" encoding="UTF-8"?> Feb 9 05:28:39.997138 update_engine[1156]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Feb 9 05:28:39.997138 update_engine[1156]: <os version="Chateau" platform="CoreOS" sp="3510.3.2_x86_64"></os> Feb 9 05:28:39.997138 update_engine[1156]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="3510.3.2" track="lts" bootid="{d9def1ab-366b-4461-bdcf-c45f0835cf85}" oem="packet" oemversion="0.2.2" alephversion="3510.3.2" machineid="ac3eb69a92304cd0a10b2a7091309e90" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Feb 9 05:28:39.997138 update_engine[1156]: <event eventtype="3" eventresult="0" errorcode="268437456"></event> Feb 9 05:28:39.997138 update_engine[1156]: </app> Feb 9 05:28:39.997138 update_engine[1156]: </request> Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.996856 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.996948 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 05:28:39.997138 update_engine[1156]: E0209 05:28:39.996989 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.997020 1156 libcurl_http_fetcher.cc:297] Transfer resulted in an error 
(0), 0 bytes downloaded Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.997025 1156 omaha_request_action.cc:621] Omaha request response: Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.997026 1156 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.997028 1156 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.997030 1156 update_attempter.cc:306] Processing Done. Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.997032 1156 update_attempter.cc:310] Error event sent. Feb 9 05:28:39.997138 update_engine[1156]: I0209 05:28:39.997038 1156 update_check_scheduler.cc:74] Next update check in 42m39s Feb 9 05:28:39.997603 locksmithd[1188]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 05:28:39.997603 locksmithd[1188]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 05:34:29.507270 systemd[1]: Started sshd@5-147.75.90.151:22-85.209.11.27:57944.service. Feb 9 05:34:32.605687 sshd[3855]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=85.209.11.27 user=root Feb 9 05:34:34.916962 sshd[3855]: Failed password for root from 85.209.11.27 port 57944 ssh2 Feb 9 05:34:36.721289 sshd[3855]: Connection closed by authenticating user root 85.209.11.27 port 57944 [preauth] Feb 9 05:34:36.723896 systemd[1]: sshd@5-147.75.90.151:22-85.209.11.27:57944.service: Deactivated successfully. Feb 9 05:38:55.443002 systemd[1]: Starting systemd-tmpfiles-clean.service... Feb 9 05:38:55.449255 systemd-tmpfiles[3886]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 05:38:55.449555 systemd-tmpfiles[3886]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 05:38:55.450265 systemd-tmpfiles[3886]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 05:38:55.460930 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Feb 9 05:38:55.461022 systemd[1]: Finished systemd-tmpfiles-clean.service. Feb 9 05:38:55.462086 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Feb 9 05:45:25.485889 systemd[1]: Started sshd@6-147.75.90.151:22-147.75.109.163:60720.service. Feb 9 05:45:25.523276 sshd[3936]: Accepted publickey for core from 147.75.109.163 port 60720 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:25.524209 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:25.527474 systemd-logind[1154]: New session 8 of user core. Feb 9 05:45:25.528139 systemd[1]: Started session-8.scope. Feb 9 05:45:25.662893 sshd[3936]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:25.664316 systemd[1]: sshd@6-147.75.90.151:22-147.75.109.163:60720.service: Deactivated successfully. Feb 9 05:45:25.664740 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 05:45:25.665149 systemd-logind[1154]: Session 8 logged out. Waiting for processes to exit. Feb 9 05:45:25.665550 systemd-logind[1154]: Removed session 8. Feb 9 05:45:30.672847 systemd[1]: Started sshd@7-147.75.90.151:22-147.75.109.163:60732.service. 
Feb 9 05:45:30.710323 sshd[3967]: Accepted publickey for core from 147.75.109.163 port 60732 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:30.711201 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:30.714320 systemd-logind[1154]: New session 9 of user core. Feb 9 05:45:30.714932 systemd[1]: Started session-9.scope. Feb 9 05:45:30.804548 sshd[3967]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:30.806087 systemd[1]: sshd@7-147.75.90.151:22-147.75.109.163:60732.service: Deactivated successfully. Feb 9 05:45:30.806525 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 05:45:30.806943 systemd-logind[1154]: Session 9 logged out. Waiting for processes to exit. Feb 9 05:45:30.807445 systemd-logind[1154]: Removed session 9. Feb 9 05:45:35.814704 systemd[1]: Started sshd@8-147.75.90.151:22-147.75.109.163:56630.service. Feb 9 05:45:35.852021 sshd[3996]: Accepted publickey for core from 147.75.109.163 port 56630 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:35.852711 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:35.855115 systemd-logind[1154]: New session 10 of user core. Feb 9 05:45:35.855654 systemd[1]: Started session-10.scope. Feb 9 05:45:35.943233 sshd[3996]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:35.944610 systemd[1]: sshd@8-147.75.90.151:22-147.75.109.163:56630.service: Deactivated successfully. Feb 9 05:45:35.945018 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 05:45:35.945368 systemd-logind[1154]: Session 10 logged out. Waiting for processes to exit. Feb 9 05:45:35.945864 systemd-logind[1154]: Removed session 10. Feb 9 05:45:40.946773 systemd[1]: Started sshd@9-147.75.90.151:22-147.75.109.163:56634.service. Feb 9 05:45:41.011638 sshd[4022]: Accepted publickey for core from 147.75.109.163 port 56634 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:41.014062 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:41.021617 systemd-logind[1154]: New session 11 of user core. Feb 9 05:45:41.023390 systemd[1]: Started session-11.scope. Feb 9 05:45:41.168685 sshd[4022]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:41.170323 systemd[1]: sshd@9-147.75.90.151:22-147.75.109.163:56634.service: Deactivated successfully. Feb 9 05:45:41.170685 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 05:45:41.171024 systemd-logind[1154]: Session 11 logged out. Waiting for processes to exit. Feb 9 05:45:41.171551 systemd[1]: Started sshd@10-147.75.90.151:22-147.75.109.163:56642.service. Feb 9 05:45:41.171960 systemd-logind[1154]: Removed session 11. Feb 9 05:45:41.209020 sshd[4048]: Accepted publickey for core from 147.75.109.163 port 56642 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:41.209719 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:41.212273 systemd-logind[1154]: New session 12 of user core. Feb 9 05:45:41.212752 systemd[1]: Started session-12.scope. Feb 9 05:45:41.658258 sshd[4048]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:41.660508 systemd[1]: sshd@10-147.75.90.151:22-147.75.109.163:56642.service: Deactivated successfully. Feb 9 05:45:41.660920 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 05:45:41.661273 systemd-logind[1154]: Session 12 logged out. 
Waiting for processes to exit. Feb 9 05:45:41.662053 systemd[1]: Started sshd@11-147.75.90.151:22-147.75.109.163:56644.service. Feb 9 05:45:41.662427 systemd-logind[1154]: Removed session 12. Feb 9 05:45:41.699520 sshd[4075]: Accepted publickey for core from 147.75.109.163 port 56644 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:41.700340 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:41.702882 systemd-logind[1154]: New session 13 of user core. Feb 9 05:45:41.703377 systemd[1]: Started session-13.scope. Feb 9 05:45:41.846326 sshd[4075]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:41.848041 systemd[1]: sshd@11-147.75.90.151:22-147.75.109.163:56644.service: Deactivated successfully. Feb 9 05:45:41.848573 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 05:45:41.849051 systemd-logind[1154]: Session 13 logged out. Waiting for processes to exit. Feb 9 05:45:41.849583 systemd-logind[1154]: Removed session 13. Feb 9 05:45:46.855670 systemd[1]: Started sshd@12-147.75.90.151:22-147.75.109.163:57476.service. Feb 9 05:45:46.892985 sshd[4100]: Accepted publickey for core from 147.75.109.163 port 57476 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:46.893885 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:46.897088 systemd-logind[1154]: New session 14 of user core. Feb 9 05:45:46.897738 systemd[1]: Started session-14.scope. Feb 9 05:45:46.988959 sshd[4100]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:46.990366 systemd[1]: sshd@12-147.75.90.151:22-147.75.109.163:57476.service: Deactivated successfully. Feb 9 05:45:46.990810 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 05:45:46.991138 systemd-logind[1154]: Session 14 logged out. Waiting for processes to exit. Feb 9 05:45:46.991496 systemd-logind[1154]: Removed session 14. Feb 9 05:45:51.992019 systemd[1]: Started sshd@13-147.75.90.151:22-147.75.109.163:57482.service. Feb 9 05:45:52.030847 sshd[4125]: Accepted publickey for core from 147.75.109.163 port 57482 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:52.031700 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:52.034663 systemd-logind[1154]: New session 15 of user core. Feb 9 05:45:52.035392 systemd[1]: Started session-15.scope. Feb 9 05:45:52.125361 sshd[4125]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:52.127132 systemd[1]: sshd@13-147.75.90.151:22-147.75.109.163:57482.service: Deactivated successfully. Feb 9 05:45:52.127467 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 05:45:52.127833 systemd-logind[1154]: Session 15 logged out. Waiting for processes to exit. Feb 9 05:45:52.128439 systemd[1]: Started sshd@14-147.75.90.151:22-147.75.109.163:57488.service. Feb 9 05:45:52.128936 systemd-logind[1154]: Removed session 15. Feb 9 05:45:52.165959 sshd[4150]: Accepted publickey for core from 147.75.109.163 port 57488 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:52.166642 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:52.168969 systemd-logind[1154]: New session 16 of user core. Feb 9 05:45:52.169467 systemd[1]: Started session-16.scope. 
Feb 9 05:45:53.263198 sshd[4150]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:53.264936 systemd[1]: sshd@14-147.75.90.151:22-147.75.109.163:57488.service: Deactivated successfully. Feb 9 05:45:53.265328 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 05:45:53.265672 systemd-logind[1154]: Session 16 logged out. Waiting for processes to exit. Feb 9 05:45:53.266282 systemd[1]: Started sshd@15-147.75.90.151:22-147.75.109.163:57490.service. Feb 9 05:45:53.266838 systemd-logind[1154]: Removed session 16. Feb 9 05:45:53.304020 sshd[4172]: Accepted publickey for core from 147.75.109.163 port 57490 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:53.305050 sshd[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:53.308527 systemd-logind[1154]: New session 17 of user core. Feb 9 05:45:53.309369 systemd[1]: Started session-17.scope. Feb 9 05:45:54.112594 sshd[4172]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:54.114257 systemd[1]: sshd@15-147.75.90.151:22-147.75.109.163:57490.service: Deactivated successfully. Feb 9 05:45:54.114613 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 05:45:54.114971 systemd-logind[1154]: Session 17 logged out. Waiting for processes to exit. Feb 9 05:45:54.115537 systemd[1]: Started sshd@16-147.75.90.151:22-147.75.109.163:57502.service. Feb 9 05:45:54.116035 systemd-logind[1154]: Removed session 17. Feb 9 05:45:54.152557 sshd[4202]: Accepted publickey for core from 147.75.109.163 port 57502 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:54.153329 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:54.155766 systemd-logind[1154]: New session 18 of user core. Feb 9 05:45:54.156362 systemd[1]: Started session-18.scope. Feb 9 05:45:54.372554 sshd[4202]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:54.374602 systemd[1]: sshd@16-147.75.90.151:22-147.75.109.163:57502.service: Deactivated successfully. Feb 9 05:45:54.375004 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 05:45:54.375382 systemd-logind[1154]: Session 18 logged out. Waiting for processes to exit. Feb 9 05:45:54.376015 systemd[1]: Started sshd@17-147.75.90.151:22-147.75.109.163:57516.service. Feb 9 05:45:54.376505 systemd-logind[1154]: Removed session 18. Feb 9 05:45:54.414237 sshd[4232]: Accepted publickey for core from 147.75.109.163 port 57516 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:54.417530 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:54.428211 systemd-logind[1154]: New session 19 of user core. Feb 9 05:45:54.432072 systemd[1]: Started session-19.scope. Feb 9 05:45:54.595680 sshd[4232]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:54.597307 systemd[1]: sshd@17-147.75.90.151:22-147.75.109.163:57516.service: Deactivated successfully. Feb 9 05:45:54.597833 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 05:45:54.598249 systemd-logind[1154]: Session 19 logged out. Waiting for processes to exit. Feb 9 05:45:54.598699 systemd-logind[1154]: Removed session 19. Feb 9 05:45:59.605389 systemd[1]: Started sshd@18-147.75.90.151:22-147.75.109.163:50656.service. 
Feb 9 05:45:59.642994 sshd[4262]: Accepted publickey for core from 147.75.109.163 port 50656 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:45:59.644051 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:45:59.647142 systemd-logind[1154]: New session 20 of user core. Feb 9 05:45:59.648257 systemd[1]: Started session-20.scope. Feb 9 05:45:59.739776 sshd[4262]: pam_unix(sshd:session): session closed for user core Feb 9 05:45:59.745856 systemd[1]: sshd@18-147.75.90.151:22-147.75.109.163:50656.service: Deactivated successfully. Feb 9 05:45:59.748016 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 05:45:59.749947 systemd-logind[1154]: Session 20 logged out. Waiting for processes to exit. Feb 9 05:45:59.752065 systemd-logind[1154]: Removed session 20. Feb 9 05:46:04.749213 systemd[1]: Started sshd@19-147.75.90.151:22-147.75.109.163:39344.service. Feb 9 05:46:04.786956 sshd[4287]: Accepted publickey for core from 147.75.109.163 port 39344 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:46:04.787985 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:46:04.791489 systemd-logind[1154]: New session 21 of user core. Feb 9 05:46:04.792642 systemd[1]: Started session-21.scope. Feb 9 05:46:04.881815 sshd[4287]: pam_unix(sshd:session): session closed for user core Feb 9 05:46:04.883225 systemd[1]: sshd@19-147.75.90.151:22-147.75.109.163:39344.service: Deactivated successfully. Feb 9 05:46:04.883744 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 05:46:04.884156 systemd-logind[1154]: Session 21 logged out. Waiting for processes to exit. Feb 9 05:46:04.884532 systemd-logind[1154]: Removed session 21. Feb 9 05:46:09.891799 systemd[1]: Started sshd@20-147.75.90.151:22-147.75.109.163:39348.service. Feb 9 05:46:09.958310 sshd[4313]: Accepted publickey for core from 147.75.109.163 port 39348 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:46:09.961599 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:46:09.971921 systemd-logind[1154]: New session 22 of user core. Feb 9 05:46:09.975237 systemd[1]: Started session-22.scope. Feb 9 05:46:10.061744 sshd[4313]: pam_unix(sshd:session): session closed for user core Feb 9 05:46:10.063109 systemd[1]: sshd@20-147.75.90.151:22-147.75.109.163:39348.service: Deactivated successfully. Feb 9 05:46:10.063543 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 05:46:10.063938 systemd-logind[1154]: Session 22 logged out. Waiting for processes to exit. Feb 9 05:46:10.064433 systemd-logind[1154]: Removed session 22. Feb 9 05:46:15.071674 systemd[1]: Started sshd@21-147.75.90.151:22-147.75.109.163:53552.service. Feb 9 05:46:15.108996 sshd[4340]: Accepted publickey for core from 147.75.109.163 port 53552 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:46:15.109746 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:46:15.112339 systemd-logind[1154]: New session 23 of user core. Feb 9 05:46:15.112854 systemd[1]: Started session-23.scope. Feb 9 05:46:15.200197 sshd[4340]: pam_unix(sshd:session): session closed for user core Feb 9 05:46:15.201840 systemd[1]: sshd@21-147.75.90.151:22-147.75.109.163:53552.service: Deactivated successfully. Feb 9 05:46:15.202178 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 05:46:15.202513 systemd-logind[1154]: Session 23 logged out. 
Waiting for processes to exit. Feb 9 05:46:15.203095 systemd[1]: Started sshd@22-147.75.90.151:22-147.75.109.163:53558.service. Feb 9 05:46:15.203462 systemd-logind[1154]: Removed session 23. Feb 9 05:46:15.240512 sshd[4365]: Accepted publickey for core from 147.75.109.163 port 53558 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:46:15.241405 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:46:15.244385 systemd-logind[1154]: New session 24 of user core. Feb 9 05:46:15.245032 systemd[1]: Started session-24.scope. Feb 9 05:46:16.624469 env[1166]: time="2024-02-09T05:46:16.624347412Z" level=info msg="StopContainer for \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\" with timeout 30 (s)" Feb 9 05:46:16.625433 env[1166]: time="2024-02-09T05:46:16.625091633Z" level=info msg="Stop container \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\" with signal terminated" Feb 9 05:46:16.653503 systemd[1]: cri-containerd-e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8.scope: Deactivated successfully. Feb 9 05:46:16.653856 systemd[1]: cri-containerd-e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8.scope: Consumed 2.471s CPU time. Feb 9 05:46:16.666455 env[1166]: time="2024-02-09T05:46:16.666370031Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 05:46:16.670980 env[1166]: time="2024-02-09T05:46:16.670929217Z" level=info msg="StopContainer for \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\" with timeout 2 (s)" Feb 9 05:46:16.671194 env[1166]: time="2024-02-09T05:46:16.671147776Z" level=info msg="Stop container \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\" with signal terminated" Feb 9 05:46:16.681383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8-rootfs.mount: Deactivated successfully. 
Feb 9 05:46:16.688632 env[1166]: time="2024-02-09T05:46:16.688588365Z" level=info msg="shim disconnected" id=e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8 Feb 9 05:46:16.688784 env[1166]: time="2024-02-09T05:46:16.688634945Z" level=warning msg="cleaning up after shim disconnected" id=e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8 namespace=k8s.io Feb 9 05:46:16.688784 env[1166]: time="2024-02-09T05:46:16.688647464Z" level=info msg="cleaning up dead shim" Feb 9 05:46:16.688816 systemd-networkd[1007]: lxc_health: Link DOWN Feb 9 05:46:16.688822 systemd-networkd[1007]: lxc_health: Lost carrier Feb 9 05:46:16.695079 env[1166]: time="2024-02-09T05:46:16.695022710Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4432 runtime=io.containerd.runc.v2\n" Feb 9 05:46:16.696182 env[1166]: time="2024-02-09T05:46:16.696123901Z" level=info msg="StopContainer for \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\" returns successfully" Feb 9 05:46:16.696682 env[1166]: time="2024-02-09T05:46:16.696626085Z" level=info msg="StopPodSandbox for \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\"" Feb 9 05:46:16.696759 env[1166]: time="2024-02-09T05:46:16.696685485Z" level=info msg="Container to stop \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 05:46:16.698484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675-shm.mount: Deactivated successfully. Feb 9 05:46:16.715547 systemd[1]: cri-containerd-094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675.scope: Deactivated successfully. Feb 9 05:46:16.755306 systemd[1]: cri-containerd-fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2.scope: Deactivated successfully. Feb 9 05:46:16.756030 systemd[1]: cri-containerd-fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2.scope: Consumed 11.665s CPU time. Feb 9 05:46:16.776392 env[1166]: time="2024-02-09T05:46:16.776270386Z" level=info msg="shim disconnected" id=094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675 Feb 9 05:46:16.776869 env[1166]: time="2024-02-09T05:46:16.776395710Z" level=warning msg="cleaning up after shim disconnected" id=094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675 namespace=k8s.io Feb 9 05:46:16.776869 env[1166]: time="2024-02-09T05:46:16.776439117Z" level=info msg="cleaning up dead shim" Feb 9 05:46:16.777211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675-rootfs.mount: Deactivated successfully. 
Feb 9 05:46:16.793106 env[1166]: time="2024-02-09T05:46:16.793019420Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4474 runtime=io.containerd.runc.v2\n" Feb 9 05:46:16.793718 env[1166]: time="2024-02-09T05:46:16.793659142Z" level=info msg="TearDown network for sandbox \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\" successfully" Feb 9 05:46:16.793907 env[1166]: time="2024-02-09T05:46:16.793712971Z" level=info msg="StopPodSandbox for \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\" returns successfully" Feb 9 05:46:16.818554 env[1166]: time="2024-02-09T05:46:16.818432694Z" level=info msg="shim disconnected" id=fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2 Feb 9 05:46:16.818935 env[1166]: time="2024-02-09T05:46:16.818567075Z" level=warning msg="cleaning up after shim disconnected" id=fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2 namespace=k8s.io Feb 9 05:46:16.818935 env[1166]: time="2024-02-09T05:46:16.818629551Z" level=info msg="cleaning up dead shim" Feb 9 05:46:16.819321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2-rootfs.mount: Deactivated successfully. Feb 9 05:46:16.835066 env[1166]: time="2024-02-09T05:46:16.834942059Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4492 runtime=io.containerd.runc.v2\n" Feb 9 05:46:16.837002 env[1166]: time="2024-02-09T05:46:16.836935601Z" level=info msg="StopContainer for \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\" returns successfully" Feb 9 05:46:16.837852 env[1166]: time="2024-02-09T05:46:16.837789738Z" level=info msg="StopPodSandbox for \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\"" Feb 9 05:46:16.838050 env[1166]: time="2024-02-09T05:46:16.837918873Z" level=info msg="Container to stop \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 05:46:16.838050 env[1166]: time="2024-02-09T05:46:16.837961404Z" level=info msg="Container to stop \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 05:46:16.838050 env[1166]: time="2024-02-09T05:46:16.837991443Z" level=info msg="Container to stop \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 05:46:16.838050 env[1166]: time="2024-02-09T05:46:16.838020474Z" level=info msg="Container to stop \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 05:46:16.838542 env[1166]: time="2024-02-09T05:46:16.838046754Z" level=info msg="Container to stop \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 05:46:16.850871 systemd[1]: cri-containerd-bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33.scope: Deactivated successfully. 
Feb 9 05:46:16.851687 kubelet[2203]: I0209 05:46:16.851630 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0262ec75-64c5-44dd-9532-2b872c3c571f-cilium-config-path\") pod \"0262ec75-64c5-44dd-9532-2b872c3c571f\" (UID: \"0262ec75-64c5-44dd-9532-2b872c3c571f\") " Feb 9 05:46:16.852531 kubelet[2203]: I0209 05:46:16.851735 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kszrf\" (UniqueName: \"kubernetes.io/projected/0262ec75-64c5-44dd-9532-2b872c3c571f-kube-api-access-kszrf\") pod \"0262ec75-64c5-44dd-9532-2b872c3c571f\" (UID: \"0262ec75-64c5-44dd-9532-2b872c3c571f\") " Feb 9 05:46:16.856557 kubelet[2203]: I0209 05:46:16.856495 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0262ec75-64c5-44dd-9532-2b872c3c571f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0262ec75-64c5-44dd-9532-2b872c3c571f" (UID: "0262ec75-64c5-44dd-9532-2b872c3c571f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 05:46:16.857749 kubelet[2203]: I0209 05:46:16.857682 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0262ec75-64c5-44dd-9532-2b872c3c571f-kube-api-access-kszrf" (OuterVolumeSpecName: "kube-api-access-kszrf") pod "0262ec75-64c5-44dd-9532-2b872c3c571f" (UID: "0262ec75-64c5-44dd-9532-2b872c3c571f"). InnerVolumeSpecName "kube-api-access-kszrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 05:46:16.900752 env[1166]: time="2024-02-09T05:46:16.900503335Z" level=info msg="shim disconnected" id=bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33 Feb 9 05:46:16.900752 env[1166]: time="2024-02-09T05:46:16.900666437Z" level=warning msg="cleaning up after shim disconnected" id=bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33 namespace=k8s.io Feb 9 05:46:16.900752 env[1166]: time="2024-02-09T05:46:16.900713554Z" level=info msg="cleaning up dead shim" Feb 9 05:46:16.917238 env[1166]: time="2024-02-09T05:46:16.917157108Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4523 runtime=io.containerd.runc.v2\n" Feb 9 05:46:16.917849 env[1166]: time="2024-02-09T05:46:16.917791009Z" level=info msg="TearDown network for sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" successfully" Feb 9 05:46:16.918035 env[1166]: time="2024-02-09T05:46:16.917844771Z" level=info msg="StopPodSandbox for \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" returns successfully" Feb 9 05:46:16.952502 kubelet[2203]: I0209 05:46:16.952445 2203 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kszrf\" (UniqueName: \"kubernetes.io/projected/0262ec75-64c5-44dd-9532-2b872c3c571f-kube-api-access-kszrf\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:16.952502 kubelet[2203]: I0209 05:46:16.952504 2203 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0262ec75-64c5-44dd-9532-2b872c3c571f-cilium-config-path\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.053380 kubelet[2203]: I0209 05:46:17.053297 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/2408f292-2250-4987-b43a-40c443d8686d-clustermesh-secrets\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.053796 kubelet[2203]: I0209 05:46:17.053435 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-etc-cni-netd\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.053796 kubelet[2203]: I0209 05:46:17.053540 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-host-proc-sys-kernel\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.053796 kubelet[2203]: I0209 05:46:17.053565 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.053796 kubelet[2203]: I0209 05:46:17.053664 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-host-proc-sys-net\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.053796 kubelet[2203]: I0209 05:46:17.053709 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.054928 kubelet[2203]: I0209 05:46:17.053784 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpwwj\" (UniqueName: \"kubernetes.io/projected/2408f292-2250-4987-b43a-40c443d8686d-kube-api-access-wpwwj\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.054928 kubelet[2203]: I0209 05:46:17.053826 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.054928 kubelet[2203]: I0209 05:46:17.053880 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-hostproc\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.054928 kubelet[2203]: I0209 05:46:17.053941 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-hostproc" (OuterVolumeSpecName: "hostproc") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.054928 kubelet[2203]: I0209 05:46:17.053989 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cilium-cgroup\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.054928 kubelet[2203]: I0209 05:46:17.054087 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cilium-run\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.055827 kubelet[2203]: I0209 05:46:17.054059 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.055827 kubelet[2203]: I0209 05:46:17.054155 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.055827 kubelet[2203]: I0209 05:46:17.054191 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-bpf-maps\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.055827 kubelet[2203]: I0209 05:46:17.054286 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-lib-modules\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.055827 kubelet[2203]: I0209 05:46:17.054283 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.056370 kubelet[2203]: I0209 05:46:17.054328 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.056370 kubelet[2203]: I0209 05:46:17.054390 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cni-path\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.056370 kubelet[2203]: I0209 05:46:17.054442 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cni-path" (OuterVolumeSpecName: "cni-path") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.056370 kubelet[2203]: I0209 05:46:17.054501 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2408f292-2250-4987-b43a-40c443d8686d-hubble-tls\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.056370 kubelet[2203]: I0209 05:46:17.054626 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2408f292-2250-4987-b43a-40c443d8686d-cilium-config-path\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.056370 kubelet[2203]: I0209 05:46:17.054746 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-xtables-lock\") pod \"2408f292-2250-4987-b43a-40c443d8686d\" (UID: \"2408f292-2250-4987-b43a-40c443d8686d\") " Feb 9 05:46:17.056998 kubelet[2203]: I0209 05:46:17.054854 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:17.056998 kubelet[2203]: I0209 05:46:17.054882 2203 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-etc-cni-netd\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.056998 kubelet[2203]: I0209 05:46:17.054977 2203 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.056998 kubelet[2203]: I0209 05:46:17.055041 2203 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-host-proc-sys-net\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.056998 kubelet[2203]: I0209 05:46:17.055102 2203 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-hostproc\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.056998 kubelet[2203]: I0209 05:46:17.055158 2203 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cilium-cgroup\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.056998 kubelet[2203]: I0209 05:46:17.055212 2203 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cilium-run\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.057721 kubelet[2203]: I0209 05:46:17.055276 2203 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-bpf-maps\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.057721 kubelet[2203]: I0209 05:46:17.055331 2203 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-lib-modules\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.057721 kubelet[2203]: I0209 05:46:17.055387 2203 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-cni-path\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.060000 kubelet[2203]: I0209 05:46:17.059901 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2408f292-2250-4987-b43a-40c443d8686d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 05:46:17.060225 kubelet[2203]: I0209 05:46:17.059993 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2408f292-2250-4987-b43a-40c443d8686d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 05:46:17.060362 kubelet[2203]: I0209 05:46:17.060235 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2408f292-2250-4987-b43a-40c443d8686d-kube-api-access-wpwwj" (OuterVolumeSpecName: "kube-api-access-wpwwj") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "kube-api-access-wpwwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 05:46:17.060828 kubelet[2203]: I0209 05:46:17.060733 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2408f292-2250-4987-b43a-40c443d8686d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2408f292-2250-4987-b43a-40c443d8686d" (UID: "2408f292-2250-4987-b43a-40c443d8686d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 05:46:17.156674 kubelet[2203]: I0209 05:46:17.156420 2203 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2408f292-2250-4987-b43a-40c443d8686d-xtables-lock\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.156674 kubelet[2203]: I0209 05:46:17.156515 2203 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2408f292-2250-4987-b43a-40c443d8686d-clustermesh-secrets\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.156674 kubelet[2203]: I0209 05:46:17.156618 2203 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wpwwj\" (UniqueName: \"kubernetes.io/projected/2408f292-2250-4987-b43a-40c443d8686d-kube-api-access-wpwwj\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.156674 kubelet[2203]: I0209 05:46:17.156684 2203 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2408f292-2250-4987-b43a-40c443d8686d-hubble-tls\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.157307 kubelet[2203]: I0209 05:46:17.156763 2203 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2408f292-2250-4987-b43a-40c443d8686d-cilium-config-path\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:17.472291 kubelet[2203]: I0209 05:46:17.472219 2203 scope.go:117] "RemoveContainer" containerID="fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2" Feb 9 05:46:17.475378 env[1166]: time="2024-02-09T05:46:17.475274639Z" level=info msg="RemoveContainer for \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\"" Feb 9 05:46:17.478773 systemd[1]: Removed slice kubepods-burstable-pod2408f292_2250_4987_b43a_40c443d8686d.slice. Feb 9 05:46:17.478859 systemd[1]: kubepods-burstable-pod2408f292_2250_4987_b43a_40c443d8686d.slice: Consumed 11.743s CPU time. 
Feb 9 05:46:17.478908 env[1166]: time="2024-02-09T05:46:17.478829091Z" level=info msg="RemoveContainer for \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\" returns successfully" Feb 9 05:46:17.478959 kubelet[2203]: I0209 05:46:17.478947 2203 scope.go:117] "RemoveContainer" containerID="233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7" Feb 9 05:46:17.479452 env[1166]: time="2024-02-09T05:46:17.479434426Z" level=info msg="RemoveContainer for \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\"" Feb 9 05:46:17.479459 systemd[1]: Removed slice kubepods-besteffort-pod0262ec75_64c5_44dd_9532_2b872c3c571f.slice. Feb 9 05:46:17.479537 systemd[1]: kubepods-besteffort-pod0262ec75_64c5_44dd_9532_2b872c3c571f.slice: Consumed 2.498s CPU time. Feb 9 05:46:17.480469 env[1166]: time="2024-02-09T05:46:17.480455967Z" level=info msg="RemoveContainer for \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\" returns successfully" Feb 9 05:46:17.480525 kubelet[2203]: I0209 05:46:17.480518 2203 scope.go:117] "RemoveContainer" containerID="7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193" Feb 9 05:46:17.480970 env[1166]: time="2024-02-09T05:46:17.480955966Z" level=info msg="RemoveContainer for \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\"" Feb 9 05:46:17.481959 env[1166]: time="2024-02-09T05:46:17.481946659Z" level=info msg="RemoveContainer for \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\" returns successfully" Feb 9 05:46:17.482047 kubelet[2203]: I0209 05:46:17.482037 2203 scope.go:117] "RemoveContainer" containerID="d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9" Feb 9 05:46:17.482454 env[1166]: time="2024-02-09T05:46:17.482442501Z" level=info msg="RemoveContainer for \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\"" Feb 9 05:46:17.483425 env[1166]: time="2024-02-09T05:46:17.483412503Z" level=info msg="RemoveContainer for \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\" returns successfully" Feb 9 05:46:17.483475 kubelet[2203]: I0209 05:46:17.483469 2203 scope.go:117] "RemoveContainer" containerID="f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661" Feb 9 05:46:17.483948 env[1166]: time="2024-02-09T05:46:17.483936736Z" level=info msg="RemoveContainer for \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\"" Feb 9 05:46:17.484991 env[1166]: time="2024-02-09T05:46:17.484976917Z" level=info msg="RemoveContainer for \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\" returns successfully" Feb 9 05:46:17.485094 kubelet[2203]: I0209 05:46:17.485085 2203 scope.go:117] "RemoveContainer" containerID="fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2" Feb 9 05:46:17.485215 env[1166]: time="2024-02-09T05:46:17.485174284Z" level=error msg="ContainerStatus for \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\": not found" Feb 9 05:46:17.485281 kubelet[2203]: E0209 05:46:17.485274 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\": not found" containerID="fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2" Feb 9 
05:46:17.485326 kubelet[2203]: I0209 05:46:17.485321 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2"} err="failed to get container status \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"fde55f59728c88950188e4a9373730124eec04e1e0d8f4a7af3625ebdea04cc2\": not found" Feb 9 05:46:17.485353 kubelet[2203]: I0209 05:46:17.485329 2203 scope.go:117] "RemoveContainer" containerID="233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7" Feb 9 05:46:17.485433 env[1166]: time="2024-02-09T05:46:17.485408068Z" level=error msg="ContainerStatus for \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\": not found" Feb 9 05:46:17.485483 kubelet[2203]: E0209 05:46:17.485476 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\": not found" containerID="233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7" Feb 9 05:46:17.485507 kubelet[2203]: I0209 05:46:17.485494 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7"} err="failed to get container status \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"233e38c82a6c68646925524b1a7dbf0fbcbc0cd6342d1e4b53ee1f5debd895b7\": not found" Feb 9 05:46:17.485507 kubelet[2203]: I0209 05:46:17.485501 2203 scope.go:117] "RemoveContainer" containerID="7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193" Feb 9 05:46:17.485612 env[1166]: time="2024-02-09T05:46:17.485586547Z" level=error msg="ContainerStatus for \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\": not found" Feb 9 05:46:17.485655 kubelet[2203]: E0209 05:46:17.485649 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\": not found" containerID="7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193" Feb 9 05:46:17.485683 kubelet[2203]: I0209 05:46:17.485662 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193"} err="failed to get container status \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b94bb10ef3873c267b4254dc9288bb1915f6c49e06938d68bd515a93df3d193\": not found" Feb 9 05:46:17.485683 kubelet[2203]: I0209 05:46:17.485670 2203 scope.go:117] "RemoveContainer" containerID="d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9" Feb 9 05:46:17.485773 env[1166]: time="2024-02-09T05:46:17.485741614Z" level=error 
msg="ContainerStatus for \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\": not found" Feb 9 05:46:17.485813 kubelet[2203]: E0209 05:46:17.485809 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\": not found" containerID="d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9" Feb 9 05:46:17.485840 kubelet[2203]: I0209 05:46:17.485821 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9"} err="failed to get container status \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d329b06513e1afa8e969af246f52a5a35ac4165510e005f3468cb890ecb8c1e9\": not found" Feb 9 05:46:17.485840 kubelet[2203]: I0209 05:46:17.485827 2203 scope.go:117] "RemoveContainer" containerID="f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661" Feb 9 05:46:17.485919 env[1166]: time="2024-02-09T05:46:17.485896073Z" level=error msg="ContainerStatus for \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\": not found" Feb 9 05:46:17.485982 kubelet[2203]: E0209 05:46:17.485970 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\": not found" containerID="f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661" Feb 9 05:46:17.485982 kubelet[2203]: I0209 05:46:17.485981 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661"} err="failed to get container status \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7d3442abcab9fd161592818f22be47de79e4f56e6ccda66b337f5c80bf93661\": not found" Feb 9 05:46:17.486046 kubelet[2203]: I0209 05:46:17.485986 2203 scope.go:117] "RemoveContainer" containerID="e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8" Feb 9 05:46:17.486491 env[1166]: time="2024-02-09T05:46:17.486477296Z" level=info msg="RemoveContainer for \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\"" Feb 9 05:46:17.487562 env[1166]: time="2024-02-09T05:46:17.487549885Z" level=info msg="RemoveContainer for \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\" returns successfully" Feb 9 05:46:17.487621 kubelet[2203]: I0209 05:46:17.487613 2203 scope.go:117] "RemoveContainer" containerID="e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8" Feb 9 05:46:17.487750 env[1166]: time="2024-02-09T05:46:17.487720361Z" level=error msg="ContainerStatus for \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\": not found" Feb 9 05:46:17.487832 kubelet[2203]: E0209 05:46:17.487825 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\": not found" containerID="e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8" Feb 9 05:46:17.487873 kubelet[2203]: I0209 05:46:17.487847 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8"} err="failed to get container status \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4344695430d48554ccbbfc970f95806ddb40a851f34611a59a0fa6afd61d5b8\": not found" Feb 9 05:46:17.542774 kubelet[2203]: I0209 05:46:17.542729 2203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0262ec75-64c5-44dd-9532-2b872c3c571f" path="/var/lib/kubelet/pods/0262ec75-64c5-44dd-9532-2b872c3c571f/volumes" Feb 9 05:46:17.543923 kubelet[2203]: I0209 05:46:17.543893 2203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2408f292-2250-4987-b43a-40c443d8686d" path="/var/lib/kubelet/pods/2408f292-2250-4987-b43a-40c443d8686d/volumes" Feb 9 05:46:17.655727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33-rootfs.mount: Deactivated successfully. Feb 9 05:46:17.655988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33-shm.mount: Deactivated successfully. Feb 9 05:46:17.656203 systemd[1]: var-lib-kubelet-pods-2408f292\x2d2250\x2d4987\x2db43a\x2d40c443d8686d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpwwj.mount: Deactivated successfully. Feb 9 05:46:17.656296 systemd[1]: var-lib-kubelet-pods-0262ec75\x2d64c5\x2d44dd\x2d9532\x2d2b872c3c571f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkszrf.mount: Deactivated successfully. Feb 9 05:46:17.656342 systemd[1]: var-lib-kubelet-pods-2408f292\x2d2250\x2d4987\x2db43a\x2d40c443d8686d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 05:46:17.656375 systemd[1]: var-lib-kubelet-pods-2408f292\x2d2250\x2d4987\x2db43a\x2d40c443d8686d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 05:46:18.565228 sshd[4365]: pam_unix(sshd:session): session closed for user core Feb 9 05:46:18.572366 systemd[1]: sshd@22-147.75.90.151:22-147.75.109.163:53558.service: Deactivated successfully. Feb 9 05:46:18.573281 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 05:46:18.573664 systemd-logind[1154]: Session 24 logged out. Waiting for processes to exit. Feb 9 05:46:18.574170 systemd[1]: Started sshd@23-147.75.90.151:22-147.75.109.163:53572.service. Feb 9 05:46:18.574541 systemd-logind[1154]: Removed session 24. Feb 9 05:46:18.611878 sshd[4540]: Accepted publickey for core from 147.75.109.163 port 53572 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:46:18.612901 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:46:18.616174 systemd-logind[1154]: New session 25 of user core. Feb 9 05:46:18.616850 systemd[1]: Started session-25.scope. 
Feb 9 05:46:18.910710 sshd[4540]: pam_unix(sshd:session): session closed for user core Feb 9 05:46:18.912667 systemd[1]: sshd@23-147.75.90.151:22-147.75.109.163:53572.service: Deactivated successfully. Feb 9 05:46:18.913074 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 05:46:18.913447 systemd-logind[1154]: Session 25 logged out. Waiting for processes to exit. Feb 9 05:46:18.914179 systemd[1]: Started sshd@24-147.75.90.151:22-147.75.109.163:53580.service. Feb 9 05:46:18.914612 systemd-logind[1154]: Removed session 25. Feb 9 05:46:18.922317 kubelet[2203]: I0209 05:46:18.922292 2203 topology_manager.go:215] "Topology Admit Handler" podUID="b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" podNamespace="kube-system" podName="cilium-v26ng" Feb 9 05:46:18.922552 kubelet[2203]: E0209 05:46:18.922338 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2408f292-2250-4987-b43a-40c443d8686d" containerName="apply-sysctl-overwrites" Feb 9 05:46:18.922552 kubelet[2203]: E0209 05:46:18.922345 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2408f292-2250-4987-b43a-40c443d8686d" containerName="cilium-agent" Feb 9 05:46:18.922552 kubelet[2203]: E0209 05:46:18.922349 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2408f292-2250-4987-b43a-40c443d8686d" containerName="mount-cgroup" Feb 9 05:46:18.922552 kubelet[2203]: E0209 05:46:18.922353 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0262ec75-64c5-44dd-9532-2b872c3c571f" containerName="cilium-operator" Feb 9 05:46:18.922552 kubelet[2203]: E0209 05:46:18.922357 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2408f292-2250-4987-b43a-40c443d8686d" containerName="mount-bpf-fs" Feb 9 05:46:18.922552 kubelet[2203]: E0209 05:46:18.922363 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2408f292-2250-4987-b43a-40c443d8686d" containerName="clean-cilium-state" Feb 9 05:46:18.922552 kubelet[2203]: I0209 05:46:18.922380 2203 memory_manager.go:346] "RemoveStaleState removing state" podUID="0262ec75-64c5-44dd-9532-2b872c3c571f" containerName="cilium-operator" Feb 9 05:46:18.922552 kubelet[2203]: I0209 05:46:18.922385 2203 memory_manager.go:346] "RemoveStaleState removing state" podUID="2408f292-2250-4987-b43a-40c443d8686d" containerName="cilium-agent" Feb 9 05:46:18.925637 systemd[1]: Created slice kubepods-burstable-podb8dd4fad_0647_4a7e_ba8a_e31d1ae41001.slice. Feb 9 05:46:18.952943 sshd[4564]: Accepted publickey for core from 147.75.109.163 port 53580 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:46:18.953750 sshd[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:46:18.956220 systemd-logind[1154]: New session 26 of user core. Feb 9 05:46:18.956657 systemd[1]: Started session-26.scope. 
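The RemoveStaleState entries fire as part of admitting the replacement pod cilium-v26ng: kubelet's CPU and memory managers walk their checkpointed per-container state and drop entries whose pod UID no longer exists (here the removed cilium agent 2408f292-… and the cilium-operator 0262ec75-…). The underlying pattern is a prune of a state map against the set of live pods; the following is an illustrative sketch of that pattern only, not kubelet's actual data structures:

package main

import "fmt"

type key struct{ podUID, container string }

// pruneStale drops resource-manager state for containers whose pod is
// no longer active — the idea behind the RemoveStaleState entries
// above. Kubelet's real managers keep richer per-NUMA assignments.
func pruneStale(state map[key]string, active map[string]bool) {
	for k := range state {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"2408f292-2250-4987-b43a-40c443d8686d", "cilium-agent"}: "cpuset 2-3",
		{"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001", "mount-cgroup"}: "cpuset 0-1",
	}
	active := map[string]bool{"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001": true}
	pruneStale(state, active)
}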
Feb 9 05:46:18.970874 kubelet[2203]: I0209 05:46:18.970822 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-cgroup\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.971139 kubelet[2203]: I0209 05:46:18.971030 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrdqg\" (UniqueName: \"kubernetes.io/projected/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-kube-api-access-nrdqg\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.971139 kubelet[2203]: I0209 05:46:18.971134 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-clustermesh-secrets\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.971520 kubelet[2203]: I0209 05:46:18.971208 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-bpf-maps\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.971520 kubelet[2203]: I0209 05:46:18.971367 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-etc-cni-netd\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.971928 kubelet[2203]: I0209 05:46:18.971661 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cni-path\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.971928 kubelet[2203]: I0209 05:46:18.971814 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-host-proc-sys-net\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.972284 kubelet[2203]: I0209 05:46:18.971941 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-host-proc-sys-kernel\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.972284 kubelet[2203]: I0209 05:46:18.972148 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-run\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.972682 kubelet[2203]: I0209 05:46:18.972391 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-hubble-tls\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.972682 kubelet[2203]: I0209 05:46:18.972572 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-xtables-lock\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.972926 kubelet[2203]: I0209 05:46:18.972795 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-config-path\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.972926 kubelet[2203]: I0209 05:46:18.972911 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-ipsec-secrets\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.973237 kubelet[2203]: I0209 05:46:18.973019 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-hostproc\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:18.973237 kubelet[2203]: I0209 05:46:18.973121 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-lib-modules\") pod \"cilium-v26ng\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " pod="kube-system/cilium-v26ng" Feb 9 05:46:19.097131 sshd[4564]: pam_unix(sshd:session): session closed for user core Feb 9 05:46:19.098819 systemd[1]: sshd@24-147.75.90.151:22-147.75.109.163:53580.service: Deactivated successfully. Feb 9 05:46:19.099267 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 05:46:19.099723 systemd-logind[1154]: Session 26 logged out. Waiting for processes to exit. Feb 9 05:46:19.100311 systemd[1]: Started sshd@25-147.75.90.151:22-147.75.109.163:53582.service. Feb 9 05:46:19.100770 systemd-logind[1154]: Removed session 26. Feb 9 05:46:19.103341 env[1166]: time="2024-02-09T05:46:19.103314389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v26ng,Uid:b8dd4fad-0647-4a7e-ba8a-e31d1ae41001,Namespace:kube-system,Attempt:0,}" Feb 9 05:46:19.109239 env[1166]: time="2024-02-09T05:46:19.109202809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 05:46:19.109239 env[1166]: time="2024-02-09T05:46:19.109227273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 05:46:19.109350 env[1166]: time="2024-02-09T05:46:19.109237897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 05:46:19.109350 env[1166]: time="2024-02-09T05:46:19.109307860Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75 pid=4602 runtime=io.containerd.runc.v2 Feb 9 05:46:19.126877 systemd[1]: Started cri-containerd-a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75.scope. Feb 9 05:46:19.138728 sshd[4594]: Accepted publickey for core from 147.75.109.163 port 53582 ssh2: RSA SHA256:by5us56zV59xWLeZ0jKNtrh0jNbtksa6rAc7n50Br/w Feb 9 05:46:19.139647 sshd[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 05:46:19.142151 systemd-logind[1154]: New session 27 of user core. Feb 9 05:46:19.142643 systemd[1]: Started session-27.scope. Feb 9 05:46:19.149843 env[1166]: time="2024-02-09T05:46:19.149820778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v26ng,Uid:b8dd4fad-0647-4a7e-ba8a-e31d1ae41001,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\"" Feb 9 05:46:19.151175 env[1166]: time="2024-02-09T05:46:19.151159533Z" level=info msg="CreateContainer within sandbox \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 05:46:19.155933 env[1166]: time="2024-02-09T05:46:19.155914626Z" level=info msg="CreateContainer within sandbox \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af\"" Feb 9 05:46:19.156179 env[1166]: time="2024-02-09T05:46:19.156165139Z" level=info msg="StartContainer for \"1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af\"" Feb 9 05:46:19.164716 systemd[1]: Started cri-containerd-1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af.scope. Feb 9 05:46:19.171220 systemd[1]: cri-containerd-1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af.scope: Deactivated successfully. Feb 9 05:46:19.171391 systemd[1]: Stopped cri-containerd-1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af.scope. 
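Around here the journal interleaves a fresh SSH session with the CRI calls for the new pod: RunPodSandbox returned sandbox a3263fa4…, CreateContainer/StartContainer ran the mount-cgroup init container inside it, and a per-sandbox runc shim v2 was spawned in containerd's k8s.io namespace ("starting signal loop", pid 4602). The same objects can be inspected out-of-band with containerd's Go client; a minimal sketch, assuming access to the default socket at /run/containerd/containerd.sock:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// Lists the containers the CRI plugin created in the k8s.io
// namespace — the namespace the shim entries above refer to.
// Minimal sketch; error handling kept to the essentials.
func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID())
	}
}

crictl, which talks to the same CRI endpoint, gives an equivalent view without writing code.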
Feb 9 05:46:19.199372 env[1166]: time="2024-02-09T05:46:19.199306673Z" level=info msg="shim disconnected" id=1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af Feb 9 05:46:19.199372 env[1166]: time="2024-02-09T05:46:19.199347389Z" level=warning msg="cleaning up after shim disconnected" id=1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af namespace=k8s.io Feb 9 05:46:19.199372 env[1166]: time="2024-02-09T05:46:19.199358533Z" level=info msg="cleaning up dead shim" Feb 9 05:46:19.217083 env[1166]: time="2024-02-09T05:46:19.217011580Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4672 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T05:46:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 05:46:19.217284 env[1166]: time="2024-02-09T05:46:19.217208662Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Feb 9 05:46:19.217407 env[1166]: time="2024-02-09T05:46:19.217372051Z" level=error msg="Failed to pipe stdout of container \"1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af\"" error="reading from a closed fifo" Feb 9 05:46:19.217464 env[1166]: time="2024-02-09T05:46:19.217416771Z" level=error msg="Failed to pipe stderr of container \"1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af\"" error="reading from a closed fifo" Feb 9 05:46:19.218173 env[1166]: time="2024-02-09T05:46:19.218140788Z" level=error msg="StartContainer for \"1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 05:46:19.218398 kubelet[2203]: E0209 05:46:19.218350 2203 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af" Feb 9 05:46:19.218488 kubelet[2203]: E0209 05:46:19.218479 2203 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 05:46:19.218488 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 05:46:19.218488 kubelet[2203]: rm /hostbin/cilium-mount Feb 9 05:46:19.218595 kubelet[2203]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nrdqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-v26ng_kube-system(b8dd4fad-0647-4a7e-ba8a-e31d1ae41001): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 05:46:19.218595 kubelet[2203]: E0209 05:46:19.218528 2203 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-v26ng" podUID="b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" Feb 9 05:46:19.490610 env[1166]: time="2024-02-09T05:46:19.490502929Z" level=info msg="StopPodSandbox for \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\"" Feb 9 05:46:19.490994 env[1166]: time="2024-02-09T05:46:19.490698349Z" level=info msg="Container to stop \"1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 05:46:19.517549 systemd[1]: cri-containerd-a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75.scope: Deactivated successfully. 
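This is the pivotal failure of the section. The init-container spec dumped above carries SELinuxOptions with Type spc_t and Level s0, so when runc initializes the container process it writes that SELinux context to /proc/self/attr/keycreate to label the process keyring; the kernel rejects the write with EINVAL ("write /proc/self/attr/keycreate: invalid argument"), which typically means SELinux is not enabled in the running kernel or the loaded policy does not know the label, and mount-cgroup therefore never starts. The failing write can be reproduced in isolation; a diagnostic sketch (the full context string below is an assumption inferred from the Type/Level fields in the spec dump):

package main

import (
	"fmt"
	"os"
)

// Reproduces the failing write from the log: runc labels the keyring of
// the new container process by writing the SELinux context to
// /proc/self/attr/keycreate. On a kernel without SELinux (or without
// this label in the loaded policy) the write fails with EINVAL — the
// "invalid argument: unknown" seen in the entries above.
func main() {
	label := "system_u:system_r:spc_t:s0" // assumed context, from Type:spc_t / Level:s0
	err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0)
	if err != nil {
		fmt.Println("keycreate write failed:", err) // expect EINVAL on this node
		return
	}
	fmt.Println("keycreate label accepted")
}

If the write fails here as well, the problem is environmental rather than specific to Cilium's init container.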
Feb 9 05:46:19.573232 env[1166]: time="2024-02-09T05:46:19.573067974Z" level=info msg="shim disconnected" id=a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75 Feb 9 05:46:19.573232 env[1166]: time="2024-02-09T05:46:19.573204815Z" level=warning msg="cleaning up after shim disconnected" id=a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75 namespace=k8s.io Feb 9 05:46:19.573232 env[1166]: time="2024-02-09T05:46:19.573240445Z" level=info msg="cleaning up dead shim" Feb 9 05:46:19.604307 env[1166]: time="2024-02-09T05:46:19.604171600Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4711 runtime=io.containerd.runc.v2\n" Feb 9 05:46:19.604991 env[1166]: time="2024-02-09T05:46:19.604882787Z" level=info msg="TearDown network for sandbox \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\" successfully" Feb 9 05:46:19.604991 env[1166]: time="2024-02-09T05:46:19.604949742Z" level=info msg="StopPodSandbox for \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\" returns successfully" Feb 9 05:46:19.679640 kubelet[2203]: I0209 05:46:19.679532 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-hostproc\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.679640 kubelet[2203]: I0209 05:46:19.679647 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-lib-modules\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679710 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-host-proc-sys-net\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679703 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-hostproc" (OuterVolumeSpecName: "hostproc") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679777 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-run\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679795 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679852 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-ipsec-secrets\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679844 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679912 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-xtables-lock\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679902 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679970 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-host-proc-sys-kernel\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.679986 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.680004 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.680096 kubelet[2203]: I0209 05:46:19.680038 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-hubble-tls\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680136 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-cgroup\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680196 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-etc-cni-netd\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680265 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-clustermesh-secrets\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680253 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680323 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-bpf-maps\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680307 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680387 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrdqg\" (UniqueName: \"kubernetes.io/projected/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-kube-api-access-nrdqg\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680442 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cni-path\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680443 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680533 2203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-config-path\") pod \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\" (UID: \"b8dd4fad-0647-4a7e-ba8a-e31d1ae41001\") " Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680539 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cni-path" (OuterVolumeSpecName: "cni-path") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680689 2203 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cni-path\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680731 2203 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-hostproc\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680763 2203 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-lib-modules\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680799 2203 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-host-proc-sys-net\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680831 2203 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-run\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.681398 kubelet[2203]: I0209 05:46:19.680865 2203 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-xtables-lock\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.683147 kubelet[2203]: I0209 05:46:19.680896 2203 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.683147 kubelet[2203]: I0209 05:46:19.680928 2203 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-cgroup\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.683147 kubelet[2203]: I0209 05:46:19.680958 2203 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-etc-cni-netd\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.683147 kubelet[2203]: I0209 05:46:19.680987 2203 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-bpf-maps\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.686327 kubelet[2203]: I0209 05:46:19.686236 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 05:46:19.686810 kubelet[2203]: I0209 05:46:19.686699 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 05:46:19.687100 kubelet[2203]: I0209 05:46:19.686993 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-kube-api-access-nrdqg" (OuterVolumeSpecName: "kube-api-access-nrdqg") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "kube-api-access-nrdqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 05:46:19.687298 kubelet[2203]: I0209 05:46:19.687108 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 05:46:19.687619 kubelet[2203]: I0209 05:46:19.687529 2203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" (UID: "b8dd4fad-0647-4a7e-ba8a-e31d1ae41001"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 05:46:19.782507 kubelet[2203]: I0209 05:46:19.782286 2203 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-clustermesh-secrets\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.782507 kubelet[2203]: I0209 05:46:19.782366 2203 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nrdqg\" (UniqueName: \"kubernetes.io/projected/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-kube-api-access-nrdqg\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.782507 kubelet[2203]: I0209 05:46:19.782408 2203 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-config-path\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.782507 kubelet[2203]: I0209 05:46:19.782442 2203 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:19.782507 kubelet[2203]: I0209 05:46:19.782474 2203 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001-hubble-tls\") on node \"ci-3510.3.2-a-8a9497f9cf\" DevicePath \"\"" Feb 9 05:46:20.080378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75-rootfs.mount: Deactivated successfully. 
Feb 9 05:46:20.080641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75-shm.mount: Deactivated successfully. Feb 9 05:46:20.080846 systemd[1]: var-lib-kubelet-pods-b8dd4fad\x2d0647\x2d4a7e\x2dba8a\x2de31d1ae41001-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnrdqg.mount: Deactivated successfully. Feb 9 05:46:20.081021 systemd[1]: var-lib-kubelet-pods-b8dd4fad\x2d0647\x2d4a7e\x2dba8a\x2de31d1ae41001-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 05:46:20.081202 systemd[1]: var-lib-kubelet-pods-b8dd4fad\x2d0647\x2d4a7e\x2dba8a\x2de31d1ae41001-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 05:46:20.081371 systemd[1]: var-lib-kubelet-pods-b8dd4fad\x2d0647\x2d4a7e\x2dba8a\x2de31d1ae41001-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 05:46:20.495732 kubelet[2203]: I0209 05:46:20.495648 2203 scope.go:117] "RemoveContainer" containerID="1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af" Feb 9 05:46:20.498171 env[1166]: time="2024-02-09T05:46:20.498051511Z" level=info msg="RemoveContainer for \"1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af\"" Feb 9 05:46:20.502524 env[1166]: time="2024-02-09T05:46:20.502414336Z" level=info msg="RemoveContainer for \"1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af\" returns successfully" Feb 9 05:46:20.505997 systemd[1]: Removed slice kubepods-burstable-podb8dd4fad_0647_4a7e_ba8a_e31d1ae41001.slice. Feb 9 05:46:20.544457 kubelet[2203]: I0209 05:46:20.544435 2203 topology_manager.go:215] "Topology Admit Handler" podUID="e0798e4a-eea6-47ac-a482-7f840c023260" podNamespace="kube-system" podName="cilium-svvzw" Feb 9 05:46:20.544559 kubelet[2203]: E0209 05:46:20.544477 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" containerName="mount-cgroup" Feb 9 05:46:20.544559 kubelet[2203]: I0209 05:46:20.544508 2203 memory_manager.go:346] "RemoveStaleState removing state" podUID="b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" containerName="mount-cgroup" Feb 9 05:46:20.547964 systemd[1]: Created slice kubepods-burstable-pode0798e4a_eea6_47ac_a482_7f840c023260.slice. 
Feb 9 05:46:20.688084 kubelet[2203]: I0209 05:46:20.688013 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-bpf-maps\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.688458 kubelet[2203]: I0209 05:46:20.688110 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-lib-modules\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.688458 kubelet[2203]: I0209 05:46:20.688177 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-cilium-cgroup\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.688458 kubelet[2203]: I0209 05:46:20.688239 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-host-proc-sys-net\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.688458 kubelet[2203]: I0209 05:46:20.688428 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0798e4a-eea6-47ac-a482-7f840c023260-cilium-config-path\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.689269 kubelet[2203]: I0209 05:46:20.688625 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-host-proc-sys-kernel\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.689269 kubelet[2203]: I0209 05:46:20.688757 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0798e4a-eea6-47ac-a482-7f840c023260-hubble-tls\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.689269 kubelet[2203]: I0209 05:46:20.688919 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-xtables-lock\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.689269 kubelet[2203]: I0209 05:46:20.689041 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0798e4a-eea6-47ac-a482-7f840c023260-clustermesh-secrets\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.689269 kubelet[2203]: I0209 05:46:20.689177 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-cni-path\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.690075 kubelet[2203]: I0209 05:46:20.689305 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-etc-cni-netd\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.690075 kubelet[2203]: I0209 05:46:20.689421 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-hostproc\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.690075 kubelet[2203]: I0209 05:46:20.689569 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e0798e4a-eea6-47ac-a482-7f840c023260-cilium-ipsec-secrets\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.690075 kubelet[2203]: I0209 05:46:20.689705 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5wx\" (UniqueName: \"kubernetes.io/projected/e0798e4a-eea6-47ac-a482-7f840c023260-kube-api-access-nx5wx\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.690075 kubelet[2203]: I0209 05:46:20.689782 2203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0798e4a-eea6-47ac-a482-7f840c023260-cilium-run\") pod \"cilium-svvzw\" (UID: \"e0798e4a-eea6-47ac-a482-7f840c023260\") " pod="kube-system/cilium-svvzw"
Feb 9 05:46:20.850428 env[1166]: time="2024-02-09T05:46:20.850261495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-svvzw,Uid:e0798e4a-eea6-47ac-a482-7f840c023260,Namespace:kube-system,Attempt:0,}"
Feb 9 05:46:20.867687 env[1166]: time="2024-02-09T05:46:20.867495937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 05:46:20.867687 env[1166]: time="2024-02-09T05:46:20.867615690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 05:46:20.867687 env[1166]: time="2024-02-09T05:46:20.867659381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 05:46:20.868188 env[1166]: time="2024-02-09T05:46:20.868097980Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5 pid=4738 runtime=io.containerd.runc.v2
Feb 9 05:46:20.907846 systemd[1]: Started cri-containerd-b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5.scope.
Feb 9 05:46:20.942640 env[1166]: time="2024-02-09T05:46:20.942612909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-svvzw,Uid:e0798e4a-eea6-47ac-a482-7f840c023260,Namespace:kube-system,Attempt:0,} returns sandbox id \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\""
Feb 9 05:46:20.943863 env[1166]: time="2024-02-09T05:46:20.943847362Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 05:46:20.948516 env[1166]: time="2024-02-09T05:46:20.948498897Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703\""
Feb 9 05:46:20.948789 env[1166]: time="2024-02-09T05:46:20.948742780Z" level=info msg="StartContainer for \"8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703\""
Feb 9 05:46:20.954227 kubelet[2203]: E0209 05:46:20.954211 2203 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 05:46:20.970366 systemd[1]: Started cri-containerd-8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703.scope.
Feb 9 05:46:21.009993 env[1166]: time="2024-02-09T05:46:21.009846871Z" level=info msg="StartContainer for \"8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703\" returns successfully"
Feb 9 05:46:21.032265 systemd[1]: cri-containerd-8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703.scope: Deactivated successfully.
Feb 9 05:46:21.086304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703-rootfs.mount: Deactivated successfully.
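The `RunPodSandbox for &PodSandboxMetadata{...}` request/response pair above is kubelet driving containerd over the CRI gRPC API. A rough Go sketch of that call using the v1 CRI API; the socket path and the bare-bones `PodSandboxConfig` are assumptions (kubelet populates far more of the config than shown here):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed socket path for containerd's CRI endpoint.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Metadata matching the log entry above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-svvzw",
				Uid:       "e0798e4a-eea6-47ac-a482-7f840c023260",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. b35189640c4b...
}
```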
Feb 9 05:46:21.090920 env[1166]: time="2024-02-09T05:46:21.090848415Z" level=info msg="shim disconnected" id=8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703
Feb 9 05:46:21.090920 env[1166]: time="2024-02-09T05:46:21.090897335Z" level=warning msg="cleaning up after shim disconnected" id=8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703 namespace=k8s.io
Feb 9 05:46:21.090920 env[1166]: time="2024-02-09T05:46:21.090909160Z" level=info msg="cleaning up dead shim"
Feb 9 05:46:21.109872 env[1166]: time="2024-02-09T05:46:21.109770363Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4820 runtime=io.containerd.runc.v2\n"
Feb 9 05:46:21.508322 env[1166]: time="2024-02-09T05:46:21.508194448Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 05:46:21.521299 env[1166]: time="2024-02-09T05:46:21.521211174Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c\""
Feb 9 05:46:21.522240 env[1166]: time="2024-02-09T05:46:21.522156254Z" level=info msg="StartContainer for \"919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c\""
Feb 9 05:46:21.540412 kubelet[2203]: I0209 05:46:21.540375 2203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b8dd4fad-0647-4a7e-ba8a-e31d1ae41001" path="/var/lib/kubelet/pods/b8dd4fad-0647-4a7e-ba8a-e31d1ae41001/volumes"
Feb 9 05:46:21.559784 systemd[1]: Started cri-containerd-919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c.scope.
Feb 9 05:46:21.584669 env[1166]: time="2024-02-09T05:46:21.584623337Z" level=info msg="StartContainer for \"919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c\" returns successfully"
Feb 9 05:46:21.591599 systemd[1]: cri-containerd-919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c.scope: Deactivated successfully.
Feb 9 05:46:21.611111 env[1166]: time="2024-02-09T05:46:21.611036600Z" level=info msg="shim disconnected" id=919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c
Feb 9 05:46:21.611111 env[1166]: time="2024-02-09T05:46:21.611090858Z" level=warning msg="cleaning up after shim disconnected" id=919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c namespace=k8s.io
Feb 9 05:46:21.611111 env[1166]: time="2024-02-09T05:46:21.611109720Z" level=info msg="cleaning up dead shim"
Feb 9 05:46:21.619215 env[1166]: time="2024-02-09T05:46:21.619170747Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4882 runtime=io.containerd.runc.v2\n"
Feb 9 05:46:22.081005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c-rootfs.mount: Deactivated successfully.
Feb 9 05:46:22.307000 kubelet[2203]: W0209 05:46:22.306906 2203 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8dd4fad_0647_4a7e_ba8a_e31d1ae41001.slice/cri-containerd-1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af.scope WatchSource:0}: container "1c947b6c68eb165ea72f525531e0438ca3e80b60fbdb933742d09fce956611af" in namespace "k8s.io": not found
Feb 9 05:46:22.515939 env[1166]: time="2024-02-09T05:46:22.515807858Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 05:46:22.529342 env[1166]: time="2024-02-09T05:46:22.529289014Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7\""
Feb 9 05:46:22.529805 env[1166]: time="2024-02-09T05:46:22.529768394Z" level=info msg="StartContainer for \"98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7\""
Feb 9 05:46:22.552913 systemd[1]: Started cri-containerd-98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7.scope.
Feb 9 05:46:22.568625 env[1166]: time="2024-02-09T05:46:22.568539574Z" level=info msg="StartContainer for \"98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7\" returns successfully"
Feb 9 05:46:22.570182 systemd[1]: cri-containerd-98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7.scope: Deactivated successfully.
Feb 9 05:46:22.603498 env[1166]: time="2024-02-09T05:46:22.603384303Z" level=info msg="shim disconnected" id=98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7
Feb 9 05:46:22.603837 env[1166]: time="2024-02-09T05:46:22.603510177Z" level=warning msg="cleaning up after shim disconnected" id=98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7 namespace=k8s.io
Feb 9 05:46:22.603837 env[1166]: time="2024-02-09T05:46:22.603544351Z" level=info msg="cleaning up dead shim"
Feb 9 05:46:22.617688 env[1166]: time="2024-02-09T05:46:22.617556231Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4936 runtime=io.containerd.runc.v2\n"
Feb 9 05:46:22.650207 kubelet[2203]: I0209 05:46:22.650143 2203 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-8a9497f9cf" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T05:46:22Z","lastTransitionTime":"2024-02-09T05:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 9 05:46:23.081097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7-rootfs.mount: Deactivated successfully.
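The `setters.go:552` entry above embeds the full node condition as JSON. A short Go sketch that unmarshals just that payload; the struct mirrors only the fields visible in the log, not the complete Kubernetes `NodeCondition` type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition covers only the fields present in the logged condition.
type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Payload copied from the "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T05:46:22Z","lastTransitionTime":"2024-02-09T05:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```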
Feb 9 05:46:23.524826 env[1166]: time="2024-02-09T05:46:23.524715046Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 05:46:23.537295 env[1166]: time="2024-02-09T05:46:23.537231207Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954\""
Feb 9 05:46:23.537520 env[1166]: time="2024-02-09T05:46:23.537481722Z" level=info msg="StartContainer for \"ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954\""
Feb 9 05:46:23.546202 systemd[1]: Started cri-containerd-ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954.scope.
Feb 9 05:46:23.558176 systemd[1]: cri-containerd-ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954.scope: Deactivated successfully.
Feb 9 05:46:23.558393 env[1166]: time="2024-02-09T05:46:23.558350705Z" level=info msg="StartContainer for \"ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954\" returns successfully"
Feb 9 05:46:23.591176 env[1166]: time="2024-02-09T05:46:23.591110312Z" level=info msg="shim disconnected" id=ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954
Feb 9 05:46:23.591176 env[1166]: time="2024-02-09T05:46:23.591155731Z" level=warning msg="cleaning up after shim disconnected" id=ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954 namespace=k8s.io
Feb 9 05:46:23.591176 env[1166]: time="2024-02-09T05:46:23.591165020Z" level=info msg="cleaning up dead shim"
Feb 9 05:46:23.609163 env[1166]: time="2024-02-09T05:46:23.609120017Z" level=warning msg="cleanup warnings time=\"2024-02-09T05:46:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4990 runtime=io.containerd.runc.v2\n"
Feb 9 05:46:24.077268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954-rootfs.mount: Deactivated successfully.
Feb 9 05:46:24.534377 env[1166]: time="2024-02-09T05:46:24.534242244Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 05:46:24.536262 kubelet[2203]: E0209 05:46:24.536208 2203 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-lh45l" podUID="36002a85-45d6-4bd6-b584-82d8b7ad3f1a"
Feb 9 05:46:24.546695 env[1166]: time="2024-02-09T05:46:24.546644846Z" level=info msg="CreateContainer within sandbox \"b35189640c4bcd6b53ce9f15430985f1fa4f4c01d6ea7fbcb43010c9a78c4bb5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e1702a1289e400a6aa3453feb6b9b997d057a2b8ed66e60baba5e78f01652cf\""
Feb 9 05:46:24.547025 env[1166]: time="2024-02-09T05:46:24.546995522Z" level=info msg="StartContainer for \"3e1702a1289e400a6aa3453feb6b9b997d057a2b8ed66e60baba5e78f01652cf\""
Feb 9 05:46:24.568287 systemd[1]: Started cri-containerd-3e1702a1289e400a6aa3453feb6b9b997d057a2b8ed66e60baba5e78f01652cf.scope.
Feb 9 05:46:24.597486 env[1166]: time="2024-02-09T05:46:24.597405597Z" level=info msg="StartContainer for \"3e1702a1289e400a6aa3453feb6b9b997d057a2b8ed66e60baba5e78f01652cf\" returns successfully"
Feb 9 05:46:24.791588 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 05:46:25.423757 kubelet[2203]: W0209 05:46:25.423629 2203 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0798e4a_eea6_47ac_a482_7f840c023260.slice/cri-containerd-8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703.scope WatchSource:0}: task 8f6d5feb93f4a266ac74ad737c1faa311c74a3ca62fef42a375416680bdb3703 not found: not found
Feb 9 05:46:25.575807 env[1166]: time="2024-02-09T05:46:25.575700876Z" level=info msg="StopPodSandbox for \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\""
Feb 9 05:46:25.576707 env[1166]: time="2024-02-09T05:46:25.576001381Z" level=info msg="TearDown network for sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" successfully"
Feb 9 05:46:25.576707 env[1166]: time="2024-02-09T05:46:25.576152120Z" level=info msg="StopPodSandbox for \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" returns successfully"
Feb 9 05:46:25.577027 kubelet[2203]: I0209 05:46:25.576111 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-svvzw" podStartSLOduration=5.576022527 podCreationTimestamp="2024-02-09 05:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 05:46:25.575731494 +0000 UTC m=+1320.124904565" watchObservedRunningTime="2024-02-09 05:46:25.576022527 +0000 UTC m=+1320.125195560"
Feb 9 05:46:25.577795 env[1166]: time="2024-02-09T05:46:25.577151080Z" level=info msg="RemovePodSandbox for \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\""
Feb 9 05:46:25.577795 env[1166]: time="2024-02-09T05:46:25.577229385Z" level=info msg="Forcibly stopping sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\""
Feb 9 05:46:25.577795 env[1166]: time="2024-02-09T05:46:25.577429570Z" level=info msg="TearDown network for sandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" successfully"
Feb 9 05:46:25.581644 env[1166]: time="2024-02-09T05:46:25.581601470Z" level=info msg="RemovePodSandbox \"bb9bc707843e01e650955711155ca8ac2cc40d15ac179be9710072977f30ad33\" returns successfully"
Feb 9 05:46:25.582044 env[1166]: time="2024-02-09T05:46:25.581969534Z" level=info msg="StopPodSandbox for \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\""
Feb 9 05:46:25.582129 env[1166]: time="2024-02-09T05:46:25.582065593Z" level=info msg="TearDown network for sandbox \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\" successfully"
Feb 9 05:46:25.582129 env[1166]: time="2024-02-09T05:46:25.582096744Z" level=info msg="StopPodSandbox for \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\" returns successfully"
Feb 9 05:46:25.582311 env[1166]: time="2024-02-09T05:46:25.582277036Z" level=info msg="RemovePodSandbox for \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\""
Feb 9 05:46:25.582350 env[1166]: time="2024-02-09T05:46:25.582307256Z" level=info msg="Forcibly stopping sandbox \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\""
Feb 9 05:46:25.582350 env[1166]: time="2024-02-09T05:46:25.582340414Z" level=info msg="TearDown network for sandbox \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\" successfully"
Feb 9 05:46:25.583409 env[1166]: time="2024-02-09T05:46:25.583364004Z" level=info msg="RemovePodSandbox \"094b44a3babb140a46d906565e0d03e26cc20148f9b0665b646244e61ad2f675\" returns successfully"
Feb 9 05:46:25.583547 env[1166]: time="2024-02-09T05:46:25.583536608Z" level=info msg="StopPodSandbox for \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\""
Feb 9 05:46:25.583597 env[1166]: time="2024-02-09T05:46:25.583580019Z" level=info msg="TearDown network for sandbox \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\" successfully"
Feb 9 05:46:25.583646 env[1166]: time="2024-02-09T05:46:25.583597582Z" level=info msg="StopPodSandbox for \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\" returns successfully"
Feb 9 05:46:25.583792 env[1166]: time="2024-02-09T05:46:25.583753458Z" level=info msg="RemovePodSandbox for \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\""
Feb 9 05:46:25.583792 env[1166]: time="2024-02-09T05:46:25.583764408Z" level=info msg="Forcibly stopping sandbox \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\""
Feb 9 05:46:25.583850 env[1166]: time="2024-02-09T05:46:25.583794165Z" level=info msg="TearDown network for sandbox \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\" successfully"
Feb 9 05:46:25.584796 env[1166]: time="2024-02-09T05:46:25.584757359Z" level=info msg="RemovePodSandbox \"a3263fa4dc80da79342c7e5e4a56295b1ddc42af658c6dbb82f0f9cd7a4bed75\" returns successfully"
Feb 9 05:46:27.592287 systemd-networkd[1007]: lxc_health: Link UP
Feb 9 05:46:27.611590 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 05:46:27.611691 systemd-networkd[1007]: lxc_health: Gained carrier
Feb 9 05:46:28.534003 kubelet[2203]: W0209 05:46:28.533980 2203 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0798e4a_eea6_47ac_a482_7f840c023260.slice/cri-containerd-919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c.scope WatchSource:0}: task 919c052f343699b50da7eb615080a9dff8d55845661657e08b5db2c488c3030c not found: not found
Feb 9 05:46:29.620707 systemd-networkd[1007]: lxc_health: Gained IPv6LL
Feb 9 05:46:31.642093 kubelet[2203]: W0209 05:46:31.642005 2203 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0798e4a_eea6_47ac_a482_7f840c023260.slice/cri-containerd-98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7.scope WatchSource:0}: task 98944007e1ba973bdf98ed773a6b9da5b09bc42ba368e78d585bb2df7e56f4a7 not found: not found
Feb 9 05:46:33.841274 sshd[4594]: pam_unix(sshd:session): session closed for user core
Feb 9 05:46:33.846891 systemd[1]: sshd@25-147.75.90.151:22-147.75.109.163:53582.service: Deactivated successfully.
Feb 9 05:46:33.848553 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 05:46:33.850192 systemd-logind[1154]: Session 27 logged out. Waiting for processes to exit.
Feb 9 05:46:33.852407 systemd-logind[1154]: Removed session 27.
Feb 9 05:46:34.752257 kubelet[2203]: W0209 05:46:34.752147 2203 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0798e4a_eea6_47ac_a482_7f840c023260.slice/cri-containerd-ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954.scope WatchSource:0}: task ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954 not found: not found
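The recurring `manager.go:1159` warnings here and above appear to be a benign race: cadvisor receives a cgroup watch event for a container whose cgroup has already been cleaned up. The cgroup path in each warning is mechanical, as this Go sketch shows; the layout is read off the log lines themselves rather than kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// cgroupScope rebuilds the systemd cgroup path named in the watch-event
// warnings: dashes in the pod UID become underscores in the slice name,
// and the container is a cri-containerd-<id>.scope leaf.
func cgroupScope(podUID, containerID string) string {
	slice := "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	return fmt.Sprintf("/kubepods.slice/kubepods-burstable.slice/%s/cri-containerd-%s.scope",
		slice, containerID)
}

func main() {
	fmt.Println(cgroupScope("e0798e4a-eea6-47ac-a482-7f840c023260",
		"ba26aa99477950da27741f66afc0d77fd025ab694dbd3666129b59d8cc149954"))
	// Matches the Name field of the final watch-event warning above.
}
```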