Feb 9 14:06:03.548440 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 14:06:03.548453 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 14:06:03.548460 kernel: BIOS-provided physical RAM map:
Feb 9 14:06:03.548464 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 14:06:03.548468 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 14:06:03.548471 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 14:06:03.548476 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 14:06:03.548480 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 14:06:03.548483 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000820e1fff] usable
Feb 9 14:06:03.548487 kernel: BIOS-e820: [mem 0x00000000820e2000-0x00000000820e2fff] ACPI NVS
Feb 9 14:06:03.548492 kernel: BIOS-e820: [mem 0x00000000820e3000-0x00000000820e3fff] reserved
Feb 9 14:06:03.548495 kernel: BIOS-e820: [mem 0x00000000820e4000-0x000000008afccfff] usable
Feb 9 14:06:03.548499 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 9 14:06:03.548503 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 9 14:06:03.548508 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 9 14:06:03.548513 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 9 14:06:03.548517 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 9 14:06:03.548521 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 9 14:06:03.548525 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 14:06:03.548529 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 14:06:03.548533 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 14:06:03.548537 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 14:06:03.548542 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 14:06:03.548546 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 9 14:06:03.548550 kernel: NX (Execute Disable) protection: active
Feb 9 14:06:03.548554 kernel: SMBIOS 3.2.1 present.
Feb 9 14:06:03.548559 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Feb 9 14:06:03.548563 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 14:06:03.548567 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 14:06:03.548572 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 14:06:03.548576 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 14:06:03.548581 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 9 14:06:03.548585 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 14:06:03.548589 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 9 14:06:03.548593 kernel: Using GB pages for direct mapping
Feb 9 14:06:03.548598 kernel: ACPI: Early table checksum verification disabled
Feb 9 14:06:03.548603 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 14:06:03.548607 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 14:06:03.548611 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 9 14:06:03.548615 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 14:06:03.548622 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 9 14:06:03.548626 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 9 14:06:03.548631 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 9 14:06:03.548636 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 14:06:03.548641 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 14:06:03.548645 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 14:06:03.548650 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 14:06:03.548655 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 14:06:03.548659 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 14:06:03.548664 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 14:06:03.548669 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 14:06:03.548674 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 14:06:03.548678 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 14:06:03.548683 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 14:06:03.548687 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 14:06:03.548692 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 14:06:03.548696 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 14:06:03.548701 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 14:06:03.548706 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 14:06:03.548711 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 9 14:06:03.548715 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 14:06:03.548720 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 14:06:03.548724 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 14:06:03.548729 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 9 14:06:03.548734 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 14:06:03.548738 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 14:06:03.548743 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 14:06:03.548748 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 14:06:03.548753 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 14:06:03.548757 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 9 14:06:03.548762 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 9 14:06:03.548766 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 9 14:06:03.548771 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 9 14:06:03.548775 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 9 14:06:03.548780 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 9 14:06:03.548785 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 9 14:06:03.548790 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 9 14:06:03.548794 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 9 14:06:03.548799 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 9 14:06:03.548803 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 9 14:06:03.548808 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 9 14:06:03.548812 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 9 14:06:03.548817 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 9 14:06:03.548821 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 9 14:06:03.548827 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 9 14:06:03.548831 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 9 14:06:03.548836 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 9 14:06:03.548840 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 9 14:06:03.548845 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 9 14:06:03.548849 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 9 14:06:03.548854 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 9 14:06:03.548859 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 9 14:06:03.548863 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 9 14:06:03.548868 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 9 14:06:03.548873 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 9 14:06:03.548877 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 9 14:06:03.548882 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 9 14:06:03.548887 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 9 14:06:03.548891 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 9 14:06:03.548896 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 9 14:06:03.548900 kernel: No NUMA configuration found
Feb 9 14:06:03.548905 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 9 14:06:03.548910 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 9 14:06:03.548915 kernel: Zone ranges:
Feb 9 14:06:03.548920 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 14:06:03.548924 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 14:06:03.548929 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 9 14:06:03.548933 kernel: Movable zone start for each node
Feb 9 14:06:03.548938 kernel: Early memory node ranges
Feb 9 14:06:03.548942 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 14:06:03.548947 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 14:06:03.548951 kernel: node 0: [mem 0x0000000040400000-0x00000000820e1fff]
Feb 9 14:06:03.548957 kernel: node 0: [mem 0x00000000820e4000-0x000000008afccfff]
Feb 9 14:06:03.548961 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 9 14:06:03.548966 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 9 14:06:03.548970 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 9 14:06:03.548975 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 9 14:06:03.548980 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 14:06:03.548987 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 14:06:03.548993 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 14:06:03.548998 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 14:06:03.549003 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 9 14:06:03.549009 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 9 14:06:03.549014 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 9 14:06:03.549019 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 9 14:06:03.549023 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 14:06:03.549028 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 14:06:03.549033 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 14:06:03.549038 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 14:06:03.549044 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 14:06:03.549049 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 14:06:03.549054 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 14:06:03.549058 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 14:06:03.549063 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 14:06:03.549068 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 14:06:03.549073 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 14:06:03.549078 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 14:06:03.549083 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 14:06:03.549088 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 14:06:03.549093 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 14:06:03.549098 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 14:06:03.549103 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 14:06:03.549107 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 14:06:03.549112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 14:06:03.549117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 14:06:03.549122 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 14:06:03.549127 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 14:06:03.549133 kernel: TSC deadline timer available
Feb 9 14:06:03.549138 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 14:06:03.549143 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 9 14:06:03.549147 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 14:06:03.549152 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 14:06:03.549157 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 14:06:03.549162 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 14:06:03.549167 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 14:06:03.549172 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 14:06:03.549177 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 9 14:06:03.549182 kernel: Policy zone: Normal
Feb 9 14:06:03.549188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 14:06:03.549193 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 14:06:03.549198 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 14:06:03.549203 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 14:06:03.549208 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 14:06:03.549214 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 9 14:06:03.549219 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 14:06:03.549224 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 14:06:03.549228 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 14:06:03.549233 kernel: rcu: Hierarchical RCU implementation.
Feb 9 14:06:03.549239 kernel: rcu: RCU event tracing is enabled.
Feb 9 14:06:03.549244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 14:06:03.549248 kernel: Rude variant of Tasks RCU enabled.
Feb 9 14:06:03.549253 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 14:06:03.549259 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 14:06:03.549264 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 14:06:03.549269 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 14:06:03.549274 kernel: random: crng init done
Feb 9 14:06:03.549279 kernel: Console: colour dummy device 80x25
Feb 9 14:06:03.549284 kernel: printk: console [tty0] enabled
Feb 9 14:06:03.549289 kernel: printk: console [ttyS1] enabled
Feb 9 14:06:03.549294 kernel: ACPI: Core revision 20210730
Feb 9 14:06:03.549299 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 9 14:06:03.549306 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 14:06:03.549327 kernel: DMAR: Host address width 39
Feb 9 14:06:03.549332 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 14:06:03.549337 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 14:06:03.549342 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 9 14:06:03.549347 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 9 14:06:03.549352 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 14:06:03.549357 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 14:06:03.549362 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 14:06:03.549367 kernel: x2apic enabled
Feb 9 14:06:03.549373 kernel: Switched APIC routing to cluster x2apic.
Feb 9 14:06:03.549378 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 14:06:03.549383 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 14:06:03.549388 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 14:06:03.549393 kernel: process: using mwait in idle threads
Feb 9 14:06:03.549398 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 14:06:03.549403 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 14:06:03.549407 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 14:06:03.549412 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 14:06:03.549418 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 14:06:03.549423 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 14:06:03.549428 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 14:06:03.549433 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 14:06:03.549438 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 14:06:03.549443 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 14:06:03.549447 kernel: TAA: Mitigation: TSX disabled
Feb 9 14:06:03.549452 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 14:06:03.549457 kernel: SRBDS: Mitigation: Microcode
Feb 9 14:06:03.549462 kernel: GDS: Vulnerable: No microcode
Feb 9 14:06:03.549467 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 14:06:03.549473 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 14:06:03.549478 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 14:06:03.549483 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 14:06:03.549488 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 14:06:03.549493 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 14:06:03.549498 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 14:06:03.549502 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 14:06:03.549507 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 14:06:03.549512 kernel: Freeing SMP alternatives memory: 32K
Feb 9 14:06:03.549517 kernel: pid_max: default: 32768 minimum: 301
Feb 9 14:06:03.549522 kernel: LSM: Security Framework initializing
Feb 9 14:06:03.549527 kernel: SELinux: Initializing.
Feb 9 14:06:03.549532 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 14:06:03.549537 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 14:06:03.549542 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 14:06:03.549547 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 14:06:03.549552 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 14:06:03.549557 kernel: ... version: 4
Feb 9 14:06:03.549562 kernel: ... bit width: 48
Feb 9 14:06:03.549567 kernel: ... generic registers: 4
Feb 9 14:06:03.549572 kernel: ... value mask: 0000ffffffffffff
Feb 9 14:06:03.549578 kernel: ... max period: 00007fffffffffff
Feb 9 14:06:03.549583 kernel: ... fixed-purpose events: 3
Feb 9 14:06:03.549588 kernel: ... event mask: 000000070000000f
Feb 9 14:06:03.549593 kernel: signal: max sigframe size: 2032
Feb 9 14:06:03.549598 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 14:06:03.549603 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 14:06:03.549608 kernel: smp: Bringing up secondary CPUs ...
Feb 9 14:06:03.549613 kernel: x86: Booting SMP configuration:
Feb 9 14:06:03.549618 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 14:06:03.549623 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 14:06:03.549629 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 14:06:03.549634 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 14:06:03.549638 kernel: smpboot: Max logical packages: 1
Feb 9 14:06:03.549643 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 14:06:03.549648 kernel: devtmpfs: initialized
Feb 9 14:06:03.549653 kernel: x86/mm: Memory block size: 128MB
Feb 9 14:06:03.549658 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x820e2000-0x820e2fff] (4096 bytes)
Feb 9 14:06:03.549663 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 9 14:06:03.549669 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 14:06:03.549674 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 14:06:03.549679 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 14:06:03.549684 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 14:06:03.549689 kernel: audit: initializing netlink subsys (disabled)
Feb 9 14:06:03.549694 kernel: audit: type=2000 audit(1707487558.040:1): state=initialized audit_enabled=0 res=1
Feb 9 14:06:03.549699 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 14:06:03.549704 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 14:06:03.549709 kernel: cpuidle: using governor menu
Feb 9 14:06:03.549714 kernel: ACPI: bus type PCI registered
Feb 9 14:06:03.549719 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 14:06:03.549724 kernel: dca service started, version 1.12.1
Feb 9 14:06:03.549729 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 14:06:03.549734 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 14:06:03.549739 kernel: PCI: Using configuration type 1 for base access
Feb 9 14:06:03.549744 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 14:06:03.549749 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 14:06:03.549754 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 14:06:03.549760 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 14:06:03.549765 kernel: ACPI: Added _OSI(Module Device)
Feb 9 14:06:03.549769 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 14:06:03.549774 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 14:06:03.549779 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 14:06:03.549784 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 14:06:03.549789 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 14:06:03.549794 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 14:06:03.549799 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 14:06:03.549805 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 14:06:03.549810 kernel: ACPI: SSDT 0xFFFF915A80212000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 14:06:03.549815 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 14:06:03.549820 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 14:06:03.549825 kernel: ACPI: SSDT 0xFFFF915A81AE1800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 14:06:03.549830 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 14:06:03.549835 kernel: ACPI: SSDT 0xFFFF915A81A5B800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 14:06:03.549840 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 14:06:03.549844 kernel: ACPI: SSDT 0xFFFF915A81A5C800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 14:06:03.549849 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 14:06:03.549855 kernel: ACPI: SSDT 0xFFFF915A8014D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 14:06:03.549860 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 14:06:03.549865 kernel: ACPI: SSDT 0xFFFF915A81AE1000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 14:06:03.549869 kernel: ACPI: Interpreter enabled
Feb 9 14:06:03.549874 kernel: ACPI: PM: (supports S0 S5)
Feb 9 14:06:03.549879 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 14:06:03.549884 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 14:06:03.549889 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 14:06:03.549894 kernel: HEST: Table parsing has been initialized.
Feb 9 14:06:03.549900 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 14:06:03.549905 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 14:06:03.549910 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 14:06:03.549915 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 14:06:03.549919 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 14:06:03.549924 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 14:06:03.549929 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 14:06:03.549934 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 14:06:03.549939 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 14:06:03.549945 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 14:06:03.549950 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 14:06:03.549955 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 14:06:03.549959 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 14:06:03.549964 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 14:06:03.549969 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 14:06:03.550035 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 14:06:03.550081 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 14:06:03.550124 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 14:06:03.550132 kernel: PCI host bridge to bus 0000:00
Feb 9 14:06:03.550177 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 14:06:03.550215 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 14:06:03.550252 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 14:06:03.550289 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 9 14:06:03.550327 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 14:06:03.550366 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 14:06:03.550417 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 14:06:03.550465 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 14:06:03.550508 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 14:06:03.550554 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 14:06:03.550596 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 9 14:06:03.550646 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 14:06:03.550688 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 9 14:06:03.550736 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 14:06:03.550779 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 9 14:06:03.550822 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 14:06:03.550868 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 14:06:03.550913 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 9 14:06:03.550954 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 9 14:06:03.551000 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 14:06:03.551042 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 14:06:03.551089 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 14:06:03.551131 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 14:06:03.551179 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 14:06:03.551220 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 9 14:06:03.551262 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 14:06:03.551310 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 14:06:03.551352 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 9 14:06:03.551395 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 14:06:03.551441 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 14:06:03.551485 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 9 14:06:03.551526 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 14:06:03.551570 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 14:06:03.551613 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 9 14:06:03.551653 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 9 14:06:03.551695 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 9 14:06:03.551736 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 9 14:06:03.551784 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 9 14:06:03.551826 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 9 14:06:03.551868 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 14:06:03.551913 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 14:06:03.551955 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 14:06:03.552002 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 14:06:03.552044 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 14:06:03.552092 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 14:06:03.552134 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 14:06:03.552180 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 14:06:03.552222 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 14:06:03.552270 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 9 14:06:03.552318 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 9 14:06:03.552363 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 14:06:03.552406 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 14:06:03.552453 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 14:06:03.552502 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 14:06:03.552543 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 9 14:06:03.552585 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 14:06:03.552630 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 14:06:03.552672 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 14:06:03.552721 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 14:06:03.552767 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 14:06:03.552811 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 9 14:06:03.552853 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 9 14:06:03.552898 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 14:06:03.552942 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 14:06:03.552990 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 14:06:03.553033 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 14:06:03.553078 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 9 14:06:03.553121 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 9 14:06:03.553164 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 14:06:03.553207 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 14:06:03.553249 kernel: pci 0000:00:01.0: PCI
bridge to [bus 01] Feb 9 14:06:03.553291 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 9 14:06:03.553338 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 14:06:03.553381 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 9 14:06:03.553433 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 9 14:06:03.553477 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 9 14:06:03.553521 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 9 14:06:03.553584 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 9 14:06:03.553625 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 9 14:06:03.553667 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 9 14:06:03.553708 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 14:06:03.553750 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 9 14:06:03.553798 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 9 14:06:03.553840 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 9 14:06:03.553884 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 14:06:03.553926 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 9 14:06:03.553968 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 9 14:06:03.554009 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 9 14:06:03.554123 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 14:06:03.554166 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 9 14:06:03.554208 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 9 14:06:03.554256 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 14:06:03.554299 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 9 14:06:03.554386 kernel: pci 0000:06:00.0: supports D1 D2 Feb 9 14:06:03.554429 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 14:06:03.554472 
kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 9 14:06:03.554515 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 9 14:06:03.554558 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 9 14:06:03.554602 kernel: pci_bus 0000:07: extended config space not accessible Feb 9 14:06:03.554652 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 14:06:03.554697 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 9 14:06:03.554743 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 9 14:06:03.554788 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 14:06:03.554833 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 14:06:03.554879 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 14:06:03.554924 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 14:06:03.554967 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 9 14:06:03.555010 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 9 14:06:03.555052 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 9 14:06:03.555059 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 9 14:06:03.555065 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 14:06:03.555072 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 14:06:03.555077 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 14:06:03.555082 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 14:06:03.555087 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 14:06:03.555092 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 14:06:03.555098 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 14:06:03.555103 kernel: iommu: Default domain type: Translated Feb 9 14:06:03.555108 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 14:06:03.555151 kernel: pci 0000:07:00.0: 
vgaarb: setting as boot VGA device Feb 9 14:06:03.555197 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 14:06:03.555242 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 9 14:06:03.555250 kernel: vgaarb: loaded Feb 9 14:06:03.555255 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 14:06:03.555260 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 14:06:03.555266 kernel: PTP clock support registered Feb 9 14:06:03.555271 kernel: PCI: Using ACPI for IRQ routing Feb 9 14:06:03.555276 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 14:06:03.555281 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 14:06:03.555288 kernel: e820: reserve RAM buffer [mem 0x820e2000-0x83ffffff] Feb 9 14:06:03.555293 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 9 14:06:03.555298 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 9 14:06:03.555305 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 9 14:06:03.555332 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 9 14:06:03.555337 kernel: clocksource: Switched to clocksource tsc-early Feb 9 14:06:03.555343 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 14:06:03.555348 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 14:06:03.555373 kernel: pnp: PnP ACPI init Feb 9 14:06:03.555419 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 14:06:03.555463 kernel: pnp 00:02: [dma 0 disabled] Feb 9 14:06:03.555504 kernel: pnp 00:03: [dma 0 disabled] Feb 9 14:06:03.555544 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 14:06:03.555583 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 14:06:03.555623 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 14:06:03.555665 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 14:06:03.555702 kernel: system 00:06: [mem 
0xfed18000-0xfed18fff] has been reserved Feb 9 14:06:03.555740 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 14:06:03.555777 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 14:06:03.555813 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 9 14:06:03.555851 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 14:06:03.555887 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 14:06:03.555927 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 14:06:03.555969 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 14:06:03.556007 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 9 14:06:03.556044 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 14:06:03.556080 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 14:06:03.556117 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 14:06:03.556153 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 14:06:03.556192 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 14:06:03.556234 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 14:06:03.556242 kernel: pnp: PnP ACPI: found 10 devices Feb 9 14:06:03.556247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 14:06:03.556252 kernel: NET: Registered PF_INET protocol family Feb 9 14:06:03.556258 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 14:06:03.556263 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 14:06:03.556270 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 14:06:03.556275 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 14:06:03.556280 
kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 14:06:03.556285 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 14:06:03.556291 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 14:06:03.556296 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 14:06:03.556301 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 14:06:03.556330 kernel: NET: Registered PF_XDP protocol family Feb 9 14:06:03.556393 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 9 14:06:03.556437 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 9 14:06:03.556498 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 9 14:06:03.556542 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 14:06:03.556585 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 14:06:03.556630 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 14:06:03.556673 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 14:06:03.556715 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 14:06:03.556758 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 9 14:06:03.556802 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 14:06:03.556843 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 9 14:06:03.556885 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 9 14:06:03.556927 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 14:06:03.556968 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 9 14:06:03.557012 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 9 14:06:03.557054 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 14:06:03.557097 kernel: pci 0000:00:1b.5: bridge 
window [mem 0x95300000-0x953fffff] Feb 9 14:06:03.557139 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 9 14:06:03.557182 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 9 14:06:03.557225 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 9 14:06:03.557269 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 9 14:06:03.557314 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 9 14:06:03.557357 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 9 14:06:03.557401 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 9 14:06:03.557439 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 14:06:03.557479 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 14:06:03.557515 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 14:06:03.557552 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 14:06:03.557588 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 9 14:06:03.557625 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 14:06:03.557668 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 9 14:06:03.557710 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 14:06:03.557754 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 9 14:06:03.557792 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 9 14:06:03.557836 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 9 14:06:03.557874 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 9 14:06:03.557921 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 9 14:06:03.557961 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 9 14:06:03.558002 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 14:06:03.558043 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 9 
14:06:03.558050 kernel: PCI: CLS 64 bytes, default 64 Feb 9 14:06:03.558056 kernel: DMAR: No ATSR found Feb 9 14:06:03.558061 kernel: DMAR: No SATC found Feb 9 14:06:03.558066 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 14:06:03.558109 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 14:06:03.558152 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 14:06:03.558195 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 9 14:06:03.558238 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 9 14:06:03.558279 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 9 14:06:03.558323 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 9 14:06:03.558366 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 9 14:06:03.558407 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 9 14:06:03.558452 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 9 14:06:03.558494 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 9 14:06:03.558536 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 9 14:06:03.558578 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 9 14:06:03.558619 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 9 14:06:03.558662 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 9 14:06:03.558704 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 9 14:06:03.558747 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 9 14:06:03.558788 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 9 14:06:03.558833 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 9 14:06:03.558875 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 9 14:06:03.558918 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 9 14:06:03.558960 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 9 14:06:03.559003 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 9 14:06:03.559047 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 9 14:06:03.559090 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 9 14:06:03.559137 kernel: pci 
0000:04:00.0: Adding to iommu group 16 Feb 9 14:06:03.559180 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 9 14:06:03.559226 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 9 14:06:03.559234 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 14:06:03.559240 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 14:06:03.559245 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 9 14:06:03.559250 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 9 14:06:03.559256 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 14:06:03.559261 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 14:06:03.559268 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 14:06:03.559314 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 14:06:03.559322 kernel: Initialise system trusted keyrings Feb 9 14:06:03.559328 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 14:06:03.559333 kernel: Key type asymmetric registered Feb 9 14:06:03.559338 kernel: Asymmetric key parser 'x509' registered Feb 9 14:06:03.559343 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 14:06:03.559349 kernel: io scheduler mq-deadline registered Feb 9 14:06:03.559355 kernel: io scheduler kyber registered Feb 9 14:06:03.559361 kernel: io scheduler bfq registered Feb 9 14:06:03.559403 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 9 14:06:03.559445 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 9 14:06:03.559487 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 9 14:06:03.559529 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 9 14:06:03.559571 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 9 14:06:03.559614 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 9 14:06:03.559662 kernel: thermal 
LNXTHERM:00: registered as thermal_zone0 Feb 9 14:06:03.559670 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 14:06:03.559676 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 14:06:03.559681 kernel: pstore: Registered erst as persistent store backend Feb 9 14:06:03.559687 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 14:06:03.559692 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 14:06:03.559697 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 14:06:03.559703 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 14:06:03.559710 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 9 14:06:03.559753 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 14:06:03.559761 kernel: i8042: PNP: No PS/2 controller found. Feb 9 14:06:03.559799 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 14:06:03.559839 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 14:06:03.559877 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T14:06:02 UTC (1707487562) Feb 9 14:06:03.559916 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 14:06:03.559923 kernel: fail to initialize ptp_kvm Feb 9 14:06:03.559930 kernel: intel_pstate: Intel P-state driver initializing Feb 9 14:06:03.559936 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 14:06:03.559941 kernel: intel_pstate: HWP enabled Feb 9 14:06:03.559946 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 14:06:03.559951 kernel: vesafb: scrolling: redraw Feb 9 14:06:03.559957 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 14:06:03.559962 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000007e9a543d, using 768k, total 768k Feb 9 14:06:03.559967 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 14:06:03.559973 kernel: fb0: VESA VGA frame buffer device Feb 9 
14:06:03.559979 kernel: NET: Registered PF_INET6 protocol family Feb 9 14:06:03.559984 kernel: Segment Routing with IPv6 Feb 9 14:06:03.559989 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 14:06:03.559995 kernel: NET: Registered PF_PACKET protocol family Feb 9 14:06:03.560000 kernel: Key type dns_resolver registered Feb 9 14:06:03.560005 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 14:06:03.560011 kernel: microcode: Microcode Update Driver: v2.2. Feb 9 14:06:03.560016 kernel: IPI shorthand broadcast: enabled Feb 9 14:06:03.560021 kernel: sched_clock: Marking stable (1679894333, 1339732198)->(4439100936, -1419474405) Feb 9 14:06:03.560027 kernel: registered taskstats version 1 Feb 9 14:06:03.560032 kernel: Loading compiled-in X.509 certificates Feb 9 14:06:03.560038 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 14:06:03.560043 kernel: Key type .fscrypt registered Feb 9 14:06:03.560048 kernel: Key type fscrypt-provisioning registered Feb 9 14:06:03.560053 kernel: pstore: Using crash dump compression: deflate Feb 9 14:06:03.560059 kernel: ima: Allocated hash algorithm: sha1 Feb 9 14:06:03.560064 kernel: ima: No architecture policies found Feb 9 14:06:03.560070 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 14:06:03.560075 kernel: Write protecting the kernel read-only data: 28672k Feb 9 14:06:03.560081 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 14:06:03.560086 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 14:06:03.560091 kernel: Run /init as init process Feb 9 14:06:03.560097 kernel: with arguments: Feb 9 14:06:03.560102 kernel: /init Feb 9 14:06:03.560107 kernel: with environment: Feb 9 14:06:03.560112 kernel: HOME=/ Feb 9 14:06:03.560118 kernel: TERM=linux Feb 9 14:06:03.560123 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 14:06:03.560130 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT 
+SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 14:06:03.560137 systemd[1]: Detected architecture x86-64. Feb 9 14:06:03.560142 systemd[1]: Running in initrd. Feb 9 14:06:03.560148 systemd[1]: No hostname configured, using default hostname. Feb 9 14:06:03.560153 systemd[1]: Hostname set to . Feb 9 14:06:03.560158 systemd[1]: Initializing machine ID from random generator. Feb 9 14:06:03.560165 systemd[1]: Queued start job for default target initrd.target. Feb 9 14:06:03.560170 systemd[1]: Started systemd-ask-password-console.path. Feb 9 14:06:03.560175 systemd[1]: Reached target cryptsetup.target. Feb 9 14:06:03.560181 systemd[1]: Reached target paths.target. Feb 9 14:06:03.560186 systemd[1]: Reached target slices.target. Feb 9 14:06:03.560191 systemd[1]: Reached target swap.target. Feb 9 14:06:03.560197 systemd[1]: Reached target timers.target. Feb 9 14:06:03.560202 systemd[1]: Listening on iscsid.socket. Feb 9 14:06:03.560209 systemd[1]: Listening on iscsiuio.socket. Feb 9 14:06:03.560214 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 14:06:03.560220 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 14:06:03.560225 systemd[1]: Listening on systemd-journald.socket. Feb 9 14:06:03.560231 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Feb 9 14:06:03.560236 systemd[1]: Listening on systemd-networkd.socket. Feb 9 14:06:03.560241 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Feb 9 14:06:03.560247 kernel: clocksource: Switched to clocksource tsc Feb 9 14:06:03.560253 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 14:06:03.560259 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Feb 9 14:06:03.560264 systemd[1]: Reached target sockets.target. Feb 9 14:06:03.560270 systemd[1]: Starting kmod-static-nodes.service... Feb 9 14:06:03.560275 systemd[1]: Finished network-cleanup.service. Feb 9 14:06:03.560280 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 14:06:03.560286 systemd[1]: Starting systemd-journald.service... Feb 9 14:06:03.560291 systemd[1]: Starting systemd-modules-load.service... Feb 9 14:06:03.560299 systemd-journald[266]: Journal started Feb 9 14:06:03.560328 systemd-journald[266]: Runtime Journal (/run/log/journal/d527d32c9e874be8b08f1f9580edd6f9) is 8.0M, max 640.1M, 632.1M free. Feb 9 14:06:03.562946 systemd-modules-load[267]: Inserted module 'overlay' Feb 9 14:06:03.569000 audit: BPF prog-id=6 op=LOAD Feb 9 14:06:03.587323 kernel: audit: type=1334 audit(1707487563.569:2): prog-id=6 op=LOAD Feb 9 14:06:03.587338 systemd[1]: Starting systemd-resolved.service... Feb 9 14:06:03.636309 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 14:06:03.636326 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 14:06:03.668308 kernel: Bridge firewalling registered Feb 9 14:06:03.668325 systemd[1]: Started systemd-journald.service. Feb 9 14:06:03.683186 systemd-modules-load[267]: Inserted module 'br_netfilter' Feb 9 14:06:03.732929 kernel: audit: type=1130 audit(1707487563.691:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:03.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:03.689534 systemd-resolved[269]: Positive Trust Anchors: Feb 9 14:06:03.808482 kernel: SCSI subsystem initialized Feb 9 14:06:03.808494 kernel: audit: type=1130 audit(1707487563.745:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:03.808505 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 14:06:03.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:03.689540 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 14:06:03.911615 kernel: device-mapper: uevent: version 1.0.3 Feb 9 14:06:03.911650 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 14:06:03.911675 kernel: audit: type=1130 audit(1707487563.866:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:03.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:03.689559 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 14:06:03.985561 kernel: audit: type=1130 audit(1707487563.920:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:03.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:03.691076 systemd-resolved[269]: Defaulting to hostname 'linux'. Feb 9 14:06:03.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:03.691569 systemd[1]: Finished kmod-static-nodes.service. Feb 9 14:06:04.094048 kernel: audit: type=1130 audit(1707487563.994:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:04.094060 kernel: audit: type=1130 audit(1707487564.047:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:04.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:03.745427 systemd[1]: Started systemd-resolved.service. Feb 9 14:06:03.867092 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 14:06:03.912154 systemd-modules-load[267]: Inserted module 'dm_multipath' Feb 9 14:06:03.920623 systemd[1]: Finished systemd-modules-load.service. Feb 9 14:06:03.994667 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 14:06:04.047595 systemd[1]: Reached target nss-lookup.target. Feb 9 14:06:04.102919 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 14:06:04.122830 systemd[1]: Starting systemd-sysctl.service... Feb 9 14:06:04.131888 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 14:06:04.132574 systemd[1]: Finished systemd-sysctl.service. Feb 9 14:06:04.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:04.134602 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 14:06:04.181529 kernel: audit: type=1130 audit(1707487564.132:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:04.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:04.196635 systemd[1]: Finished dracut-cmdline-ask.service. 
Feb 9 14:06:04.262412 kernel: audit: type=1130 audit(1707487564.196:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:04.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:04.254891 systemd[1]: Starting dracut-cmdline.service... Feb 9 14:06:04.277419 dracut-cmdline[291]: dracut-dracut-053 Feb 9 14:06:04.277419 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 14:06:04.277419 dracut-cmdline[291]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 14:06:04.344383 kernel: Loading iSCSI transport class v2.0-870. Feb 9 14:06:04.344396 kernel: iscsi: registered transport (tcp) Feb 9 14:06:04.393126 kernel: iscsi: registered transport (qla4xxx) Feb 9 14:06:04.393143 kernel: QLogic iSCSI HBA Driver Feb 9 14:06:04.409804 systemd[1]: Finished dracut-cmdline.service. Feb 9 14:06:04.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:04.419088 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 14:06:04.474362 kernel: raid6: avx2x4 gen() 48899 MB/s Feb 9 14:06:04.509369 kernel: raid6: avx2x4 xor() 21730 MB/s Feb 9 14:06:04.544369 kernel: raid6: avx2x2 gen() 54987 MB/s Feb 9 14:06:04.579369 kernel: raid6: avx2x2 xor() 32845 MB/s Feb 9 14:06:04.614368 kernel: raid6: avx2x1 gen() 46261 MB/s Feb 9 14:06:04.649341 kernel: raid6: avx2x1 xor() 28537 MB/s Feb 9 14:06:04.683368 kernel: raid6: sse2x4 gen() 21804 MB/s Feb 9 14:06:04.717341 kernel: raid6: sse2x4 xor() 11982 MB/s Feb 9 14:06:04.751345 kernel: raid6: sse2x2 gen() 22151 MB/s Feb 9 14:06:04.785345 kernel: raid6: sse2x2 xor() 13739 MB/s Feb 9 14:06:04.819368 kernel: raid6: sse2x1 gen() 18650 MB/s Feb 9 14:06:04.871095 kernel: raid6: sse2x1 xor() 9121 MB/s Feb 9 14:06:04.871110 kernel: raid6: using algorithm avx2x2 gen() 54987 MB/s Feb 9 14:06:04.871118 kernel: raid6: .... xor() 32845 MB/s, rmw enabled Feb 9 14:06:04.889231 kernel: raid6: using avx2x2 recovery algorithm Feb 9 14:06:04.935362 kernel: xor: automatically using best checksumming function avx Feb 9 14:06:05.013337 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 14:06:05.018279 systemd[1]: Finished dracut-pre-udev.service. Feb 9 14:06:05.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:05.018000 audit: BPF prog-id=7 op=LOAD Feb 9 14:06:05.018000 audit: BPF prog-id=8 op=LOAD Feb 9 14:06:05.019256 systemd[1]: Starting systemd-udevd.service... Feb 9 14:06:05.026914 systemd-udevd[471]: Using default interface naming scheme 'v252'. Feb 9 14:06:05.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:05.039791 systemd[1]: Started systemd-udevd.service. 
Feb 9 14:06:05.079424 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Feb 9 14:06:05.057521 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 14:06:05.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:05.084412 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 14:06:05.096153 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 14:06:05.165336 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 14:06:05.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:05.191313 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 14:06:05.193312 kernel: libata version 3.00 loaded. Feb 9 14:06:05.228561 kernel: ACPI: bus type USB registered Feb 9 14:06:05.228625 kernel: usbcore: registered new interface driver usbfs Feb 9 14:06:05.228633 kernel: usbcore: registered new interface driver hub Feb 9 14:06:05.246658 kernel: usbcore: registered new device driver usb Feb 9 14:06:05.297703 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 14:06:05.297754 kernel: AES CTR mode by8 optimization enabled Feb 9 14:06:05.298310 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 14:06:05.337873 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 9 14:06:05.337966 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 9 14:06:05.338030 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 14:06:05.338093 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 14:06:05.399308 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 14:06:05.399405 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 14:06:05.399469 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 14:06:05.399479 kernel: scsi host0: ahci Feb 9 14:06:05.399546 kernel: scsi host1: ahci Feb 9 14:06:05.399607 kernel: scsi host2: ahci Feb 9 14:06:05.401308 kernel: scsi host3: ahci Feb 9 14:06:05.401380 kernel: scsi host4: ahci Feb 9 14:06:05.401433 kernel: scsi host5: ahci Feb 9 14:06:05.401484 kernel: scsi host6: ahci Feb 9 14:06:05.401533 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 9 14:06:05.401541 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 9 14:06:05.401547 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 9 14:06:05.401553 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 9 14:06:05.401561 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Feb 9 14:06:05.401568 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 9 14:06:05.401574 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 9 14:06:05.444866 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 14:06:05.444946 kernel: igb: Copyright (c) 
2007-2014 Intel Corporation. Feb 9 14:06:05.457995 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 14:06:05.505909 kernel: pps pps0: new PPS source ptp0 Feb 9 14:06:05.506008 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 14:06:05.506090 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 9 14:06:05.532175 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 14:06:05.532258 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 14:06:05.561677 kernel: hub 1-0:1.0: USB hub found Feb 9 14:06:05.561786 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f1:0a Feb 9 14:06:05.589389 kernel: hub 1-0:1.0: 16 ports detected Feb 9 14:06:05.602667 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 9 14:06:05.616307 kernel: hub 2-0:1.0: USB hub found Feb 9 14:06:05.616415 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 14:06:05.642658 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 14:06:05.642732 kernel: hub 2-0:1.0: 10 ports detected Feb 9 14:06:05.660308 kernel: pps pps1: new PPS source ptp2 Feb 9 14:06:05.660378 kernel: usb: port power management may be unreliable Feb 9 14:06:05.660387 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 14:06:05.678036 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 9 14:06:05.727816 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 14:06:05.727835 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 14:06:05.727902 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 14:06:05.754006 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f1:0b Feb 9 14:06:05.754078 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 9 14:06:05.754132 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 14:06:05.778155 kernel: igb 0000:04:00.0: 
Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 14:06:05.778227 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 14:06:05.858369 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 14:06:05.858447 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 14:06:05.866349 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 14:06:05.893308 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 9 14:06:05.893383 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 14:06:05.922539 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 14:06:05.922612 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 14:06:05.998308 kernel: hub 1-14:1.0: USB hub found Feb 9 14:06:05.998428 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 14:06:06.032307 kernel: hub 1-14:1.0: 4 ports detected Feb 9 14:06:06.032426 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 14:06:06.203866 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 14:06:06.203883 kernel: ata2.00: Features: NCQ-prio Feb 9 14:06:06.242395 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 14:06:06.242412 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 14:06:06.242480 kernel: ata1.00: Features: NCQ-prio Feb 9 14:06:06.257349 kernel: port_module: 9 callbacks suppressed Feb 9 14:06:06.257364 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 9 14:06:06.272822 kernel: ata2.00: configured for UDMA/133 Feb 9 14:06:06.292358 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 14:06:06.308343 kernel: ata1.00: configured for UDMA/133 Feb 9 14:06:06.343365 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 
9 14:06:06.343391 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 14:06:06.401346 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 14:06:06.420309 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 9 14:06:06.441777 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 14:06:06.441799 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 9 14:06:06.441914 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 14:06:06.441923 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 14:06:06.442003 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 14:06:06.442096 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 14:06:06.442166 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 14:06:06.442233 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 14:06:06.442297 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 14:06:06.442398 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 14:06:06.444306 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 14:06:06.444318 kernel: GPT:9289727 != 937703087 Feb 9 14:06:06.444326 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 14:06:06.444336 kernel: GPT:9289727 != 937703087 Feb 9 14:06:06.444344 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 9 14:06:06.444351 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 14:06:06.444359 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 14:06:06.444367 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 14:06:06.528338 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 14:06:06.528418 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 9 14:06:06.528478 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 14:06:06.757391 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 9 14:06:06.792260 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 14:06:06.792351 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 14:06:06.807780 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 14:06:06.822776 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 14:06:06.822792 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 9 14:06:06.856310 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 9 14:06:06.869220 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 14:06:06.958679 kernel: usbcore: registered new interface driver usbhid Feb 9 14:06:06.958694 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (531) Feb 9 14:06:06.958701 kernel: usbhid: USB HID core driver Feb 9 14:06:06.958708 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 9 14:06:06.958784 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 14:06:06.928180 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 14:06:06.968407 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 14:06:06.971137 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Feb 9 14:06:07.114806 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 14:06:07.114980 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 14:06:07.114989 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 14:06:07.010513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 14:06:07.136423 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 14:06:07.125142 systemd[1]: Starting disk-uuid.service... Feb 9 14:06:07.177418 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 14:06:07.177428 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 14:06:07.177469 disk-uuid[688]: Primary Header is updated. Feb 9 14:06:07.177469 disk-uuid[688]: Secondary Entries is updated. Feb 9 14:06:07.177469 disk-uuid[688]: Secondary Header is updated. Feb 9 14:06:07.237391 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 14:06:07.237401 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 14:06:07.237408 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 14:06:08.223279 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 14:06:08.242361 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 14:06:08.242394 disk-uuid[689]: The operation has completed successfully. Feb 9 14:06:08.290768 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 14:06:08.385725 kernel: audit: type=1130 audit(1707487568.297:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.385741 kernel: audit: type=1131 audit(1707487568.297:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:08.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.290812 systemd[1]: Finished disk-uuid.service. Feb 9 14:06:08.414402 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 14:06:08.305629 systemd[1]: Starting verity-setup.service... Feb 9 14:06:08.463103 systemd[1]: Found device dev-mapper-usr.device. Feb 9 14:06:08.474750 systemd[1]: Mounting sysusr-usr.mount... Feb 9 14:06:08.486943 systemd[1]: Finished verity-setup.service. Feb 9 14:06:08.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.555309 kernel: audit: type=1130 audit(1707487568.501:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.614371 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 14:06:08.614644 systemd[1]: Mounted sysusr-usr.mount. Feb 9 14:06:08.621615 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 14:06:08.622008 systemd[1]: Starting ignition-setup.service... 
Feb 9 14:06:08.711198 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 14:06:08.711214 kernel: BTRFS info (device sda6): using free space tree Feb 9 14:06:08.711221 kernel: BTRFS info (device sda6): has skinny extents Feb 9 14:06:08.711228 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 14:06:08.653764 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 14:06:08.719693 systemd[1]: Finished ignition-setup.service. Feb 9 14:06:08.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.737646 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 14:06:08.842855 kernel: audit: type=1130 audit(1707487568.737:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.842946 kernel: audit: type=1130 audit(1707487568.793:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.793971 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 14:06:08.874746 kernel: audit: type=1334 audit(1707487568.851:24): prog-id=9 op=LOAD Feb 9 14:06:08.851000 audit: BPF prog-id=9 op=LOAD Feb 9 14:06:08.852244 systemd[1]: Starting systemd-networkd.service... 
Feb 9 14:06:08.889767 systemd-networkd[879]: lo: Link UP Feb 9 14:06:08.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.889770 systemd-networkd[879]: lo: Gained carrier Feb 9 14:06:08.967644 kernel: audit: type=1130 audit(1707487568.898:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.913496 ignition[867]: Ignition 2.14.0 Feb 9 14:06:08.890090 systemd-networkd[879]: Enumeration completed Feb 9 14:06:08.913500 ignition[867]: Stage: fetch-offline Feb 9 14:06:08.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.890164 systemd[1]: Started systemd-networkd.service. Feb 9 14:06:09.126404 kernel: audit: type=1130 audit(1707487568.993:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:09.126417 kernel: audit: type=1130 audit(1707487569.052:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:09.126424 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 14:06:09.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:08.913526 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 14:06:09.161568 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 9 14:06:08.890956 systemd-networkd[879]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 14:06:08.913540 ignition[867]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 14:06:08.898456 systemd[1]: Reached target network.target. Feb 9 14:06:09.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.922097 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 14:06:09.213385 iscsid[909]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 14:06:09.213385 iscsid[909]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 14:06:09.213385 iscsid[909]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 14:06:09.213385 iscsid[909]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 14:06:09.213385 iscsid[909]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 9 14:06:09.213385 iscsid[909]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 14:06:09.213385 iscsid[909]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 14:06:09.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.942627 unknown[867]: fetched base config from "system" Feb 9 14:06:08.922161 ignition[867]: parsed url from cmdline: "" Feb 9 14:06:08.942631 unknown[867]: fetched user config from "system" Feb 9 14:06:08.922163 ignition[867]: no config URL provided Feb 9 14:06:09.384454 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 14:06:08.960006 systemd[1]: Starting iscsiuio.service... Feb 9 14:06:08.922165 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 14:06:09.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:08.974620 systemd[1]: Started iscsiuio.service. Feb 9 14:06:08.922195 ignition[867]: parsing config with SHA512: 0d8f4af778740d1b884c0bcdc6a098c88278e74e092f0a7ec8045fc0c89eaa0a0a3a803ac433a7cb835b318b44a34e7ad854170ec48a7a0bf7c141fbd44100c6 Feb 9 14:06:08.993514 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 14:06:08.942990 ignition[867]: fetch-offline: fetch-offline passed Feb 9 14:06:09.052570 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 14:06:08.942992 ignition[867]: POST message to Packet Timeline Feb 9 14:06:09.053026 systemd[1]: Starting ignition-kargs.service... 
Feb 9 14:06:08.942997 ignition[867]: POST Status error: resource requires networking Feb 9 14:06:09.127694 systemd-networkd[879]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 14:06:08.943027 ignition[867]: Ignition finished successfully Feb 9 14:06:09.140946 systemd[1]: Starting iscsid.service... Feb 9 14:06:09.130851 ignition[897]: Ignition 2.14.0 Feb 9 14:06:09.168620 systemd[1]: Started iscsid.service. Feb 9 14:06:09.130855 ignition[897]: Stage: kargs Feb 9 14:06:09.188832 systemd[1]: Starting dracut-initqueue.service... Feb 9 14:06:09.130923 ignition[897]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 14:06:09.203486 systemd[1]: Finished dracut-initqueue.service. Feb 9 14:06:09.130933 ignition[897]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 14:06:09.221426 systemd[1]: Reached target remote-fs-pre.target. Feb 9 14:06:09.132296 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 14:06:09.266511 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 14:06:09.134212 ignition[897]: kargs: kargs passed Feb 9 14:06:09.287754 systemd[1]: Reached target remote-fs.target. Feb 9 14:06:09.134216 ignition[897]: POST message to Packet Timeline Feb 9 14:06:09.351523 systemd[1]: Starting dracut-pre-mount.service... Feb 9 14:06:09.134228 ignition[897]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 14:06:09.380876 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 14:06:09.135911 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43488->[::1]:53: read: connection refused Feb 9 14:06:09.391692 systemd[1]: Finished dracut-pre-mount.service. 
Feb 9 14:06:09.336378 ignition[897]: GET https://metadata.packet.net/metadata: attempt #2 Feb 9 14:06:09.409850 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 14:06:09.336819 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50415->[::1]:53: read: connection refused Feb 9 14:06:09.439060 systemd-networkd[879]: enp1s0f1np1: Link UP Feb 9 14:06:09.439258 systemd-networkd[879]: enp1s0f1np1: Gained carrier Feb 9 14:06:09.455741 systemd-networkd[879]: enp1s0f0np0: Link UP Feb 9 14:06:09.456056 systemd-networkd[879]: eno2: Link UP Feb 9 14:06:09.456361 systemd-networkd[879]: eno1: Link UP Feb 9 14:06:09.736985 ignition[897]: GET https://metadata.packet.net/metadata: attempt #3 Feb 9 14:06:09.738195 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40394->[::1]:53: read: connection refused Feb 9 14:06:10.186123 systemd-networkd[879]: enp1s0f0np0: Gained carrier Feb 9 14:06:10.194559 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 9 14:06:10.221530 systemd-networkd[879]: enp1s0f0np0: DHCPv4 address 139.178.88.165/31, gateway 139.178.88.164 acquired from 145.40.83.140 Feb 9 14:06:10.538722 ignition[897]: GET https://metadata.packet.net/metadata: attempt #4 Feb 9 14:06:10.539988 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53675->[::1]:53: read: connection refused Feb 9 14:06:10.994897 systemd-networkd[879]: enp1s0f1np1: Gained IPv6LL Feb 9 14:06:11.890881 systemd-networkd[879]: enp1s0f0np0: Gained IPv6LL Feb 9 14:06:12.141696 ignition[897]: GET https://metadata.packet.net/metadata: attempt #5 Feb 9 14:06:12.142834 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp 
[::1]:50402->[::1]:53: read: connection refused Feb 9 14:06:15.346334 ignition[897]: GET https://metadata.packet.net/metadata: attempt #6 Feb 9 14:06:15.384283 ignition[897]: GET result: OK Feb 9 14:06:15.577570 ignition[897]: Ignition finished successfully Feb 9 14:06:15.581519 systemd[1]: Finished ignition-kargs.service. Feb 9 14:06:15.669760 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 9 14:06:15.669783 kernel: audit: type=1130 audit(1707487575.593:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:15.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:15.602877 ignition[927]: Ignition 2.14.0 Feb 9 14:06:15.595535 systemd[1]: Starting ignition-disks.service... Feb 9 14:06:15.602880 ignition[927]: Stage: disks Feb 9 14:06:15.602946 ignition[927]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 14:06:15.602956 ignition[927]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 14:06:15.604262 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 14:06:15.605791 ignition[927]: disks: disks passed Feb 9 14:06:15.605794 ignition[927]: POST message to Packet Timeline Feb 9 14:06:15.605804 ignition[927]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 14:06:15.640364 ignition[927]: GET result: OK Feb 9 14:06:15.838937 ignition[927]: Ignition finished successfully Feb 9 14:06:15.842057 systemd[1]: Finished ignition-disks.service. 
Feb 9 14:06:15.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:15.855949 systemd[1]: Reached target initrd-root-device.target. Feb 9 14:06:15.933572 kernel: audit: type=1130 audit(1707487575.855:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:15.919517 systemd[1]: Reached target local-fs-pre.target. Feb 9 14:06:15.919553 systemd[1]: Reached target local-fs.target. Feb 9 14:06:15.942556 systemd[1]: Reached target sysinit.target. Feb 9 14:06:15.956535 systemd[1]: Reached target basic.target. Feb 9 14:06:15.970188 systemd[1]: Starting systemd-fsck-root.service... Feb 9 14:06:15.989462 systemd-fsck[946]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 14:06:16.003355 systemd[1]: Finished systemd-fsck-root.service. Feb 9 14:06:16.092796 kernel: audit: type=1130 audit(1707487576.011:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:16.092810 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 14:06:16.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:16.031834 systemd[1]: Mounting sysroot.mount... Feb 9 14:06:16.099966 systemd[1]: Mounted sysroot.mount. Feb 9 14:06:16.114578 systemd[1]: Reached target initrd-root-fs.target. Feb 9 14:06:16.123275 systemd[1]: Mounting sysroot-usr.mount... Feb 9 14:06:16.149174 systemd[1]: Starting flatcar-metadata-hostname.service... 
Feb 9 14:06:16.157798 systemd[1]: Starting flatcar-static-network.service...
Feb 9 14:06:16.174452 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 14:06:16.174490 systemd[1]: Reached target ignition-diskful.target.
Feb 9 14:06:16.192003 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 14:06:16.216030 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 14:06:16.257329 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (957)
Feb 9 14:06:16.227847 systemd[1]: Starting initrd-setup-root.service...
Feb 9 14:06:16.340438 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 14:06:16.340458 kernel: BTRFS info (device sda6): using free space tree
Feb 9 14:06:16.340466 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 14:06:16.340474 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 9 14:06:16.340481 initrd-setup-root[964]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 14:06:16.412511 kernel: audit: type=1130 audit(1707487576.359:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.412547 coreos-metadata[953]: Feb 09 14:06:16.264 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 14:06:16.412547 coreos-metadata[953]: Feb 09 14:06:16.286 INFO Fetch successful
Feb 9 14:06:16.412547 coreos-metadata[953]: Feb 09 14:06:16.304 INFO wrote hostname ci-3510.3.2-a-80177560a3 to /sysroot/etc/hostname
Feb 9 14:06:16.618573 kernel: audit: type=1130 audit(1707487576.421:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.618587 kernel: audit: type=1130 audit(1707487576.484:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.618595 kernel: audit: type=1131 audit(1707487576.484:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.278594 systemd[1]: Finished initrd-setup-root.service.
Feb 9 14:06:16.633432 coreos-metadata[954]: Feb 09 14:06:16.264 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 14:06:16.633432 coreos-metadata[954]: Feb 09 14:06:16.289 INFO Fetch successful
Feb 9 14:06:16.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.688475 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory
Feb 9 14:06:16.726539 kernel: audit: type=1130 audit(1707487576.660:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.360620 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 14:06:16.735578 initrd-setup-root[980]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 14:06:16.421640 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 9 14:06:16.755503 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 14:06:16.421681 systemd[1]: Finished flatcar-static-network.service.
Feb 9 14:06:16.773511 ignition[1030]: INFO : Ignition 2.14.0
Feb 9 14:06:16.773511 ignition[1030]: INFO : Stage: mount
Feb 9 14:06:16.773511 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 14:06:16.773511 ignition[1030]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 14:06:16.773511 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 14:06:16.773511 ignition[1030]: INFO : mount: mount passed
Feb 9 14:06:16.773511 ignition[1030]: INFO : POST message to Packet Timeline
Feb 9 14:06:16.773511 ignition[1030]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 14:06:16.773511 ignition[1030]: INFO : GET result: OK
Feb 9 14:06:16.484605 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 14:06:16.605991 systemd[1]: Starting ignition-mount.service...
Feb 9 14:06:16.625930 systemd[1]: Starting sysroot-boot.service...
Feb 9 14:06:16.640836 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 14:06:16.640909 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 14:06:16.643073 systemd[1]: Finished sysroot-boot.service.
Feb 9 14:06:16.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.982258 ignition[1030]: INFO : Ignition finished successfully
Feb 9 14:06:16.998343 kernel: audit: type=1130 audit(1707487576.914:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:16.900226 systemd[1]: Finished ignition-mount.service.
Feb 9 14:06:17.040440 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1044)
Feb 9 14:06:17.040451 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 14:06:16.916555 systemd[1]: Starting ignition-files.service...
Feb 9 14:06:17.092429 kernel: BTRFS info (device sda6): using free space tree
Feb 9 14:06:17.092442 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 14:06:17.092450 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 9 14:06:16.991224 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 14:06:17.126668 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 14:06:17.151547 ignition[1063]: INFO : Ignition 2.14.0
Feb 9 14:06:17.151547 ignition[1063]: INFO : Stage: files
Feb 9 14:06:17.151547 ignition[1063]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 14:06:17.151547 ignition[1063]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 14:06:17.151547 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 14:06:17.151547 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 14:06:17.151547 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 14:06:17.151547 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 14:06:17.153934 unknown[1063]: wrote ssh authorized keys file for user: core
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 14:06:17.250455 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 14:06:17.768813 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 14:06:17.865108 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 14:06:17.865108 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 14:06:17.908557 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 14:06:17.908557 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 14:06:18.286110 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 14:06:18.337077 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 14:06:18.362576 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 14:06:18.362576 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 14:06:18.362576 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 9 14:06:18.827732 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 14:06:25.224887 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 9 14:06:25.250637 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 14:06:25.250637 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 14:06:25.250637 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 9 14:06:25.297377 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 14:06:37.731570 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 9 14:06:37.731570 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 14:06:37.772627 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 14:06:37.772627 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 9 14:06:37.806375 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 14:06:37.932128 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 9 14:06:37.932128 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 14:06:37.932128 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 14:06:37.990530 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 14:06:37.990530 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 14:06:37.990530 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 14:06:38.425132 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 14:06:38.477432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 14:06:38.477432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 14:06:38.525544 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1063)
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem218155766"
Feb 9 14:06:38.525609 ignition[1063]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem218155766": device or resource busy
Feb 9 14:06:38.525609 ignition[1063]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem218155766", trying btrfs: device or resource busy
Feb 9 14:06:38.525609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem218155766"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem218155766"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem218155766"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem218155766"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(15): [started] processing unit "packet-phone-home.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(18): [started] processing unit "prepare-critools.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(18): [finished] processing unit "prepare-critools.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(1a): [started] processing unit "prepare-helm.service"
Feb 9 14:06:38.783650 ignition[1063]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 14:06:39.290500 kernel: audit: type=1130 audit(1707487598.924:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.290530 kernel: audit: type=1130 audit(1707487599.038:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.290544 kernel: audit: type=1130 audit(1707487599.106:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.290556 kernel: audit: type=1131 audit(1707487599.106:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:38.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1e): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1e): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1f): [started] setting preset to enabled for "packet-phone-home.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(1f): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(20): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 14:06:39.290689 ignition[1063]: INFO : files: files passed
Feb 9 14:06:39.290689 ignition[1063]: INFO : POST message to Packet Timeline
Feb 9 14:06:39.290689 ignition[1063]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 14:06:39.290689 ignition[1063]: INFO : GET result: OK
Feb 9 14:06:39.290689 ignition[1063]: INFO : Ignition finished successfully
Feb 9 14:06:39.857598 kernel: audit: type=1130 audit(1707487599.298:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.857690 kernel: audit: type=1131 audit(1707487599.298:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.857736 kernel: audit: type=1130 audit(1707487599.480:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.857776 kernel: audit: type=1131 audit(1707487599.641:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:38.904800 systemd[1]: Finished ignition-files.service.
Feb 9 14:06:38.929861 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 14:06:39.892630 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 14:06:38.991573 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 14:06:39.990357 kernel: audit: type=1131 audit(1707487599.922:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:38.991880 systemd[1]: Starting ignition-quench.service...
Feb 9 14:06:39.027699 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 14:06:40.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.038768 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 14:06:40.090553 kernel: audit: type=1131 audit(1707487600.014:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:40.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.038832 systemd[1]: Finished ignition-quench.service.
Feb 9 14:06:39.106582 systemd[1]: Reached target ignition-complete.target.
Feb 9 14:06:39.237086 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 14:06:39.280553 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 14:06:39.280612 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 14:06:40.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.298571 systemd[1]: Reached target initrd-fs.target.
Feb 9 14:06:40.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.423532 systemd[1]: Reached target initrd.target.
Feb 9 14:06:40.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.423589 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 14:06:40.223537 ignition[1112]: INFO : Ignition 2.14.0
Feb 9 14:06:40.223537 ignition[1112]: INFO : Stage: umount
Feb 9 14:06:40.223537 ignition[1112]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 14:06:40.223537 ignition[1112]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 14:06:40.223537 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 14:06:40.223537 ignition[1112]: INFO : umount: umount passed
Feb 9 14:06:40.223537 ignition[1112]: INFO : POST message to Packet Timeline
Feb 9 14:06:40.223537 ignition[1112]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 14:06:40.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:40.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:40.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:40.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:40.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.423942 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 14:06:40.375780 iscsid[909]: iscsid shutting down.
Feb 9 14:06:40.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:40.390900 ignition[1112]: INFO : GET result: OK
Feb 9 14:06:40.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:40.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.459674 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 14:06:39.481259 systemd[1]: Starting initrd-cleanup.service...
Feb 9 14:06:39.547347 systemd[1]: Stopped target nss-lookup.target.
Feb 9 14:06:39.577518 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 14:06:40.463585 ignition[1112]: INFO : Ignition finished successfully
Feb 9 14:06:40.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.592699 systemd[1]: Stopped target timers.target.
Feb 9 14:06:40.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.620823 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 14:06:40.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:40.501000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 14:06:39.621021 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 14:06:39.642170 systemd[1]: Stopped target initrd.target.
Feb 9 14:06:40.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.715548 systemd[1]: Stopped target basic.target.
Feb 9 14:06:40.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.729670 systemd[1]: Stopped target ignition-complete.target.
Feb 9 14:06:40.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.761695 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 14:06:40.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.786700 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 14:06:39.801758 systemd[1]: Stopped target remote-fs.target.
Feb 9 14:06:40.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.817886 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 14:06:40.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 14:06:39.836918 systemd[1]: Stopped target sysinit.target.
Feb 9 14:06:40.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:39.849848 systemd[1]: Stopped target local-fs.target. Feb 9 14:06:39.866892 systemd[1]: Stopped target local-fs-pre.target. Feb 9 14:06:40.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:39.882889 systemd[1]: Stopped target swap.target. Feb 9 14:06:39.899788 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 14:06:39.900155 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 14:06:40.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:39.923118 systemd[1]: Stopped target cryptsetup.target. Feb 9 14:06:40.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:39.998607 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 14:06:40.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:39.998687 systemd[1]: Stopped dracut-initqueue.service. Feb 9 14:06:40.014607 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 14:06:40.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:40.014668 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 14:06:40.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:40.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:40.081743 systemd[1]: Stopped target paths.target. Feb 9 14:06:40.097539 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 14:06:40.101560 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 14:06:40.113558 systemd[1]: Stopped target slices.target. Feb 9 14:06:40.129585 systemd[1]: Stopped target sockets.target. Feb 9 14:06:40.147732 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 14:06:40.147873 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 14:06:40.166831 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 14:06:40.167056 systemd[1]: Stopped ignition-files.service. Feb 9 14:06:40.182998 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 14:06:40.183373 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 14:06:40.200041 systemd[1]: Stopping ignition-mount.service... Feb 9 14:06:40.211475 systemd[1]: Stopping iscsid.service... Feb 9 14:06:40.230505 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 14:06:40.230604 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 14:06:40.238133 systemd[1]: Stopping sysroot-boot.service... Feb 9 14:06:40.249480 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Feb 9 14:06:40.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:40.249627 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 14:06:40.275026 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 14:06:40.275386 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 14:06:40.309613 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 14:06:40.311652 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 14:06:40.311886 systemd[1]: Stopped iscsid.service. Feb 9 14:06:40.320645 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 14:06:40.320858 systemd[1]: Stopped sysroot-boot.service. Feb 9 14:06:40.337029 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 14:06:40.337287 systemd[1]: Closed iscsid.socket. Feb 9 14:06:40.350840 systemd[1]: Stopping iscsiuio.service... Feb 9 14:06:40.368114 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 14:06:40.368362 systemd[1]: Stopped iscsiuio.service. Feb 9 14:06:40.383206 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 14:06:40.383447 systemd[1]: Finished initrd-cleanup.service. Feb 9 14:06:41.071322 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Feb 9 14:06:40.400602 systemd[1]: Stopped target network.target. Feb 9 14:06:40.413591 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 14:06:40.413688 systemd[1]: Closed iscsiuio.socket. Feb 9 14:06:40.427918 systemd[1]: Stopping systemd-networkd.service... Feb 9 14:06:40.436464 systemd-networkd[879]: enp1s0f1np1: DHCPv6 lease lost Feb 9 14:06:40.441824 systemd[1]: Stopping systemd-resolved.service... 
Feb 9 14:06:40.446465 systemd-networkd[879]: enp1s0f0np0: DHCPv6 lease lost Feb 9 14:06:41.071000 audit: BPF prog-id=9 op=UNLOAD Feb 9 14:06:40.456236 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 14:06:40.456609 systemd[1]: Stopped systemd-resolved.service. Feb 9 14:06:40.473011 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 14:06:40.473365 systemd[1]: Stopped systemd-networkd.service. Feb 9 14:06:40.479707 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 14:06:40.479748 systemd[1]: Stopped ignition-mount.service. Feb 9 14:06:40.501630 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 14:06:40.501648 systemd[1]: Closed systemd-networkd.socket. Feb 9 14:06:40.517474 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 14:06:40.517509 systemd[1]: Stopped ignition-disks.service. Feb 9 14:06:40.532561 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 14:06:40.532642 systemd[1]: Stopped ignition-kargs.service. Feb 9 14:06:40.547645 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 14:06:40.547763 systemd[1]: Stopped ignition-setup.service. Feb 9 14:06:40.562750 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 14:06:40.562899 systemd[1]: Stopped initrd-setup-root.service. Feb 9 14:06:40.580458 systemd[1]: Stopping network-cleanup.service... Feb 9 14:06:40.594510 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 14:06:40.594686 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 14:06:40.610694 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 14:06:40.610829 systemd[1]: Stopped systemd-sysctl.service. Feb 9 14:06:40.626965 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 14:06:40.627107 systemd[1]: Stopped systemd-modules-load.service. Feb 9 14:06:40.643973 systemd[1]: Stopping systemd-udevd.service... 
Feb 9 14:06:40.661434 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 14:06:40.662822 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 14:06:40.663138 systemd[1]: Stopped systemd-udevd.service. Feb 9 14:06:40.675257 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 14:06:40.675416 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 14:06:40.687736 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 14:06:40.687854 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 14:06:40.702572 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 14:06:40.702709 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 14:06:40.718528 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 14:06:40.718553 systemd[1]: Stopped dracut-cmdline.service. Feb 9 14:06:40.734468 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 14:06:40.734516 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 14:06:40.751436 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 14:06:40.766507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 14:06:40.766650 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 14:06:40.782560 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 14:06:40.782772 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 14:06:40.938370 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 14:06:40.938623 systemd[1]: Stopped network-cleanup.service. Feb 9 14:06:40.949932 systemd[1]: Reached target initrd-switch-root.target. Feb 9 14:06:40.969248 systemd[1]: Starting initrd-switch-root.service... Feb 9 14:06:41.007520 systemd[1]: Switching root. Feb 9 14:06:41.072793 systemd-journald[266]: Journal stopped Feb 9 14:06:44.989755 kernel: SELinux: Class mctp_socket not defined in policy. 
Feb 9 14:06:44.989769 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 14:06:44.989778 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 14:06:44.989783 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 14:06:44.989788 kernel: SELinux: policy capability open_perms=1 Feb 9 14:06:44.989793 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 14:06:44.989799 kernel: SELinux: policy capability always_check_network=0 Feb 9 14:06:44.989805 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 14:06:44.989810 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 14:06:44.989816 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 14:06:44.989821 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 14:06:44.989827 systemd[1]: Successfully loaded SELinux policy in 323.921ms. Feb 9 14:06:44.989834 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.270ms. Feb 9 14:06:44.989841 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 14:06:44.989848 systemd[1]: Detected architecture x86-64. Feb 9 14:06:44.989854 systemd[1]: Detected first boot. Feb 9 14:06:44.989860 systemd[1]: Hostname set to . Feb 9 14:06:44.989867 systemd[1]: Initializing machine ID from random generator. Feb 9 14:06:44.989872 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 14:06:44.989878 systemd[1]: Populated /etc with preset unit settings. Feb 9 14:06:44.989884 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 14:06:44.989891 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 14:06:44.989898 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 14:06:44.989904 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 14:06:44.989910 systemd[1]: Stopped initrd-switch-root.service. Feb 9 14:06:44.989916 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 14:06:44.989922 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 14:06:44.989929 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 14:06:44.989935 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 14:06:44.989941 systemd[1]: Created slice system-getty.slice. Feb 9 14:06:44.989947 systemd[1]: Created slice system-modprobe.slice. Feb 9 14:06:44.989953 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 14:06:44.989959 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 14:06:44.989965 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 14:06:44.989971 systemd[1]: Created slice user.slice. Feb 9 14:06:44.989977 systemd[1]: Started systemd-ask-password-console.path. Feb 9 14:06:44.989984 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 14:06:44.989990 systemd[1]: Set up automount boot.automount. Feb 9 14:06:44.989996 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 14:06:44.990002 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 14:06:44.990010 systemd[1]: Stopped target initrd-fs.target. Feb 9 14:06:44.990016 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 14:06:44.990022 systemd[1]: Reached target integritysetup.target. 
Feb 9 14:06:44.990029 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 14:06:44.990036 systemd[1]: Reached target remote-fs.target. Feb 9 14:06:44.990042 systemd[1]: Reached target slices.target. Feb 9 14:06:44.990048 systemd[1]: Reached target swap.target. Feb 9 14:06:44.990054 systemd[1]: Reached target torcx.target. Feb 9 14:06:44.990060 systemd[1]: Reached target veritysetup.target. Feb 9 14:06:44.990066 systemd[1]: Listening on systemd-coredump.socket. Feb 9 14:06:44.990073 systemd[1]: Listening on systemd-initctl.socket. Feb 9 14:06:44.990079 systemd[1]: Listening on systemd-networkd.socket. Feb 9 14:06:44.990086 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 14:06:44.990093 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 14:06:44.990099 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 14:06:44.990105 systemd[1]: Mounting dev-hugepages.mount... Feb 9 14:06:44.990111 systemd[1]: Mounting dev-mqueue.mount... Feb 9 14:06:44.990118 systemd[1]: Mounting media.mount... Feb 9 14:06:44.990125 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 14:06:44.990132 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 14:06:44.990138 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 14:06:44.990144 systemd[1]: Mounting tmp.mount... Feb 9 14:06:44.990151 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 14:06:44.990157 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 14:06:44.990163 systemd[1]: Starting kmod-static-nodes.service... Feb 9 14:06:44.990170 systemd[1]: Starting modprobe@configfs.service... Feb 9 14:06:44.990176 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 14:06:44.990183 systemd[1]: Starting modprobe@drm.service... Feb 9 14:06:44.990189 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 14:06:44.990196 systemd[1]: Starting modprobe@fuse.service... 
Feb 9 14:06:44.990202 kernel: fuse: init (API version 7.34) Feb 9 14:06:44.990208 systemd[1]: Starting modprobe@loop.service... Feb 9 14:06:44.990214 kernel: loop: module loaded Feb 9 14:06:44.990221 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 14:06:44.990227 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 14:06:44.990234 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 14:06:44.990241 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 14:06:44.990247 kernel: kauditd_printk_skb: 71 callbacks suppressed Feb 9 14:06:44.990253 kernel: audit: type=1131 audit(1707487604.630:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:44.990259 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 14:06:44.990265 kernel: audit: type=1131 audit(1707487604.718:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:44.990271 systemd[1]: Stopped systemd-journald.service. Feb 9 14:06:44.990278 kernel: audit: type=1130 audit(1707487604.782:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:44.990285 kernel: audit: type=1131 audit(1707487604.782:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:44.990291 kernel: audit: type=1334 audit(1707487604.868:118): prog-id=21 op=LOAD Feb 9 14:06:44.990296 kernel: audit: type=1334 audit(1707487604.886:119): prog-id=22 op=LOAD Feb 9 14:06:44.990305 kernel: audit: type=1334 audit(1707487604.904:120): prog-id=23 op=LOAD Feb 9 14:06:44.990312 systemd[1]: Starting systemd-journald.service... Feb 9 14:06:44.990318 kernel: audit: type=1334 audit(1707487604.904:121): prog-id=19 op=UNLOAD Feb 9 14:06:44.990324 kernel: audit: type=1334 audit(1707487604.904:122): prog-id=20 op=UNLOAD Feb 9 14:06:44.990354 systemd[1]: Starting systemd-modules-load.service... Feb 9 14:06:44.990362 kernel: audit: type=1305 audit(1707487604.987:123): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 14:06:44.990387 systemd-journald[1263]: Journal started Feb 9 14:06:44.990411 systemd-journald[1263]: Runtime Journal (/run/log/journal/1d0df4e08c4b432cac590226929bb069) is 8.0M, max 640.1M, 632.1M free. 
Feb 9 14:06:41.488000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 14:06:41.760000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 14:06:41.762000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 14:06:41.762000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 14:06:41.762000 audit: BPF prog-id=10 op=LOAD Feb 9 14:06:41.762000 audit: BPF prog-id=10 op=UNLOAD Feb 9 14:06:41.762000 audit: BPF prog-id=11 op=LOAD Feb 9 14:06:41.762000 audit: BPF prog-id=11 op=UNLOAD Feb 9 14:06:41.829000 audit[1153]: AVC avc: denied { associate } for pid=1153 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 14:06:41.829000 audit[1153]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001258e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1136 pid=1153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 14:06:41.829000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 14:06:41.855000 audit[1153]: AVC avc: denied 
{ associate } for pid=1153 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 14:06:41.855000 audit[1153]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001259b9 a2=1ed a3=0 items=2 ppid=1136 pid=1153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 14:06:41.855000 audit: CWD cwd="/" Feb 9 14:06:41.855000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:41.855000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:41.855000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 14:06:43.366000 audit: BPF prog-id=12 op=LOAD Feb 9 14:06:43.366000 audit: BPF prog-id=3 op=UNLOAD Feb 9 14:06:43.367000 audit: BPF prog-id=13 op=LOAD Feb 9 14:06:43.367000 audit: BPF prog-id=14 op=LOAD Feb 9 14:06:43.367000 audit: BPF prog-id=4 op=UNLOAD Feb 9 14:06:43.367000 audit: BPF prog-id=5 op=UNLOAD Feb 9 14:06:43.367000 audit: BPF prog-id=15 op=LOAD Feb 9 14:06:43.367000 audit: BPF prog-id=12 op=UNLOAD Feb 9 14:06:43.367000 audit: BPF prog-id=16 op=LOAD Feb 9 14:06:43.367000 audit: BPF prog-id=17 op=LOAD Feb 9 14:06:43.367000 audit: BPF prog-id=13 op=UNLOAD Feb 9 14:06:43.367000 audit: BPF prog-id=14 op=UNLOAD Feb 9 14:06:43.368000 audit: BPF 
prog-id=18 op=LOAD Feb 9 14:06:43.368000 audit: BPF prog-id=15 op=UNLOAD Feb 9 14:06:43.368000 audit: BPF prog-id=19 op=LOAD Feb 9 14:06:43.368000 audit: BPF prog-id=20 op=LOAD Feb 9 14:06:43.368000 audit: BPF prog-id=16 op=UNLOAD Feb 9 14:06:43.368000 audit: BPF prog-id=17 op=UNLOAD Feb 9 14:06:43.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:43.421000 audit: BPF prog-id=18 op=UNLOAD Feb 9 14:06:43.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:43.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:44.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:44.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:44.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:44.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:44.868000 audit: BPF prog-id=21 op=LOAD Feb 9 14:06:44.886000 audit: BPF prog-id=22 op=LOAD Feb 9 14:06:44.904000 audit: BPF prog-id=23 op=LOAD Feb 9 14:06:44.904000 audit: BPF prog-id=19 op=UNLOAD Feb 9 14:06:44.904000 audit: BPF prog-id=20 op=UNLOAD Feb 9 14:06:44.987000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 14:06:41.827708 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 14:06:43.365634 systemd[1]: Queued start job for default target multi-user.target. Feb 9 14:06:41.828226 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 14:06:43.369192 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 14:06:41.828242 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 14:06:41.828266 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 14:06:41.828274 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 14:06:41.828298 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 14:06:41.828314 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 14:06:41.828456 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 14:06:41.828487 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 14:06:41.828497 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 14:06:41.829030 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 14:06:41.829056 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 14:06:41.829070 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 14:06:41.829082 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 14:06:41.829094 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 14:06:41.829105 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 14:06:43.016000 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:43Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 14:06:43.016141 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:43Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 14:06:43.016198 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:43Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 
14:06:43.016286 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:43Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 14:06:43.016319 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:43Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 14:06:43.016353 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2024-02-09T14:06:43Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 14:06:44.987000 audit[1263]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffa3cdb110 a2=4000 a3=7fffa3cdb1ac items=0 ppid=1 pid=1263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 14:06:44.987000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 14:06:45.067510 systemd[1]: Starting systemd-network-generator.service... Feb 9 14:06:45.094348 systemd[1]: Starting systemd-remount-fs.service... Feb 9 14:06:45.121357 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 14:06:45.164080 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 14:06:45.164103 systemd[1]: Stopped verity-setup.service. Feb 9 14:06:45.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 14:06:45.209350 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 14:06:45.229504 systemd[1]: Started systemd-journald.service. Feb 9 14:06:45.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.237965 systemd[1]: Mounted dev-hugepages.mount. Feb 9 14:06:45.245583 systemd[1]: Mounted dev-mqueue.mount. Feb 9 14:06:45.252573 systemd[1]: Mounted media.mount. Feb 9 14:06:45.259585 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 14:06:45.268596 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 14:06:45.277643 systemd[1]: Mounted tmp.mount. Feb 9 14:06:45.284667 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 14:06:45.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.293694 systemd[1]: Finished kmod-static-nodes.service. Feb 9 14:06:45.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.303685 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 14:06:45.303795 systemd[1]: Finished modprobe@configfs.service. Feb 9 14:06:45.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:45.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.313736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 14:06:45.313875 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 14:06:45.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.322869 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 14:06:45.323064 systemd[1]: Finished modprobe@drm.service. Feb 9 14:06:45.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.333165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 14:06:45.333495 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 14:06:45.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 14:06:45.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.342127 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 14:06:45.342446 systemd[1]: Finished modprobe@fuse.service. Feb 9 14:06:45.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.351138 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 14:06:45.351469 systemd[1]: Finished modprobe@loop.service. Feb 9 14:06:45.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.360122 systemd[1]: Finished systemd-modules-load.service. Feb 9 14:06:45.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.369228 systemd[1]: Finished systemd-network-generator.service. 
Feb 9 14:06:45.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.378071 systemd[1]: Finished systemd-remount-fs.service. Feb 9 14:06:45.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.387086 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 14:06:45.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.396652 systemd[1]: Reached target network-pre.target. Feb 9 14:06:45.408153 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 14:06:45.419208 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 14:06:45.426576 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 14:06:45.427557 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 14:06:45.434902 systemd[1]: Starting systemd-journal-flush.service... Feb 9 14:06:45.438364 systemd-journald[1263]: Time spent on flushing to /var/log/journal/1d0df4e08c4b432cac590226929bb069 is 14.852ms for 1626 entries. Feb 9 14:06:45.438364 systemd-journald[1263]: System Journal (/var/log/journal/1d0df4e08c4b432cac590226929bb069) is 8.0M, max 195.6M, 187.6M free. Feb 9 14:06:45.477202 systemd-journald[1263]: Received client request to flush runtime journal. Feb 9 14:06:45.450411 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 14:06:45.450933 systemd[1]: Starting systemd-random-seed.service... 
Feb 9 14:06:45.466428 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 14:06:45.466931 systemd[1]: Starting systemd-sysctl.service... Feb 9 14:06:45.474085 systemd[1]: Starting systemd-sysusers.service... Feb 9 14:06:45.480954 systemd[1]: Starting systemd-udev-settle.service... Feb 9 14:06:45.488429 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 14:06:45.496489 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 14:06:45.504546 systemd[1]: Finished systemd-journal-flush.service. Feb 9 14:06:45.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.512507 systemd[1]: Finished systemd-random-seed.service. Feb 9 14:06:45.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.520550 systemd[1]: Finished systemd-sysctl.service. Feb 9 14:06:45.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.528540 systemd[1]: Finished systemd-sysusers.service. Feb 9 14:06:45.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.537470 systemd[1]: Reached target first-boot-complete.target. Feb 9 14:06:45.545578 udevadm[1279]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Feb 9 14:06:45.727755 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 14:06:45.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.736000 audit: BPF prog-id=24 op=LOAD Feb 9 14:06:45.737000 audit: BPF prog-id=25 op=LOAD Feb 9 14:06:45.737000 audit: BPF prog-id=7 op=UNLOAD Feb 9 14:06:45.737000 audit: BPF prog-id=8 op=UNLOAD Feb 9 14:06:45.737661 systemd[1]: Starting systemd-udevd.service... Feb 9 14:06:45.749138 systemd-udevd[1280]: Using default interface naming scheme 'v252'. Feb 9 14:06:45.769653 systemd[1]: Started systemd-udevd.service. Feb 9 14:06:45.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.779561 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 9 14:06:45.779000 audit: BPF prog-id=26 op=LOAD Feb 9 14:06:45.780712 systemd[1]: Starting systemd-networkd.service... Feb 9 14:06:45.807000 audit: BPF prog-id=27 op=LOAD Feb 9 14:06:45.807000 audit: BPF prog-id=28 op=LOAD Feb 9 14:06:45.807000 audit: BPF prog-id=29 op=LOAD Feb 9 14:06:45.808389 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 14:06:45.808434 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 14:06:45.827359 kernel: IPMI message handler: version 39.2 Feb 9 14:06:45.827375 systemd[1]: Starting systemd-userdbd.service... 
Feb 9 14:06:45.845314 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 14:06:45.845354 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1347) Feb 9 14:06:45.886596 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 14:06:45.926057 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 14:06:45.932320 kernel: ACPI: button: Power Button [PWRF] Feb 9 14:06:45.940883 systemd[1]: Started systemd-userdbd.service. Feb 9 14:06:45.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:45.832000 audit[1359]: AVC avc: denied { confidentiality } for pid=1359 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 14:06:45.965312 kernel: ipmi device interface Feb 9 14:06:45.832000 audit[1359]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f6b67f88010 a1=4d8bc a2=7f6b69c21bc5 a3=5 items=42 ppid=1280 pid=1359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 14:06:45.832000 audit: CWD cwd="/" Feb 9 14:06:45.832000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=1 name=(null) inode=12031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=2 name=(null) inode=12031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=3 name=(null) inode=12032 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=4 name=(null) inode=12031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=5 name=(null) inode=12033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=6 name=(null) inode=12031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=7 name=(null) inode=12034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=8 name=(null) inode=12034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=9 name=(null) inode=12035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=10 name=(null) inode=12034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=11 name=(null) inode=12036 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=12 name=(null) inode=12034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=13 name=(null) inode=12037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=14 name=(null) inode=12034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=15 name=(null) inode=12038 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=16 name=(null) inode=12034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=17 name=(null) inode=12039 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=18 name=(null) inode=12031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=19 name=(null) inode=12040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=20 name=(null) inode=12040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 14:06:45.832000 audit: PATH item=21 name=(null) inode=12041 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=22 name=(null) inode=12040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=23 name=(null) inode=12042 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=24 name=(null) inode=12040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=25 name=(null) inode=12043 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=26 name=(null) inode=12040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=27 name=(null) inode=12044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=28 name=(null) inode=12040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=29 name=(null) inode=12045 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=30 
name=(null) inode=12031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=31 name=(null) inode=12046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=32 name=(null) inode=12046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=33 name=(null) inode=12047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=34 name=(null) inode=12046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=35 name=(null) inode=12048 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=36 name=(null) inode=12046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=37 name=(null) inode=12049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=38 name=(null) inode=12046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=39 name=(null) inode=12050 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=40 name=(null) inode=12046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PATH item=41 name=(null) inode=12051 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 14:06:45.832000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 14:06:45.971310 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 14:06:45.971447 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 14:06:45.971533 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 14:06:46.051357 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 14:06:46.072312 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 14:06:46.113700 kernel: ipmi_si: IPMI System Interface driver Feb 9 14:06:46.113762 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 14:06:46.113933 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 14:06:46.153393 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 14:06:46.172629 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 14:06:46.172781 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 14:06:46.215402 systemd-networkd[1327]: bond0: netdev ready Feb 9 14:06:46.217507 systemd-networkd[1327]: lo: Link UP Feb 9 14:06:46.217511 systemd-networkd[1327]: lo: Gained carrier Feb 9 14:06:46.217965 systemd-networkd[1327]: Enumeration completed Feb 9 14:06:46.218037 systemd[1]: Started systemd-networkd.service. Feb 9 14:06:46.218239 systemd-networkd[1327]: bond0: Configuring with /etc/systemd/network/05-bond0.network. 
Feb 9 14:06:46.219002 systemd-networkd[1327]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:7b:7d.network. Feb 9 14:06:46.234312 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 14:06:46.234352 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 14:06:46.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:46.274858 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 14:06:46.274897 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 14:06:46.340496 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 9 14:06:46.340599 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 9 14:06:46.340656 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 14:06:46.424585 kernel: intel_rapl_common: Found RAPL domain package Feb 9 14:06:46.424621 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 9 14:06:46.424710 kernel: intel_rapl_common: Found RAPL domain core Feb 9 14:06:46.457941 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 14:06:46.488310 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 14:06:46.512343 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 9 14:06:46.512956 systemd-networkd[1327]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:7b:7c.network. 
Feb 9 14:06:46.551309 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 14:06:46.551339 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 14:06:46.589389 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 14:06:46.685400 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 14:06:46.685905 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 14:06:46.706354 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 9 14:06:46.745308 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 14:06:46.755636 systemd-networkd[1327]: bond0: Link UP Feb 9 14:06:46.755820 systemd-networkd[1327]: enp1s0f1np1: Link UP Feb 9 14:06:46.755946 systemd-networkd[1327]: enp1s0f1np1: Gained carrier Feb 9 14:06:46.756899 systemd-networkd[1327]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:7b:7c.network. Feb 9 14:06:46.806144 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 14:06:46.806184 kernel: bond0: active interface up! Feb 9 14:06:46.853253 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 14:06:46.853277 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 14:06:46.857588 systemd[1]: Finished systemd-udev-settle.service. Feb 9 14:06:46.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:46.867067 systemd[1]: Starting lvm2-activation-early.service... Feb 9 14:06:46.882812 lvm[1387]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 14:06:46.909706 systemd[1]: Finished lvm2-activation-early.service. 
Feb 9 14:06:46.927354 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:46.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:46.946470 systemd[1]: Reached target cryptsetup.target. Feb 9 14:06:46.949352 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:46.966910 systemd[1]: Starting lvm2-activation.service... Feb 9 14:06:46.968977 lvm[1388]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 14:06:46.972349 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:46.994343 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.016363 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.037358 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.039147 systemd[1]: Finished lvm2-activation.service. Feb 9 14:06:47.058362 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:47.074459 systemd[1]: Reached target local-fs-pre.target. Feb 9 14:06:47.079346 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.096413 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 14:06:47.096428 systemd[1]: Reached target local-fs.target. 
Feb 9 14:06:47.100345 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.117401 systemd[1]: Reached target machines.target. Feb 9 14:06:47.121349 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.139997 systemd[1]: Starting ldconfig.service... Feb 9 14:06:47.142338 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.158919 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 14:06:47.158950 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 14:06:47.159641 systemd[1]: Starting systemd-boot-update.service... Feb 9 14:06:47.163308 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.179814 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 14:06:47.184372 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.184614 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 14:06:47.184711 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 14:06:47.184745 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 14:06:47.185381 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 14:06:47.185575 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1390 (bootctl) Feb 9 14:06:47.186207 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Feb 9 14:06:47.205309 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.205345 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 14:06:47.223348 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.238747 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 14:06:47.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:47.262347 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.282374 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.301386 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.320413 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.339336 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.357338 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.358380 systemd-networkd[1327]: enp1s0f0np0: Link UP Feb 9 14:06:47.358535 systemd-networkd[1327]: bond0: Gained carrier Feb 9 14:06:47.358619 systemd-networkd[1327]: enp1s0f0np0: Gained carrier Feb 9 14:06:47.389827 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 14:06:47.389853 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Feb 9 14:06:47.407934 systemd-networkd[1327]: enp1s0f1np1: Link DOWN Feb 9 14:06:47.407937 systemd-networkd[1327]: enp1s0f1np1: Lost carrier Feb 9 14:06:47.408261 systemd-tmpfiles[1394]: 
/usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 14:06:47.408443 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 9 14:06:47.555312 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 14:06:47.571339 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Feb 9 14:06:47.571815 systemd-networkd[1327]: enp1s0f1np1: Link UP Feb 9 14:06:47.571969 systemd-networkd[1327]: enp1s0f1np1: Gained carrier Feb 9 14:06:47.606372 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 14:06:47.622498 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 14:06:47.634308 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Feb 9 14:06:47.651308 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 14:06:47.651896 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 14:06:47.652297 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 14:06:47.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:47.658002 systemd-fsck[1398]: fsck.fat 4.2 (2021-01-31) Feb 9 14:06:47.658002 systemd-fsck[1398]: /dev/sda1: 789 files, 115332/258078 clusters Feb 9 14:06:47.661736 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 14:06:47.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:47.674149 systemd[1]: Mounting boot.mount... 
Feb 9 14:06:47.685528 systemd[1]: Mounted boot.mount. Feb 9 14:06:47.707088 systemd[1]: Finished systemd-boot-update.service. Feb 9 14:06:47.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:47.734896 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 14:06:47.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 14:06:47.745046 systemd[1]: Starting audit-rules.service... Feb 9 14:06:47.752937 systemd[1]: Starting clean-ca-certificates.service... Feb 9 14:06:47.762919 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 14:06:47.762000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 14:06:47.762000 audit[1417]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3b0670b0 a2=420 a3=0 items=0 ppid=1401 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 14:06:47.762000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 14:06:47.763220 augenrules[1417]: No rules Feb 9 14:06:47.772299 systemd[1]: Starting systemd-resolved.service... Feb 9 14:06:47.781174 systemd[1]: Starting systemd-timesyncd.service... Feb 9 14:06:47.788815 systemd[1]: Starting systemd-update-utmp.service... Feb 9 14:06:47.796599 systemd[1]: Finished audit-rules.service. Feb 9 14:06:47.804479 systemd[1]: Finished clean-ca-certificates.service. 
Feb 9 14:06:47.813455 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 14:06:47.825628 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 14:06:47.826324 systemd[1]: Finished systemd-update-utmp.service. Feb 9 14:06:47.836953 ldconfig[1389]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 14:06:47.839599 systemd[1]: Finished ldconfig.service. Feb 9 14:06:47.846958 systemd[1]: Starting systemd-update-done.service... Feb 9 14:06:47.853461 systemd[1]: Started systemd-timesyncd.service. Feb 9 14:06:47.853686 systemd-resolved[1423]: Positive Trust Anchors: Feb 9 14:06:47.853694 systemd-resolved[1423]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 14:06:47.853723 systemd-resolved[1423]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 14:06:47.857908 systemd-resolved[1423]: Using system hostname 'ci-3510.3.2-a-80177560a3'. Feb 9 14:06:47.862476 systemd[1]: Started systemd-resolved.service. Feb 9 14:06:47.870528 systemd[1]: Finished systemd-update-done.service. Feb 9 14:06:47.879389 systemd[1]: Reached target network.target. Feb 9 14:06:47.888339 systemd[1]: Reached target nss-lookup.target. Feb 9 14:06:47.897339 systemd[1]: Reached target sysinit.target. Feb 9 14:06:47.905376 systemd[1]: Started motdgen.path. Feb 9 14:06:47.913351 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Feb 9 14:06:47.924341 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 14:06:47.933333 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 14:06:47.933352 systemd[1]: Reached target paths.target. Feb 9 14:06:47.941334 systemd[1]: Reached target time-set.target. Feb 9 14:06:47.949411 systemd[1]: Started logrotate.timer. Feb 9 14:06:47.957372 systemd[1]: Started mdadm.timer. Feb 9 14:06:47.965332 systemd[1]: Reached target timers.target. Feb 9 14:06:47.972454 systemd[1]: Listening on dbus.socket. Feb 9 14:06:47.980931 systemd[1]: Starting docker.socket... Feb 9 14:06:47.989824 systemd[1]: Listening on sshd.socket. Feb 9 14:06:47.997460 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 14:06:47.997669 systemd[1]: Listening on docker.socket. Feb 9 14:06:48.004455 systemd[1]: Reached target sockets.target. Feb 9 14:06:48.012410 systemd[1]: Reached target basic.target. Feb 9 14:06:48.019400 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 14:06:48.019414 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 14:06:48.019842 systemd[1]: Starting containerd.service... Feb 9 14:06:48.026811 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 14:06:48.035866 systemd[1]: Starting coreos-metadata.service... Feb 9 14:06:48.042955 systemd[1]: Starting dbus.service... Feb 9 14:06:48.049043 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 14:06:48.053546 jq[1438]: false Feb 9 14:06:48.056010 systemd[1]: Starting extend-filesystems.service... 
Feb 9 14:06:48.057090 coreos-metadata[1431]: Feb 09 14:06:48.057 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 14:06:48.062561 dbus-daemon[1437]: [system] SELinux support is enabled Feb 9 14:06:48.063360 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 14:06:48.063863 extend-filesystems[1439]: Found sda Feb 9 14:06:48.063863 extend-filesystems[1439]: Found sda1 Feb 9 14:06:48.095375 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 9 14:06:48.063946 systemd[1]: Starting motdgen.service... Feb 9 14:06:48.095458 coreos-metadata[1434]: Feb 09 14:06:48.065 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 14:06:48.095567 extend-filesystems[1439]: Found sda2 Feb 9 14:06:48.095567 extend-filesystems[1439]: Found sda3 Feb 9 14:06:48.095567 extend-filesystems[1439]: Found usr Feb 9 14:06:48.095567 extend-filesystems[1439]: Found sda4 Feb 9 14:06:48.095567 extend-filesystems[1439]: Found sda6 Feb 9 14:06:48.095567 extend-filesystems[1439]: Found sda7 Feb 9 14:06:48.095567 extend-filesystems[1439]: Found sda9 Feb 9 14:06:48.095567 extend-filesystems[1439]: Checking size of /dev/sda9 Feb 9 14:06:48.095567 extend-filesystems[1439]: Resized partition /dev/sda9 Feb 9 14:06:48.081135 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 14:06:48.180377 extend-filesystems[1455]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 14:06:48.103037 systemd[1]: Starting prepare-critools.service... Feb 9 14:06:48.146898 systemd[1]: Starting prepare-helm.service... Feb 9 14:06:48.187870 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 14:06:48.202834 systemd[1]: Starting sshd-keygen.service... Feb 9 14:06:48.210656 systemd[1]: Starting systemd-logind.service... 
Feb 9 14:06:48.217389 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 14:06:48.217903 systemd[1]: Starting tcsd.service... Feb 9 14:06:48.224652 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 14:06:48.224968 systemd[1]: Starting update-engine.service... Feb 9 14:06:48.231876 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 14:06:48.232761 systemd-logind[1468]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 14:06:48.232771 systemd-logind[1468]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 14:06:48.232781 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 14:06:48.232875 systemd-logind[1468]: New seat seat0. Feb 9 14:06:48.233837 jq[1471]: true Feb 9 14:06:48.241779 systemd[1]: Started dbus.service. Feb 9 14:06:48.251129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 14:06:48.251220 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 14:06:48.251397 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 14:06:48.251478 systemd[1]: Finished motdgen.service. Feb 9 14:06:48.260624 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 14:06:48.260708 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 14:06:48.266062 tar[1473]: ./ Feb 9 14:06:48.266062 tar[1473]: ./loopback Feb 9 14:06:48.271952 jq[1479]: true Feb 9 14:06:48.272233 update_engine[1470]: I0209 14:06:48.271656 1470 main.cc:92] Flatcar Update Engine starting Feb 9 14:06:48.272469 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 14:06:48.273489 tar[1475]: linux-amd64/helm Feb 9 14:06:48.274648 tar[1474]: crictl Feb 9 14:06:48.275793 update_engine[1470]: I0209 14:06:48.275775 1470 update_check_scheduler.cc:74] Next update check in 7m9s Feb 9 14:06:48.277539 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 14:06:48.277628 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 14:06:48.277701 systemd[1]: Started systemd-logind.service. Feb 9 14:06:48.282551 env[1480]: time="2024-02-09T14:06:48.282517770Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 14:06:48.286740 tar[1473]: ./bandwidth Feb 9 14:06:48.288660 systemd[1]: Started update-engine.service. Feb 9 14:06:48.295025 env[1480]: time="2024-02-09T14:06:48.294981115Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 14:06:48.295934 env[1480]: time="2024-02-09T14:06:48.295923840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 14:06:48.296586 env[1480]: time="2024-02-09T14:06:48.296529894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 14:06:48.296586 env[1480]: time="2024-02-09T14:06:48.296550325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 14:06:48.298261 systemd[1]: Started locksmithd.service. Feb 9 14:06:48.298519 env[1480]: time="2024-02-09T14:06:48.298478988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 14:06:48.298519 env[1480]: time="2024-02-09T14:06:48.298495892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 14:06:48.298519 env[1480]: time="2024-02-09T14:06:48.298507011Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 14:06:48.298519 env[1480]: time="2024-02-09T14:06:48.298515404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 14:06:48.298981 env[1480]: time="2024-02-09T14:06:48.298934042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 14:06:48.299093 bash[1507]: Updated "/home/core/.ssh/authorized_keys" Feb 9 14:06:48.299202 env[1480]: time="2024-02-09T14:06:48.299096622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 14:06:48.299221 env[1480]: time="2024-02-09T14:06:48.299192784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 14:06:48.299221 env[1480]: time="2024-02-09T14:06:48.299207799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 9 14:06:48.299261 env[1480]: time="2024-02-09T14:06:48.299250161Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 14:06:48.299280 env[1480]: time="2024-02-09T14:06:48.299261657Z" level=info msg="metadata content store policy set" policy=shared Feb 9 14:06:48.306447 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 14:06:48.306559 systemd[1]: Reached target system-config.target. Feb 9 14:06:48.308194 env[1480]: time="2024-02-09T14:06:48.308153875Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 14:06:48.308194 env[1480]: time="2024-02-09T14:06:48.308175551Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 14:06:48.308194 env[1480]: time="2024-02-09T14:06:48.308188155Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 14:06:48.308252 env[1480]: time="2024-02-09T14:06:48.308212805Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 14:06:48.308252 env[1480]: time="2024-02-09T14:06:48.308225133Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 14:06:48.308252 env[1480]: time="2024-02-09T14:06:48.308237387Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 14:06:48.308252 env[1480]: time="2024-02-09T14:06:48.308248110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 14:06:48.308318 env[1480]: time="2024-02-09T14:06:48.308258889Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 14:06:48.308318 env[1480]: time="2024-02-09T14:06:48.308269227Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 14:06:48.308318 env[1480]: time="2024-02-09T14:06:48.308279801Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 14:06:48.308318 env[1480]: time="2024-02-09T14:06:48.308291920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 14:06:48.308318 env[1480]: time="2024-02-09T14:06:48.308301578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 14:06:48.308393 env[1480]: time="2024-02-09T14:06:48.308376122Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 14:06:48.308479 env[1480]: time="2024-02-09T14:06:48.308444660Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 14:06:48.308657 env[1480]: time="2024-02-09T14:06:48.308622448Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 14:06:48.308657 env[1480]: time="2024-02-09T14:06:48.308644461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308692 env[1480]: time="2024-02-09T14:06:48.308657355Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 14:06:48.308710 env[1480]: time="2024-02-09T14:06:48.308695915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 14:06:48.308727 env[1480]: time="2024-02-09T14:06:48.308708273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308727 env[1480]: time="2024-02-09T14:06:48.308718565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308759 env[1480]: time="2024-02-09T14:06:48.308727471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308759 env[1480]: time="2024-02-09T14:06:48.308737419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308759 env[1480]: time="2024-02-09T14:06:48.308747509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308759 env[1480]: time="2024-02-09T14:06:48.308756439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308819 env[1480]: time="2024-02-09T14:06:48.308765738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308819 env[1480]: time="2024-02-09T14:06:48.308776884Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 14:06:48.308854 env[1480]: time="2024-02-09T14:06:48.308838801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308854 env[1480]: time="2024-02-09T14:06:48.308847365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.308888 env[1480]: time="2024-02-09T14:06:48.308853731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 14:06:48.308888 env[1480]: time="2024-02-09T14:06:48.308860066Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 14:06:48.308888 env[1480]: time="2024-02-09T14:06:48.308867799Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 14:06:48.308888 env[1480]: time="2024-02-09T14:06:48.308873858Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 14:06:48.308888 env[1480]: time="2024-02-09T14:06:48.308883395Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 14:06:48.308964 env[1480]: time="2024-02-09T14:06:48.308904527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 14:06:48.309098 env[1480]: time="2024-02-09T14:06:48.309026443Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 14:06:48.309098 env[1480]: time="2024-02-09T14:06:48.309080310Z" level=info msg="Connect containerd service" Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309108357Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309479327Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309571792Z" level=info msg="Start subscribing containerd event" Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309602244Z" level=info msg="Start recovering state" Feb 9 14:06:48.310942 env[1480]: 
time="2024-02-09T14:06:48.309624472Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309647836Z" level=info msg="Start event monitor" Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309655896Z" level=info msg="Start snapshots syncer" Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309661811Z" level=info msg="Start cni network conf syncer for default" Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309658118Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309694450Z" level=info msg="containerd successfully booted in 0.027564s" Feb 9 14:06:48.310942 env[1480]: time="2024-02-09T14:06:48.309666979Z" level=info msg="Start streaming server" Feb 9 14:06:48.314455 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 14:06:48.314585 systemd[1]: Reached target user-config.target. Feb 9 14:06:48.321381 tar[1473]: ./ptp Feb 9 14:06:48.325941 systemd[1]: Started containerd.service. Feb 9 14:06:48.333609 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 14:06:48.345805 tar[1473]: ./vlan Feb 9 14:06:48.358724 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 14:06:48.368803 tar[1473]: ./host-device Feb 9 14:06:48.391128 tar[1473]: ./tuning Feb 9 14:06:48.410911 tar[1473]: ./vrf Feb 9 14:06:48.431653 tar[1473]: ./sbr Feb 9 14:06:48.449817 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 14:06:48.451868 tar[1473]: ./tap Feb 9 14:06:48.461718 systemd[1]: Finished sshd-keygen.service. Feb 9 14:06:48.469285 systemd[1]: Starting issuegen.service... Feb 9 14:06:48.475135 tar[1473]: ./dhcp Feb 9 14:06:48.476658 systemd[1]: issuegen.service: Deactivated successfully. 
Feb 9 14:06:48.476765 systemd[1]: Finished issuegen.service. Feb 9 14:06:48.484415 systemd[1]: Starting systemd-user-sessions.service... Feb 9 14:06:48.492644 systemd[1]: Finished systemd-user-sessions.service. Feb 9 14:06:48.501390 systemd[1]: Started getty@tty1.service. Feb 9 14:06:48.509231 systemd[1]: Started serial-getty@ttyS1.service. Feb 9 14:06:48.518525 systemd[1]: Reached target getty.target. Feb 9 14:06:48.534309 tar[1473]: ./static Feb 9 14:06:48.537186 tar[1475]: linux-amd64/LICENSE Feb 9 14:06:48.537229 tar[1475]: linux-amd64/README.md Feb 9 14:06:48.539723 systemd[1]: Finished prepare-critools.service. Feb 9 14:06:48.548571 systemd[1]: Finished prepare-helm.service. Feb 9 14:06:48.551094 tar[1473]: ./firewall Feb 9 14:06:48.576735 tar[1473]: ./macvlan Feb 9 14:06:48.591308 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 9 14:06:48.619143 extend-filesystems[1455]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 9 14:06:48.619143 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 14:06:48.619143 extend-filesystems[1455]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 9 14:06:48.658364 extend-filesystems[1439]: Resized filesystem in /dev/sda9 Feb 9 14:06:48.658364 extend-filesystems[1439]: Found sdb Feb 9 14:06:48.673357 tar[1473]: ./dummy Feb 9 14:06:48.673357 tar[1473]: ./bridge Feb 9 14:06:48.673357 tar[1473]: ./ipvlan Feb 9 14:06:48.619597 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 14:06:48.619692 systemd[1]: Finished extend-filesystems.service. Feb 9 14:06:48.694183 tar[1473]: ./portmap Feb 9 14:06:48.714492 tar[1473]: ./host-local Feb 9 14:06:48.721364 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 14:06:48.738284 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 9 14:06:48.754569 systemd-networkd[1327]: bond0: Gained IPv6LL Feb 9 14:06:48.807362 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:1 Feb 9 14:06:49.848370 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 9 14:06:53.602802 login[1533]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 14:06:53.609482 login[1534]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 14:06:53.638552 systemd-logind[1468]: New session 2 of user core. Feb 9 14:06:53.641130 systemd[1]: Created slice user-500.slice. Feb 9 14:06:53.644099 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 14:06:53.654576 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 14:06:53.655210 systemd[1]: Starting user@500.service... Feb 9 14:06:53.657014 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:06:53.777536 systemd[1543]: Queued start job for default target default.target. Feb 9 14:06:53.777770 systemd[1543]: Reached target paths.target. Feb 9 14:06:53.777781 systemd[1543]: Reached target sockets.target. Feb 9 14:06:53.777789 systemd[1543]: Reached target timers.target. Feb 9 14:06:53.777795 systemd[1543]: Reached target basic.target. Feb 9 14:06:53.777813 systemd[1543]: Reached target default.target. Feb 9 14:06:53.777827 systemd[1543]: Startup finished in 117ms. Feb 9 14:06:53.777883 systemd[1]: Started user@500.service. Feb 9 14:06:53.778473 systemd[1]: Started session-2.scope. 
Feb 9 14:06:54.003526 coreos-metadata[1431]: Feb 09 14:06:54.003 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 14:06:54.004252 coreos-metadata[1434]: Feb 09 14:06:54.003 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 14:06:54.603525 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 14:06:54.614549 systemd-logind[1468]: New session 1 of user core. Feb 9 14:06:54.616872 systemd[1]: Started session-1.scope. Feb 9 14:06:54.944295 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 9 14:06:54.944471 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 9 14:06:55.003835 coreos-metadata[1434]: Feb 09 14:06:55.003 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 14:06:55.004012 coreos-metadata[1431]: Feb 09 14:06:55.003 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 14:06:55.051787 coreos-metadata[1434]: Feb 09 14:06:55.051 INFO Fetch successful Feb 9 14:06:55.052457 coreos-metadata[1431]: Feb 09 14:06:55.052 INFO Fetch successful Feb 9 14:06:55.074462 unknown[1431]: wrote ssh authorized keys file for user: core Feb 9 14:06:55.113424 update-ssh-keys[1563]: Updated "/home/core/.ssh/authorized_keys" Feb 9 14:06:55.114643 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 14:06:55.230265 systemd-timesyncd[1424]: Contacted time server 198.137.202.32:123 (0.flatcar.pool.ntp.org). Feb 9 14:06:55.230416 systemd-timesyncd[1424]: Initial clock synchronization to Fri 2024-02-09 14:06:55.534701 UTC. Feb 9 14:06:55.304598 systemd[1]: Finished coreos-metadata.service. Feb 9 14:06:55.308478 systemd[1]: Started packet-phone-home.service. 
Feb 9 14:06:55.309043 systemd[1]: Reached target multi-user.target. Feb 9 14:06:55.312425 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 14:06:55.325063 curl[1566]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 14:06:55.325660 curl[1566]: Dload Upload Total Spent Left Speed Feb 9 14:06:55.331771 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 14:06:55.332158 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 14:06:55.332692 systemd[1]: Startup finished in 1.851s (kernel) + 38.306s (initrd) + 14.192s (userspace) = 54.350s. Feb 9 14:06:55.634972 curl[1566]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 14:06:55.639712 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 14:06:55.640267 systemd[1]: Created slice system-sshd.slice. Feb 9 14:06:55.640853 systemd[1]: Started sshd@0-139.178.88.165:22-147.75.109.163:35550.service. Feb 9 14:06:55.689764 sshd[1570]: Accepted publickey for core from 147.75.109.163 port 35550 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:06:55.690454 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:06:55.692861 systemd-logind[1468]: New session 3 of user core. Feb 9 14:06:55.693282 systemd[1]: Started session-3.scope. Feb 9 14:06:55.743995 systemd[1]: Started sshd@1-139.178.88.165:22-147.75.109.163:35562.service. Feb 9 14:06:55.779610 sshd[1575]: Accepted publickey for core from 147.75.109.163 port 35562 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:06:55.780347 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:06:55.782644 systemd-logind[1468]: New session 4 of user core. Feb 9 14:06:55.783077 systemd[1]: Started session-4.scope. 
Feb 9 14:06:55.833889 sshd[1575]: pam_unix(sshd:session): session closed for user core Feb 9 14:06:55.836195 systemd[1]: sshd@1-139.178.88.165:22-147.75.109.163:35562.service: Deactivated successfully. Feb 9 14:06:55.836779 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 14:06:55.837297 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Feb 9 14:06:55.838333 systemd[1]: Started sshd@2-139.178.88.165:22-147.75.109.163:35564.service. Feb 9 14:06:55.839113 systemd-logind[1468]: Removed session 4. Feb 9 14:06:55.877108 sshd[1581]: Accepted publickey for core from 147.75.109.163 port 35564 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:06:55.877923 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:06:55.880808 systemd-logind[1468]: New session 5 of user core. Feb 9 14:06:55.881346 systemd[1]: Started session-5.scope. Feb 9 14:06:55.933032 sshd[1581]: pam_unix(sshd:session): session closed for user core Feb 9 14:06:55.934608 systemd[1]: sshd@2-139.178.88.165:22-147.75.109.163:35564.service: Deactivated successfully. Feb 9 14:06:55.934892 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 14:06:55.935184 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Feb 9 14:06:55.935724 systemd[1]: Started sshd@3-139.178.88.165:22-147.75.109.163:35570.service. Feb 9 14:06:55.936148 systemd-logind[1468]: Removed session 5. Feb 9 14:06:55.972368 sshd[1587]: Accepted publickey for core from 147.75.109.163 port 35570 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:06:55.973244 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:06:55.976120 systemd-logind[1468]: New session 6 of user core. Feb 9 14:06:55.976724 systemd[1]: Started session-6.scope. 
Feb 9 14:06:56.032937 sshd[1587]: pam_unix(sshd:session): session closed for user core Feb 9 14:06:56.034383 systemd[1]: sshd@3-139.178.88.165:22-147.75.109.163:35570.service: Deactivated successfully. Feb 9 14:06:56.034698 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 14:06:56.035036 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Feb 9 14:06:56.035584 systemd[1]: Started sshd@4-139.178.88.165:22-147.75.109.163:35584.service. Feb 9 14:06:56.035995 systemd-logind[1468]: Removed session 6. Feb 9 14:06:56.072779 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 35584 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:06:56.073758 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:06:56.077121 systemd-logind[1468]: New session 7 of user core. Feb 9 14:06:56.077823 systemd[1]: Started session-7.scope. Feb 9 14:06:56.162494 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 14:06:56.163101 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 14:07:00.245109 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 14:07:00.249866 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 14:07:00.250071 systemd[1]: Reached target network-online.target. Feb 9 14:07:00.250816 systemd[1]: Starting docker.service... 
Feb 9 14:07:00.268931 env[1616]: time="2024-02-09T14:07:00.268903524Z" level=info msg="Starting up" Feb 9 14:07:00.269539 env[1616]: time="2024-02-09T14:07:00.269500554Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 14:07:00.269539 env[1616]: time="2024-02-09T14:07:00.269508486Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 14:07:00.269539 env[1616]: time="2024-02-09T14:07:00.269519166Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 14:07:00.269539 env[1616]: time="2024-02-09T14:07:00.269524807Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 14:07:00.270990 env[1616]: time="2024-02-09T14:07:00.270952408Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 14:07:00.270990 env[1616]: time="2024-02-09T14:07:00.270961091Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 14:07:00.270990 env[1616]: time="2024-02-09T14:07:00.270968510Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 14:07:00.270990 env[1616]: time="2024-02-09T14:07:00.270973551Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 14:07:00.285832 env[1616]: time="2024-02-09T14:07:00.285788537Z" level=info msg="Loading containers: start." Feb 9 14:07:00.372406 kernel: Initializing XFRM netlink socket Feb 9 14:07:00.396404 env[1616]: time="2024-02-09T14:07:00.396383929Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 14:07:00.453043 systemd-networkd[1327]: docker0: Link UP Feb 9 14:07:00.458793 env[1616]: time="2024-02-09T14:07:00.458749595Z" level=info msg="Loading containers: done." 
Feb 9 14:07:00.463852 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2005012735-merged.mount: Deactivated successfully. Feb 9 14:07:00.464039 env[1616]: time="2024-02-09T14:07:00.464024980Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 14:07:00.464125 env[1616]: time="2024-02-09T14:07:00.464116607Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 14:07:00.464174 env[1616]: time="2024-02-09T14:07:00.464166643Z" level=info msg="Daemon has completed initialization" Feb 9 14:07:00.471712 systemd[1]: Started docker.service. Feb 9 14:07:00.475860 env[1616]: time="2024-02-09T14:07:00.475800467Z" level=info msg="API listen on /run/docker.sock" Feb 9 14:07:00.488982 systemd[1]: Reloading. Feb 9 14:07:00.532071 /usr/lib/systemd/system-generators/torcx-generator[1776]: time="2024-02-09T14:07:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 14:07:00.532095 /usr/lib/systemd/system-generators/torcx-generator[1776]: time="2024-02-09T14:07:00Z" level=info msg="torcx already run" Feb 9 14:07:00.583279 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 14:07:00.583287 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 14:07:00.594992 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 14:07:00.651079 systemd[1]: Started kubelet.service. Feb 9 14:07:00.673624 kubelet[1832]: E0209 14:07:00.673566 1832 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 14:07:00.674957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 14:07:00.675057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 14:07:01.497223 env[1480]: time="2024-02-09T14:07:01.497062850Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 14:07:02.203500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114514053.mount: Deactivated successfully. Feb 9 14:07:03.586671 env[1480]: time="2024-02-09T14:07:03.586632969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:03.587385 env[1480]: time="2024-02-09T14:07:03.587343628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:03.588165 env[1480]: time="2024-02-09T14:07:03.588153862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:03.589081 env[1480]: time="2024-02-09T14:07:03.589035215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:03.589939 
env[1480]: time="2024-02-09T14:07:03.589897093Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 9 14:07:03.595682 env[1480]: time="2024-02-09T14:07:03.595649774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 14:07:05.527070 env[1480]: time="2024-02-09T14:07:05.526989343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:05.527683 env[1480]: time="2024-02-09T14:07:05.527635077Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:05.528468 env[1480]: time="2024-02-09T14:07:05.528428407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:05.529338 env[1480]: time="2024-02-09T14:07:05.529288857Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:05.531113 env[1480]: time="2024-02-09T14:07:05.531068447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 9 14:07:05.536777 env[1480]: time="2024-02-09T14:07:05.536746038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 14:07:06.656452 env[1480]: time="2024-02-09T14:07:06.656392094Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:06.657025 env[1480]: time="2024-02-09T14:07:06.656973542Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:06.658197 env[1480]: time="2024-02-09T14:07:06.658162482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:06.660154 env[1480]: time="2024-02-09T14:07:06.660111680Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:06.661045 env[1480]: time="2024-02-09T14:07:06.661002355Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 9 14:07:06.666449 env[1480]: time="2024-02-09T14:07:06.666394498Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 14:07:07.609728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005220661.mount: Deactivated successfully. 
Feb 9 14:07:07.953925 env[1480]: time="2024-02-09T14:07:07.953835446Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:07.954797 env[1480]: time="2024-02-09T14:07:07.954740676Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:07.955541 env[1480]: time="2024-02-09T14:07:07.955528432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:07.956377 env[1480]: time="2024-02-09T14:07:07.956363535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:07.956525 env[1480]: time="2024-02-09T14:07:07.956512154Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 14:07:07.962137 env[1480]: time="2024-02-09T14:07:07.962083053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 14:07:08.500254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142418991.mount: Deactivated successfully. 
Feb 9 14:07:08.501512 env[1480]: time="2024-02-09T14:07:08.501461211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:08.502111 env[1480]: time="2024-02-09T14:07:08.502061362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:08.502719 env[1480]: time="2024-02-09T14:07:08.502678273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:08.503668 env[1480]: time="2024-02-09T14:07:08.503624897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:08.503840 env[1480]: time="2024-02-09T14:07:08.503795488Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 14:07:08.511581 env[1480]: time="2024-02-09T14:07:08.511561674Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 14:07:09.116392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602672189.mount: Deactivated successfully. Feb 9 14:07:10.831522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 14:07:10.831639 systemd[1]: Stopped kubelet.service. Feb 9 14:07:10.832539 systemd[1]: Started kubelet.service. 
Feb 9 14:07:10.857898 kubelet[1914]: E0209 14:07:10.857857 1914 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 14:07:10.860176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 14:07:10.860247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 14:07:12.246378 env[1480]: time="2024-02-09T14:07:12.246353662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:12.246993 env[1480]: time="2024-02-09T14:07:12.246983842Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:12.247978 env[1480]: time="2024-02-09T14:07:12.247964112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:12.249113 env[1480]: time="2024-02-09T14:07:12.249101190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:12.249616 env[1480]: time="2024-02-09T14:07:12.249602968Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 9 14:07:12.255418 env[1480]: time="2024-02-09T14:07:12.255372277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 
14:07:12.780528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935041282.mount: Deactivated successfully. Feb 9 14:07:13.304295 env[1480]: time="2024-02-09T14:07:13.304214378Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:13.304862 env[1480]: time="2024-02-09T14:07:13.304811125Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:13.305600 env[1480]: time="2024-02-09T14:07:13.305565998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:13.306362 env[1480]: time="2024-02-09T14:07:13.306297227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:13.306716 env[1480]: time="2024-02-09T14:07:13.306674663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 14:07:15.070181 systemd[1]: Stopped kubelet.service. Feb 9 14:07:15.080532 systemd[1]: Reloading. 
Feb 9 14:07:15.106647 /usr/lib/systemd/system-generators/torcx-generator[2073]: time="2024-02-09T14:07:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 14:07:15.106663 /usr/lib/systemd/system-generators/torcx-generator[2073]: time="2024-02-09T14:07:15Z" level=info msg="torcx already run" Feb 9 14:07:15.167090 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 14:07:15.167099 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 14:07:15.180465 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 14:07:15.238163 systemd[1]: Started kubelet.service. Feb 9 14:07:15.260171 kubelet[2133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 14:07:15.260171 kubelet[2133]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 14:07:15.260171 kubelet[2133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 14:07:15.260171 kubelet[2133]: I0209 14:07:15.260163 2133 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 14:07:15.514833 kubelet[2133]: I0209 14:07:15.514800 2133 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 14:07:15.514833 kubelet[2133]: I0209 14:07:15.514827 2133 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 14:07:15.514986 kubelet[2133]: I0209 14:07:15.514940 2133 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 14:07:15.517136 kubelet[2133]: I0209 14:07:15.517055 2133 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 14:07:15.518315 kubelet[2133]: E0209 14:07:15.518302 2133 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.88.165:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.538886 kubelet[2133]: I0209 14:07:15.538848 2133 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 14:07:15.539048 kubelet[2133]: I0209 14:07:15.539009 2133 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 14:07:15.539122 kubelet[2133]: I0209 14:07:15.539090 2133 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 14:07:15.539122 kubelet[2133]: I0209 14:07:15.539101 2133 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 14:07:15.539122 kubelet[2133]: I0209 14:07:15.539106 2133 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 14:07:15.539218 kubelet[2133]: I0209 
14:07:15.539156 2133 state_mem.go:36] "Initialized new in-memory state store" Feb 9 14:07:15.539218 kubelet[2133]: I0209 14:07:15.539195 2133 kubelet.go:393] "Attempting to sync node with API server" Feb 9 14:07:15.539218 kubelet[2133]: I0209 14:07:15.539202 2133 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 14:07:15.539218 kubelet[2133]: I0209 14:07:15.539213 2133 kubelet.go:309] "Adding apiserver pod source" Feb 9 14:07:15.539378 kubelet[2133]: I0209 14:07:15.539222 2133 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 14:07:15.539556 kubelet[2133]: I0209 14:07:15.539547 2133 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 14:07:15.539638 kubelet[2133]: W0209 14:07:15.539579 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.88.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-80177560a3&limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.539669 kubelet[2133]: E0209 14:07:15.539641 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.88.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-80177560a3&limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.539669 kubelet[2133]: W0209 14:07:15.539633 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.88.165:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.539669 kubelet[2133]: E0209 14:07:15.539656 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://139.178.88.165:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.539798 kubelet[2133]: W0209 14:07:15.539690 2133 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 14:07:15.540020 kubelet[2133]: I0209 14:07:15.539995 2133 server.go:1232] "Started kubelet" Feb 9 14:07:15.540088 kubelet[2133]: I0209 14:07:15.540078 2133 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 14:07:15.540129 kubelet[2133]: I0209 14:07:15.540085 2133 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 14:07:15.540213 kubelet[2133]: E0209 14:07:15.540203 2133 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 14:07:15.540250 kubelet[2133]: E0209 14:07:15.540172 2133 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-80177560a3.17b236f536bb7c2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-80177560a3", UID:"ci-3510.3.2-a-80177560a3", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-80177560a3"}, FirstTimestamp:time.Date(2024, time.February, 9, 14, 7, 15, 539983407, time.Local), 
LastTimestamp:time.Date(2024, time.February, 9, 14, 7, 15, 539983407, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-80177560a3"}': 'Post "https://139.178.88.165:6443/api/v1/namespaces/default/events": dial tcp 139.178.88.165:6443: connect: connection refused'(may retry after sleeping) Feb 9 14:07:15.540250 kubelet[2133]: E0209 14:07:15.540218 2133 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 14:07:15.540250 kubelet[2133]: I0209 14:07:15.540242 2133 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 14:07:15.540805 kubelet[2133]: I0209 14:07:15.540791 2133 server.go:462] "Adding debug handlers to kubelet server" Feb 9 14:07:15.549981 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 14:07:15.550186 kubelet[2133]: I0209 14:07:15.550176 2133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 14:07:15.550274 kubelet[2133]: I0209 14:07:15.550258 2133 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 14:07:15.550365 kubelet[2133]: I0209 14:07:15.550353 2133 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 14:07:15.550424 kubelet[2133]: I0209 14:07:15.550415 2133 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 14:07:15.550807 kubelet[2133]: E0209 14:07:15.550797 2133 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.88.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-80177560a3?timeout=10s\": dial tcp 139.178.88.165:6443: connect: connection refused" interval="200ms" Feb 9 14:07:15.550807 kubelet[2133]: W0209 14:07:15.550782 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.88.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.550887 kubelet[2133]: E0209 14:07:15.550822 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.88.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.557684 kubelet[2133]: I0209 14:07:15.557670 2133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 14:07:15.558143 kubelet[2133]: I0209 14:07:15.558134 2133 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 14:07:15.558188 kubelet[2133]: I0209 14:07:15.558149 2133 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 14:07:15.558188 kubelet[2133]: I0209 14:07:15.558160 2133 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 14:07:15.558188 kubelet[2133]: E0209 14:07:15.558183 2133 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 14:07:15.558490 kubelet[2133]: W0209 14:07:15.558457 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.88.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.558553 kubelet[2133]: E0209 14:07:15.558518 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.88.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:15.570832 kubelet[2133]: I0209 14:07:15.570825 2133 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 14:07:15.570832 kubelet[2133]: I0209 14:07:15.570833 2133 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 14:07:15.570909 kubelet[2133]: I0209 14:07:15.570840 2133 state_mem.go:36] "Initialized new in-memory state store" Feb 9 14:07:15.571644 kubelet[2133]: I0209 14:07:15.571609 2133 policy_none.go:49] "None policy: Start" Feb 9 14:07:15.571988 kubelet[2133]: I0209 14:07:15.571924 2133 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 14:07:15.571988 kubelet[2133]: I0209 14:07:15.571938 2133 state_mem.go:35] "Initializing new in-memory state store" Feb 9 14:07:15.574385 systemd[1]: Created slice kubepods.slice. 
Feb 9 14:07:15.576570 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 14:07:15.577928 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 14:07:15.592914 kubelet[2133]: I0209 14:07:15.592873 2133 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 14:07:15.593052 kubelet[2133]: I0209 14:07:15.593020 2133 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 14:07:15.593230 kubelet[2133]: E0209 14:07:15.593221 2133 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:15.654332 kubelet[2133]: I0209 14:07:15.654275 2133 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.655004 kubelet[2133]: E0209 14:07:15.654961 2133 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.88.165:6443/api/v1/nodes\": dial tcp 139.178.88.165:6443: connect: connection refused" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.659336 kubelet[2133]: I0209 14:07:15.659245 2133 topology_manager.go:215] "Topology Admit Handler" podUID="737f5b8590d77b250619995325648a6b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.662604 kubelet[2133]: I0209 14:07:15.662529 2133 topology_manager.go:215] "Topology Admit Handler" podUID="7b77ae27aa72f0ee6bf28537e9b11e50" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.666099 kubelet[2133]: I0209 14:07:15.666024 2133 topology_manager.go:215] "Topology Admit Handler" podUID="ede0b243bca64c13f55ab7bc7c0794b6" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.678892 systemd[1]: Created slice kubepods-burstable-pod737f5b8590d77b250619995325648a6b.slice. 
Feb 9 14:07:15.700907 systemd[1]: Created slice kubepods-burstable-pod7b77ae27aa72f0ee6bf28537e9b11e50.slice. Feb 9 14:07:15.710091 systemd[1]: Created slice kubepods-burstable-podede0b243bca64c13f55ab7bc7c0794b6.slice. Feb 9 14:07:15.751713 kubelet[2133]: I0209 14:07:15.751640 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.751974 kubelet[2133]: I0209 14:07:15.751775 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.751974 kubelet[2133]: E0209 14:07:15.751909 2133 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.88.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-80177560a3?timeout=10s\": dial tcp 139.178.88.165:6443: connect: connection refused" interval="400ms" Feb 9 14:07:15.853019 kubelet[2133]: I0209 14:07:15.852941 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.853348 kubelet[2133]: I0209 14:07:15.853089 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.853348 kubelet[2133]: I0209 14:07:15.853231 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.853614 kubelet[2133]: I0209 14:07:15.853376 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b77ae27aa72f0ee6bf28537e9b11e50-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-80177560a3\" (UID: \"7b77ae27aa72f0ee6bf28537e9b11e50\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.853614 kubelet[2133]: I0209 14:07:15.853471 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ede0b243bca64c13f55ab7bc7c0794b6-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-80177560a3\" (UID: \"ede0b243bca64c13f55ab7bc7c0794b6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.853614 kubelet[2133]: I0209 14:07:15.853535 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ede0b243bca64c13f55ab7bc7c0794b6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-80177560a3\" (UID: \"ede0b243bca64c13f55ab7bc7c0794b6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 
14:07:15.853614 kubelet[2133]: I0209 14:07:15.853603 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ede0b243bca64c13f55ab7bc7c0794b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-80177560a3\" (UID: \"ede0b243bca64c13f55ab7bc7c0794b6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.859437 kubelet[2133]: I0209 14:07:15.859362 2133 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.860147 kubelet[2133]: E0209 14:07:15.860076 2133 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.88.165:6443/api/v1/nodes\": dial tcp 139.178.88.165:6443: connect: connection refused" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:15.997839 env[1480]: time="2024-02-09T14:07:15.997755124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-80177560a3,Uid:737f5b8590d77b250619995325648a6b,Namespace:kube-system,Attempt:0,}" Feb 9 14:07:16.006906 env[1480]: time="2024-02-09T14:07:16.006774336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-80177560a3,Uid:7b77ae27aa72f0ee6bf28537e9b11e50,Namespace:kube-system,Attempt:0,}" Feb 9 14:07:16.016061 env[1480]: time="2024-02-09T14:07:16.015956702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-80177560a3,Uid:ede0b243bca64c13f55ab7bc7c0794b6,Namespace:kube-system,Attempt:0,}" Feb 9 14:07:16.153297 kubelet[2133]: E0209 14:07:16.153079 2133 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.88.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-80177560a3?timeout=10s\": dial tcp 139.178.88.165:6443: connect: connection refused" interval="800ms" Feb 9 14:07:16.266510 kubelet[2133]: I0209 
14:07:16.266460 2133 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:16.267246 kubelet[2133]: E0209 14:07:16.267139 2133 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.88.165:6443/api/v1/nodes\": dial tcp 139.178.88.165:6443: connect: connection refused" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:16.463912 kubelet[2133]: W0209 14:07:16.463661 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.88.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:16.463912 kubelet[2133]: E0209 14:07:16.463792 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.88.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:16.487935 kubelet[2133]: W0209 14:07:16.487786 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.88.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:16.487935 kubelet[2133]: E0209 14:07:16.487943 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.88.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:16.579215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459293845.mount: Deactivated successfully. 
Feb 9 14:07:16.580137 env[1480]: time="2024-02-09T14:07:16.580088436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.581356 env[1480]: time="2024-02-09T14:07:16.581319102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.582106 env[1480]: time="2024-02-09T14:07:16.582067195Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.582766 env[1480]: time="2024-02-09T14:07:16.582725677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.583116 env[1480]: time="2024-02-09T14:07:16.583076745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.583446 env[1480]: time="2024-02-09T14:07:16.583409223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.585039 env[1480]: time="2024-02-09T14:07:16.585000224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.586570 env[1480]: time="2024-02-09T14:07:16.586531701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
14:07:16.586904 env[1480]: time="2024-02-09T14:07:16.586866797Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.588083 env[1480]: time="2024-02-09T14:07:16.588068723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.588476 env[1480]: time="2024-02-09T14:07:16.588465027Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.588850 env[1480]: time="2024-02-09T14:07:16.588838405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:16.592862 env[1480]: time="2024-02-09T14:07:16.592810898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:07:16.592862 env[1480]: time="2024-02-09T14:07:16.592831852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:07:16.592862 env[1480]: time="2024-02-09T14:07:16.592838723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:07:16.592964 env[1480]: time="2024-02-09T14:07:16.592900747Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d5a9ed4b9d42d086861cd82b7d06cdf1a5a1201f3cf58e97fec1691c44e9dfe pid=2183 runtime=io.containerd.runc.v2 Feb 9 14:07:16.595575 env[1480]: time="2024-02-09T14:07:16.595541419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:07:16.595575 env[1480]: time="2024-02-09T14:07:16.595561090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:07:16.595575 env[1480]: time="2024-02-09T14:07:16.595568037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:07:16.595722 env[1480]: time="2024-02-09T14:07:16.595656424Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9aa1a9c0e1626317e8c80a88cde2eb43f2c003d9f66486f9e0ab020b20a9817e pid=2203 runtime=io.containerd.runc.v2 Feb 9 14:07:16.596163 env[1480]: time="2024-02-09T14:07:16.596138391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:07:16.596195 env[1480]: time="2024-02-09T14:07:16.596163624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:07:16.596195 env[1480]: time="2024-02-09T14:07:16.596178307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:07:16.596292 env[1480]: time="2024-02-09T14:07:16.596270789Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/696195b610fa97b0a2486c6c03b57c7a88f736f50ed43b98f232d393fdebe939 pid=2216 runtime=io.containerd.runc.v2 Feb 9 14:07:16.598406 kubelet[2133]: W0209 14:07:16.598378 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.88.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-80177560a3&limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:16.598465 kubelet[2133]: E0209 14:07:16.598414 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.88.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-80177560a3&limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:16.601548 systemd[1]: Started cri-containerd-696195b610fa97b0a2486c6c03b57c7a88f736f50ed43b98f232d393fdebe939.scope. Feb 9 14:07:16.609896 systemd[1]: Started cri-containerd-7d5a9ed4b9d42d086861cd82b7d06cdf1a5a1201f3cf58e97fec1691c44e9dfe.scope. Feb 9 14:07:16.613144 systemd[1]: Started cri-containerd-9aa1a9c0e1626317e8c80a88cde2eb43f2c003d9f66486f9e0ab020b20a9817e.scope. 
Feb 9 14:07:16.632481 env[1480]: time="2024-02-09T14:07:16.632451969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-80177560a3,Uid:737f5b8590d77b250619995325648a6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d5a9ed4b9d42d086861cd82b7d06cdf1a5a1201f3cf58e97fec1691c44e9dfe\"" Feb 9 14:07:16.634367 env[1480]: time="2024-02-09T14:07:16.634350435Z" level=info msg="CreateContainer within sandbox \"7d5a9ed4b9d42d086861cd82b7d06cdf1a5a1201f3cf58e97fec1691c44e9dfe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 14:07:16.636428 env[1480]: time="2024-02-09T14:07:16.636409719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-80177560a3,Uid:7b77ae27aa72f0ee6bf28537e9b11e50,Namespace:kube-system,Attempt:0,} returns sandbox id \"696195b610fa97b0a2486c6c03b57c7a88f736f50ed43b98f232d393fdebe939\"" Feb 9 14:07:16.637362 env[1480]: time="2024-02-09T14:07:16.637347069Z" level=info msg="CreateContainer within sandbox \"696195b610fa97b0a2486c6c03b57c7a88f736f50ed43b98f232d393fdebe939\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 14:07:16.640068 env[1480]: time="2024-02-09T14:07:16.640023602Z" level=info msg="CreateContainer within sandbox \"7d5a9ed4b9d42d086861cd82b7d06cdf1a5a1201f3cf58e97fec1691c44e9dfe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"05f9140e2dfadf2091bed06bfd45f84ad1427419d11d07a56ac8c7247d4aac73\"" Feb 9 14:07:16.640251 env[1480]: time="2024-02-09T14:07:16.640238913Z" level=info msg="StartContainer for \"05f9140e2dfadf2091bed06bfd45f84ad1427419d11d07a56ac8c7247d4aac73\"" Feb 9 14:07:16.642218 env[1480]: time="2024-02-09T14:07:16.642174353Z" level=info msg="CreateContainer within sandbox \"696195b610fa97b0a2486c6c03b57c7a88f736f50ed43b98f232d393fdebe939\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"c01b912967be4203e8d2f307ae936c33cd5caa6fd3451b0f3931804ef8b93b57\"" Feb 9 14:07:16.642380 env[1480]: time="2024-02-09T14:07:16.642340860Z" level=info msg="StartContainer for \"c01b912967be4203e8d2f307ae936c33cd5caa6fd3451b0f3931804ef8b93b57\"" Feb 9 14:07:16.647792 systemd[1]: Started cri-containerd-05f9140e2dfadf2091bed06bfd45f84ad1427419d11d07a56ac8c7247d4aac73.scope. Feb 9 14:07:16.649959 env[1480]: time="2024-02-09T14:07:16.649927233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-80177560a3,Uid:ede0b243bca64c13f55ab7bc7c0794b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aa1a9c0e1626317e8c80a88cde2eb43f2c003d9f66486f9e0ab020b20a9817e\"" Feb 9 14:07:16.651121 env[1480]: time="2024-02-09T14:07:16.651105134Z" level=info msg="CreateContainer within sandbox \"9aa1a9c0e1626317e8c80a88cde2eb43f2c003d9f66486f9e0ab020b20a9817e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 14:07:16.655401 env[1480]: time="2024-02-09T14:07:16.655382993Z" level=info msg="CreateContainer within sandbox \"9aa1a9c0e1626317e8c80a88cde2eb43f2c003d9f66486f9e0ab020b20a9817e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"512569f6092fc68ff064e6b5b7efa0554e6497e78ab3068ead8d9d565588c2d8\"" Feb 9 14:07:16.655564 env[1480]: time="2024-02-09T14:07:16.655552351Z" level=info msg="StartContainer for \"512569f6092fc68ff064e6b5b7efa0554e6497e78ab3068ead8d9d565588c2d8\"" Feb 9 14:07:16.661438 systemd[1]: Started cri-containerd-c01b912967be4203e8d2f307ae936c33cd5caa6fd3451b0f3931804ef8b93b57.scope. 
Feb 9 14:07:16.664102 kubelet[2133]: W0209 14:07:16.664042 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.88.165:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:16.664102 kubelet[2133]: E0209 14:07:16.664079 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.88.165:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.88.165:6443: connect: connection refused Feb 9 14:07:16.673347 env[1480]: time="2024-02-09T14:07:16.673320579Z" level=info msg="StartContainer for \"05f9140e2dfadf2091bed06bfd45f84ad1427419d11d07a56ac8c7247d4aac73\" returns successfully" Feb 9 14:07:16.674517 systemd[1]: Started cri-containerd-512569f6092fc68ff064e6b5b7efa0554e6497e78ab3068ead8d9d565588c2d8.scope. Feb 9 14:07:16.685595 env[1480]: time="2024-02-09T14:07:16.685569482Z" level=info msg="StartContainer for \"c01b912967be4203e8d2f307ae936c33cd5caa6fd3451b0f3931804ef8b93b57\" returns successfully" Feb 9 14:07:16.700347 env[1480]: time="2024-02-09T14:07:16.700322360Z" level=info msg="StartContainer for \"512569f6092fc68ff064e6b5b7efa0554e6497e78ab3068ead8d9d565588c2d8\" returns successfully" Feb 9 14:07:17.069243 kubelet[2133]: I0209 14:07:17.069225 2133 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:17.462101 kubelet[2133]: E0209 14:07:17.462083 2133 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-80177560a3\" not found" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:17.560839 kubelet[2133]: I0209 14:07:17.560812 2133 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:17.568186 kubelet[2133]: E0209 14:07:17.568166 2133 kubelet_node_status.go:458] "Error getting the 
current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:17.668564 kubelet[2133]: E0209 14:07:17.668489 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:17.769546 kubelet[2133]: E0209 14:07:17.769370 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:17.870042 kubelet[2133]: E0209 14:07:17.869943 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:17.971265 kubelet[2133]: E0209 14:07:17.971173 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.072421 kubelet[2133]: E0209 14:07:18.072184 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.173473 kubelet[2133]: E0209 14:07:18.173404 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.273638 kubelet[2133]: E0209 14:07:18.273568 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.374174 kubelet[2133]: E0209 14:07:18.374129 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.475055 kubelet[2133]: E0209 14:07:18.474954 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.575599 kubelet[2133]: E0209 14:07:18.575515 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.676051 kubelet[2133]: E0209 
14:07:18.675864 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.776391 kubelet[2133]: E0209 14:07:18.776275 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.876619 kubelet[2133]: E0209 14:07:18.876559 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:18.977742 kubelet[2133]: E0209 14:07:18.977601 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:19.078658 kubelet[2133]: E0209 14:07:19.078602 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:19.541063 kubelet[2133]: I0209 14:07:19.540967 2133 apiserver.go:52] "Watching apiserver" Feb 9 14:07:19.550849 kubelet[2133]: I0209 14:07:19.550797 2133 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 14:07:20.324529 systemd[1]: Reloading. Feb 9 14:07:20.352892 /usr/lib/systemd/system-generators/torcx-generator[2464]: time="2024-02-09T14:07:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 14:07:20.352917 /usr/lib/systemd/system-generators/torcx-generator[2464]: time="2024-02-09T14:07:20Z" level=info msg="torcx already run" Feb 9 14:07:20.419370 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 14:07:20.419383 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 14:07:20.435159 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 14:07:20.503283 systemd[1]: Stopping kubelet.service... Feb 9 14:07:20.521789 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 14:07:20.521894 systemd[1]: Stopped kubelet.service. Feb 9 14:07:20.522769 systemd[1]: Started kubelet.service. Feb 9 14:07:20.544967 kubelet[2523]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 14:07:20.544967 kubelet[2523]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 14:07:20.544967 kubelet[2523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 14:07:20.544967 kubelet[2523]: I0209 14:07:20.544944 2523 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 14:07:20.547724 kubelet[2523]: I0209 14:07:20.547713 2523 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 14:07:20.547724 kubelet[2523]: I0209 14:07:20.547724 2523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 14:07:20.547833 kubelet[2523]: I0209 14:07:20.547828 2523 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 14:07:20.548897 kubelet[2523]: I0209 14:07:20.548890 2523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 14:07:20.549682 kubelet[2523]: I0209 14:07:20.549634 2523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 14:07:20.568137 kubelet[2523]: I0209 14:07:20.568124 2523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 14:07:20.568249 kubelet[2523]: I0209 14:07:20.568243 2523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 14:07:20.568402 kubelet[2523]: I0209 14:07:20.568365 2523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 14:07:20.568402 kubelet[2523]: I0209 14:07:20.568379 2523 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 14:07:20.568402 kubelet[2523]: I0209 14:07:20.568385 2523 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 14:07:20.568402 kubelet[2523]: I0209 
14:07:20.568404 2523 state_mem.go:36] "Initialized new in-memory state store" Feb 9 14:07:20.568559 kubelet[2523]: I0209 14:07:20.568454 2523 kubelet.go:393] "Attempting to sync node with API server" Feb 9 14:07:20.568559 kubelet[2523]: I0209 14:07:20.568463 2523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 14:07:20.568559 kubelet[2523]: I0209 14:07:20.568476 2523 kubelet.go:309] "Adding apiserver pod source" Feb 9 14:07:20.568559 kubelet[2523]: I0209 14:07:20.568502 2523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 14:07:20.568823 kubelet[2523]: I0209 14:07:20.568813 2523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 14:07:20.569462 kubelet[2523]: I0209 14:07:20.569424 2523 server.go:1232] "Started kubelet" Feb 9 14:07:20.569521 kubelet[2523]: I0209 14:07:20.569467 2523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 14:07:20.569521 kubelet[2523]: I0209 14:07:20.569483 2523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 14:07:20.570047 kubelet[2523]: I0209 14:07:20.570029 2523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 14:07:20.570261 kubelet[2523]: E0209 14:07:20.570245 2523 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 14:07:20.570337 kubelet[2523]: E0209 14:07:20.570269 2523 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 14:07:20.571307 kubelet[2523]: I0209 14:07:20.571291 2523 server.go:462] "Adding debug handlers to kubelet server" Feb 9 14:07:20.571379 kubelet[2523]: I0209 14:07:20.571369 2523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 14:07:20.571461 kubelet[2523]: I0209 14:07:20.571441 2523 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 14:07:20.571519 kubelet[2523]: E0209 14:07:20.571464 2523 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-80177560a3\" not found" Feb 9 14:07:20.571519 kubelet[2523]: I0209 14:07:20.571496 2523 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 14:07:20.571607 kubelet[2523]: I0209 14:07:20.571570 2523 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 14:07:20.575717 kubelet[2523]: I0209 14:07:20.575660 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 14:07:20.577135 kubelet[2523]: I0209 14:07:20.577116 2523 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 14:07:20.577226 kubelet[2523]: I0209 14:07:20.577145 2523 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 14:07:20.577226 kubelet[2523]: I0209 14:07:20.577165 2523 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 14:07:20.577292 kubelet[2523]: E0209 14:07:20.577225 2523 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 14:07:20.590897 kubelet[2523]: I0209 14:07:20.590851 2523 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 14:07:20.590897 kubelet[2523]: I0209 14:07:20.590862 2523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 14:07:20.590897 kubelet[2523]: I0209 14:07:20.590870 2523 state_mem.go:36] "Initialized new in-memory state store" Feb 9 14:07:20.591016 kubelet[2523]: I0209 14:07:20.590951 2523 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 14:07:20.591016 kubelet[2523]: I0209 14:07:20.590963 2523 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 14:07:20.591016 kubelet[2523]: I0209 14:07:20.590967 2523 policy_none.go:49] "None policy: Start" Feb 9 14:07:20.591245 kubelet[2523]: I0209 14:07:20.591213 2523 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 14:07:20.591245 kubelet[2523]: I0209 14:07:20.591223 2523 state_mem.go:35] "Initializing new in-memory state store" Feb 9 14:07:20.591290 kubelet[2523]: I0209 14:07:20.591286 2523 state_mem.go:75] "Updated machine memory state" Feb 9 14:07:20.592961 kubelet[2523]: I0209 14:07:20.592924 2523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 14:07:20.593068 kubelet[2523]: I0209 14:07:20.593033 2523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 14:07:20.678027 kubelet[2523]: I0209 14:07:20.677924 2523 topology_manager.go:215] "Topology Admit Handler" 
podUID="ede0b243bca64c13f55ab7bc7c0794b6" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.678293 kubelet[2523]: I0209 14:07:20.678256 2523 topology_manager.go:215] "Topology Admit Handler" podUID="737f5b8590d77b250619995325648a6b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.678602 kubelet[2523]: I0209 14:07:20.678517 2523 topology_manager.go:215] "Topology Admit Handler" podUID="7b77ae27aa72f0ee6bf28537e9b11e50" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.678836 kubelet[2523]: I0209 14:07:20.678807 2523 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.690548 kubelet[2523]: W0209 14:07:20.690485 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 14:07:20.690936 kubelet[2523]: W0209 14:07:20.690601 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 14:07:20.691546 kubelet[2523]: W0209 14:07:20.691502 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 14:07:20.692118 kubelet[2523]: I0209 14:07:20.692036 2523 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.692390 kubelet[2523]: I0209 14:07:20.692187 2523 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.740166 sudo[2566]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 14:07:20.740762 sudo[2566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 14:07:20.773293 
kubelet[2523]: I0209 14:07:20.773245 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.773508 kubelet[2523]: I0209 14:07:20.773379 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b77ae27aa72f0ee6bf28537e9b11e50-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-80177560a3\" (UID: \"7b77ae27aa72f0ee6bf28537e9b11e50\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.773508 kubelet[2523]: I0209 14:07:20.773473 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ede0b243bca64c13f55ab7bc7c0794b6-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-80177560a3\" (UID: \"ede0b243bca64c13f55ab7bc7c0794b6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.773719 kubelet[2523]: I0209 14:07:20.773565 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ede0b243bca64c13f55ab7bc7c0794b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-80177560a3\" (UID: \"ede0b243bca64c13f55ab7bc7c0794b6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.773719 kubelet[2523]: I0209 14:07:20.773645 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-ca-certs\") pod 
\"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.773885 kubelet[2523]: I0209 14:07:20.773722 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ede0b243bca64c13f55ab7bc7c0794b6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-80177560a3\" (UID: \"ede0b243bca64c13f55ab7bc7c0794b6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.773885 kubelet[2523]: I0209 14:07:20.773806 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.774068 kubelet[2523]: I0209 14:07:20.773884 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:20.774068 kubelet[2523]: I0209 14:07:20.773975 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/737f5b8590d77b250619995325648a6b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" (UID: \"737f5b8590d77b250619995325648a6b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:21.113284 sudo[2566]: pam_unix(sudo:session): session closed for user root Feb 9 14:07:21.569653 kubelet[2523]: 
I0209 14:07:21.569583 2523 apiserver.go:52] "Watching apiserver" Feb 9 14:07:21.572183 kubelet[2523]: I0209 14:07:21.572138 2523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 14:07:21.588229 kubelet[2523]: W0209 14:07:21.588195 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 14:07:21.588292 kubelet[2523]: E0209 14:07:21.588242 2523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-80177560a3\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-80177560a3" Feb 9 14:07:21.588915 kubelet[2523]: W0209 14:07:21.588877 2523 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 14:07:21.588915 kubelet[2523]: E0209 14:07:21.588912 2523 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-80177560a3\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" Feb 9 14:07:21.593918 kubelet[2523]: I0209 14:07:21.593864 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-80177560a3" podStartSLOduration=1.5938312510000001 podCreationTimestamp="2024-02-09 14:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 14:07:21.593515084 +0000 UTC m=+1.068994074" watchObservedRunningTime="2024-02-09 14:07:21.593831251 +0000 UTC m=+1.069310243" Feb 9 14:07:21.598704 kubelet[2523]: I0209 14:07:21.598667 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-80177560a3" podStartSLOduration=1.598653408 podCreationTimestamp="2024-02-09 14:07:20 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 14:07:21.598297855 +0000 UTC m=+1.073776848" watchObservedRunningTime="2024-02-09 14:07:21.598653408 +0000 UTC m=+1.074132398" Feb 9 14:07:21.603982 kubelet[2523]: I0209 14:07:21.603948 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-80177560a3" podStartSLOduration=1.60393421 podCreationTimestamp="2024-02-09 14:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 14:07:21.603916581 +0000 UTC m=+1.079395572" watchObservedRunningTime="2024-02-09 14:07:21.60393421 +0000 UTC m=+1.079413197" Feb 9 14:07:22.162461 sudo[1596]: pam_unix(sudo:session): session closed for user root Feb 9 14:07:22.163175 sshd[1593]: pam_unix(sshd:session): session closed for user core Feb 9 14:07:22.164451 systemd[1]: sshd@4-139.178.88.165:22-147.75.109.163:35584.service: Deactivated successfully. Feb 9 14:07:22.164849 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 14:07:22.164931 systemd[1]: session-7.scope: Consumed 3.186s CPU time. Feb 9 14:07:22.165189 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Feb 9 14:07:22.165669 systemd-logind[1468]: Removed session 7. Feb 9 14:07:25.563190 systemd[1]: Started sshd@5-139.178.88.165:22-65.181.73.155:34560.service. 
Feb 9 14:07:26.431092 sshd[2661]: Invalid user tanaim from 65.181.73.155 port 34560 Feb 9 14:07:26.432421 sshd[2661]: pam_faillock(sshd:auth): User unknown Feb 9 14:07:26.432633 sshd[2661]: pam_unix(sshd:auth): check pass; user unknown Feb 9 14:07:26.432652 sshd[2661]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=65.181.73.155 Feb 9 14:07:26.432856 sshd[2661]: pam_faillock(sshd:auth): User unknown Feb 9 14:07:28.143135 sshd[2661]: Failed password for invalid user tanaim from 65.181.73.155 port 34560 ssh2 Feb 9 14:07:28.336095 sshd[2661]: Received disconnect from 65.181.73.155 port 34560:11: Bye Bye [preauth] Feb 9 14:07:28.336095 sshd[2661]: Disconnected from invalid user tanaim 65.181.73.155 port 34560 [preauth] Feb 9 14:07:28.338732 systemd[1]: sshd@5-139.178.88.165:22-65.181.73.155:34560.service: Deactivated successfully. Feb 9 14:07:33.840355 kubelet[2523]: I0209 14:07:33.840260 2523 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 14:07:33.841290 env[1480]: time="2024-02-09T14:07:33.841046707Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 14:07:33.841904 kubelet[2523]: I0209 14:07:33.841547 2523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 14:07:33.994867 update_engine[1470]: I0209 14:07:33.994760 1470 update_attempter.cc:509] Updating boot flags... 
Feb 9 14:07:34.602433 kubelet[2523]: I0209 14:07:34.602361 2523 topology_manager.go:215] "Topology Admit Handler" podUID="82a439d9-603a-4f09-ab17-a1ffaa69781d" podNamespace="kube-system" podName="kube-proxy-89bfl" Feb 9 14:07:34.608833 kubelet[2523]: I0209 14:07:34.608785 2523 topology_manager.go:215] "Topology Admit Handler" podUID="00602c6b-f3bb-4d64-9edd-eb6411171a3c" podNamespace="kube-system" podName="cilium-vvvzw" Feb 9 14:07:34.615408 systemd[1]: Created slice kubepods-besteffort-pod82a439d9_603a_4f09_ab17_a1ffaa69781d.slice. Feb 9 14:07:34.636414 systemd[1]: Created slice kubepods-burstable-pod00602c6b_f3bb_4d64_9edd_eb6411171a3c.slice. Feb 9 14:07:34.667037 kubelet[2523]: I0209 14:07:34.666970 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82a439d9-603a-4f09-ab17-a1ffaa69781d-kube-proxy\") pod \"kube-proxy-89bfl\" (UID: \"82a439d9-603a-4f09-ab17-a1ffaa69781d\") " pod="kube-system/kube-proxy-89bfl" Feb 9 14:07:34.667422 kubelet[2523]: I0209 14:07:34.667134 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-bpf-maps\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.667422 kubelet[2523]: I0209 14:07:34.667255 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-config-path\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.667422 kubelet[2523]: I0209 14:07:34.667393 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/00602c6b-f3bb-4d64-9edd-eb6411171a3c-hubble-tls\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.667965 kubelet[2523]: I0209 14:07:34.667536 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxbb\" (UniqueName: \"kubernetes.io/projected/82a439d9-603a-4f09-ab17-a1ffaa69781d-kube-api-access-kjxbb\") pod \"kube-proxy-89bfl\" (UID: \"82a439d9-603a-4f09-ab17-a1ffaa69781d\") " pod="kube-system/kube-proxy-89bfl" Feb 9 14:07:34.667965 kubelet[2523]: I0209 14:07:34.667692 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-host-proc-sys-net\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.667965 kubelet[2523]: I0209 14:07:34.667873 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-cgroup\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.668522 kubelet[2523]: I0209 14:07:34.668013 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-lib-modules\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.668522 kubelet[2523]: I0209 14:07:34.668171 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-xtables-lock\") pod \"cilium-vvvzw\" (UID: 
\"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.668522 kubelet[2523]: I0209 14:07:34.668288 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00602c6b-f3bb-4d64-9edd-eb6411171a3c-clustermesh-secrets\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.668522 kubelet[2523]: I0209 14:07:34.668442 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-run\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.669158 kubelet[2523]: I0209 14:07:34.668590 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cni-path\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.669158 kubelet[2523]: I0209 14:07:34.668716 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82a439d9-603a-4f09-ab17-a1ffaa69781d-xtables-lock\") pod \"kube-proxy-89bfl\" (UID: \"82a439d9-603a-4f09-ab17-a1ffaa69781d\") " pod="kube-system/kube-proxy-89bfl" Feb 9 14:07:34.669158 kubelet[2523]: I0209 14:07:34.668849 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-etc-cni-netd\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.669158 kubelet[2523]: I0209 14:07:34.668941 2523 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82a439d9-603a-4f09-ab17-a1ffaa69781d-lib-modules\") pod \"kube-proxy-89bfl\" (UID: \"82a439d9-603a-4f09-ab17-a1ffaa69781d\") " pod="kube-system/kube-proxy-89bfl" Feb 9 14:07:34.669158 kubelet[2523]: I0209 14:07:34.669044 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-hostproc\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.669158 kubelet[2523]: I0209 14:07:34.669138 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-host-proc-sys-kernel\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.669914 kubelet[2523]: I0209 14:07:34.669225 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75xh\" (UniqueName: \"kubernetes.io/projected/00602c6b-f3bb-4d64-9edd-eb6411171a3c-kube-api-access-h75xh\") pod \"cilium-vvvzw\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") " pod="kube-system/cilium-vvvzw" Feb 9 14:07:34.697237 kubelet[2523]: I0209 14:07:34.697213 2523 topology_manager.go:215] "Topology Admit Handler" podUID="b19c4d24-1ffe-427e-b653-176eff29c216" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-v8pdv" Feb 9 14:07:34.701338 systemd[1]: Created slice kubepods-besteffort-podb19c4d24_1ffe_427e_b653_176eff29c216.slice. 
Feb 9 14:07:34.769759 kubelet[2523]: I0209 14:07:34.769691 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pl8m\" (UniqueName: \"kubernetes.io/projected/b19c4d24-1ffe-427e-b653-176eff29c216-kube-api-access-7pl8m\") pod \"cilium-operator-6bc8ccdb58-v8pdv\" (UID: \"b19c4d24-1ffe-427e-b653-176eff29c216\") " pod="kube-system/cilium-operator-6bc8ccdb58-v8pdv" Feb 9 14:07:34.770839 kubelet[2523]: I0209 14:07:34.770776 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b19c4d24-1ffe-427e-b653-176eff29c216-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-v8pdv\" (UID: \"b19c4d24-1ffe-427e-b653-176eff29c216\") " pod="kube-system/cilium-operator-6bc8ccdb58-v8pdv" Feb 9 14:07:34.936964 env[1480]: time="2024-02-09T14:07:34.936721088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-89bfl,Uid:82a439d9-603a-4f09-ab17-a1ffaa69781d,Namespace:kube-system,Attempt:0,}" Feb 9 14:07:34.939064 env[1480]: time="2024-02-09T14:07:34.938943565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvvzw,Uid:00602c6b-f3bb-4d64-9edd-eb6411171a3c,Namespace:kube-system,Attempt:0,}" Feb 9 14:07:34.968217 env[1480]: time="2024-02-09T14:07:34.968066328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:07:34.968217 env[1480]: time="2024-02-09T14:07:34.968172741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:07:34.968637 env[1480]: time="2024-02-09T14:07:34.968212978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:07:34.968881 env[1480]: time="2024-02-09T14:07:34.968742040Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c1b56b994dd0ea7531df3ab7a9a95a75116f851b758689c3bf18072406e2df3 pid=2697 runtime=io.containerd.runc.v2 Feb 9 14:07:34.970523 env[1480]: time="2024-02-09T14:07:34.970389244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:07:34.970523 env[1480]: time="2024-02-09T14:07:34.970487843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:07:34.970869 env[1480]: time="2024-02-09T14:07:34.970541804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:07:34.971128 env[1480]: time="2024-02-09T14:07:34.971021387Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77 pid=2705 runtime=io.containerd.runc.v2 Feb 9 14:07:35.000249 systemd[1]: Started cri-containerd-9c1b56b994dd0ea7531df3ab7a9a95a75116f851b758689c3bf18072406e2df3.scope. Feb 9 14:07:35.003327 systemd[1]: Started cri-containerd-ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77.scope. Feb 9 14:07:35.003652 env[1480]: time="2024-02-09T14:07:35.003609368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-v8pdv,Uid:b19c4d24-1ffe-427e-b653-176eff29c216,Namespace:kube-system,Attempt:0,}" Feb 9 14:07:35.015185 env[1480]: time="2024-02-09T14:07:35.015115780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:07:35.015185 env[1480]: time="2024-02-09T14:07:35.015159161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:07:35.015445 env[1480]: time="2024-02-09T14:07:35.015178820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:07:35.015445 env[1480]: time="2024-02-09T14:07:35.015341305Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209 pid=2758 runtime=io.containerd.runc.v2 Feb 9 14:07:35.027893 systemd[1]: Started cri-containerd-1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209.scope. Feb 9 14:07:35.036262 env[1480]: time="2024-02-09T14:07:35.036212802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-89bfl,Uid:82a439d9-603a-4f09-ab17-a1ffaa69781d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c1b56b994dd0ea7531df3ab7a9a95a75116f851b758689c3bf18072406e2df3\"" Feb 9 14:07:35.036616 env[1480]: time="2024-02-09T14:07:35.036577018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvvzw,Uid:00602c6b-f3bb-4d64-9edd-eb6411171a3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\"" Feb 9 14:07:35.038964 env[1480]: time="2024-02-09T14:07:35.038929711Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 14:07:35.039926 env[1480]: time="2024-02-09T14:07:35.039891712Z" level=info msg="CreateContainer within sandbox \"9c1b56b994dd0ea7531df3ab7a9a95a75116f851b758689c3bf18072406e2df3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 14:07:35.051474 env[1480]: 
time="2024-02-09T14:07:35.051392865Z" level=info msg="CreateContainer within sandbox \"9c1b56b994dd0ea7531df3ab7a9a95a75116f851b758689c3bf18072406e2df3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d87d52d5fb6d156497319037e0c20c75a026efb41a0aa0fb7472e16bb4140dee\"" Feb 9 14:07:35.052035 env[1480]: time="2024-02-09T14:07:35.051955984Z" level=info msg="StartContainer for \"d87d52d5fb6d156497319037e0c20c75a026efb41a0aa0fb7472e16bb4140dee\"" Feb 9 14:07:35.065960 systemd[1]: Started cri-containerd-d87d52d5fb6d156497319037e0c20c75a026efb41a0aa0fb7472e16bb4140dee.scope. Feb 9 14:07:35.084107 env[1480]: time="2024-02-09T14:07:35.084078095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-v8pdv,Uid:b19c4d24-1ffe-427e-b653-176eff29c216,Namespace:kube-system,Attempt:0,} returns sandbox id \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\"" Feb 9 14:07:35.094748 env[1480]: time="2024-02-09T14:07:35.094684829Z" level=info msg="StartContainer for \"d87d52d5fb6d156497319037e0c20c75a026efb41a0aa0fb7472e16bb4140dee\" returns successfully" Feb 9 14:07:35.639847 kubelet[2523]: I0209 14:07:35.639745 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-89bfl" podStartSLOduration=1.6396474680000002 podCreationTimestamp="2024-02-09 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 14:07:35.639602275 +0000 UTC m=+15.115081376" watchObservedRunningTime="2024-02-09 14:07:35.639647468 +0000 UTC m=+15.115126513" Feb 9 14:07:38.819707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112120890.mount: Deactivated successfully. 
Feb 9 14:07:40.501478 env[1480]: time="2024-02-09T14:07:40.501419769Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:40.501979 env[1480]: time="2024-02-09T14:07:40.501921343Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:40.502774 env[1480]: time="2024-02-09T14:07:40.502734914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:40.503123 env[1480]: time="2024-02-09T14:07:40.503077721Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 14:07:40.503515 env[1480]: time="2024-02-09T14:07:40.503471805Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 14:07:40.504353 env[1480]: time="2024-02-09T14:07:40.504294673Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 14:07:40.509672 env[1480]: time="2024-02-09T14:07:40.509652886Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29\"" Feb 9 14:07:40.509918 
env[1480]: time="2024-02-09T14:07:40.509903143Z" level=info msg="StartContainer for \"95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29\"" Feb 9 14:07:40.510556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356092941.mount: Deactivated successfully. Feb 9 14:07:40.531307 systemd[1]: Started cri-containerd-95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29.scope. Feb 9 14:07:40.557379 env[1480]: time="2024-02-09T14:07:40.557354451Z" level=info msg="StartContainer for \"95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29\" returns successfully" Feb 9 14:07:40.562027 systemd[1]: cri-containerd-95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29.scope: Deactivated successfully. Feb 9 14:07:41.512170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29-rootfs.mount: Deactivated successfully. Feb 9 14:07:41.634127 env[1480]: time="2024-02-09T14:07:41.634017949Z" level=info msg="shim disconnected" id=95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29 Feb 9 14:07:41.634921 env[1480]: time="2024-02-09T14:07:41.634124613Z" level=warning msg="cleaning up after shim disconnected" id=95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29 namespace=k8s.io Feb 9 14:07:41.634921 env[1480]: time="2024-02-09T14:07:41.634153645Z" level=info msg="cleaning up dead shim" Feb 9 14:07:41.662701 env[1480]: time="2024-02-09T14:07:41.662573131Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:07:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3020 runtime=io.containerd.runc.v2\n" Feb 9 14:07:42.171402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523823024.mount: Deactivated successfully. 
Feb 9 14:07:42.635750 env[1480]: time="2024-02-09T14:07:42.635682035Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 14:07:42.640668 env[1480]: time="2024-02-09T14:07:42.640647107Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648\"" Feb 9 14:07:42.640951 env[1480]: time="2024-02-09T14:07:42.640937080Z" level=info msg="StartContainer for \"b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648\"" Feb 9 14:07:42.641281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071974459.mount: Deactivated successfully. Feb 9 14:07:42.662349 systemd[1]: Started cri-containerd-b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648.scope. Feb 9 14:07:42.684984 env[1480]: time="2024-02-09T14:07:42.684930336Z" level=info msg="StartContainer for \"b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648\" returns successfully" Feb 9 14:07:42.690423 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 14:07:42.690548 systemd[1]: Stopped systemd-sysctl.service. Feb 9 14:07:42.690669 systemd[1]: Stopping systemd-sysctl.service... Feb 9 14:07:42.691811 systemd[1]: Starting systemd-sysctl.service... Feb 9 14:07:42.692922 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 14:07:42.693281 systemd[1]: cri-containerd-b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648.scope: Deactivated successfully. 
Feb 9 14:07:42.693588 env[1480]: time="2024-02-09T14:07:42.693566312Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:42.694139 env[1480]: time="2024-02-09T14:07:42.694125517Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:42.694980 env[1480]: time="2024-02-09T14:07:42.694963076Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 14:07:42.695336 env[1480]: time="2024-02-09T14:07:42.695322063Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 14:07:42.695702 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 14:07:42.696260 env[1480]: time="2024-02-09T14:07:42.696248363Z" level=info msg="CreateContainer within sandbox \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 14:07:42.700511 env[1480]: time="2024-02-09T14:07:42.700465762Z" level=info msg="CreateContainer within sandbox \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\"" Feb 9 14:07:42.700757 env[1480]: time="2024-02-09T14:07:42.700739748Z" level=info msg="StartContainer for \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\"" Feb 9 14:07:42.719028 systemd[1]: Started cri-containerd-96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3.scope. Feb 9 14:07:42.768248 env[1480]: time="2024-02-09T14:07:42.768218698Z" level=info msg="StartContainer for \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\" returns successfully" Feb 9 14:07:42.875640 env[1480]: time="2024-02-09T14:07:42.875497945Z" level=info msg="shim disconnected" id=b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648 Feb 9 14:07:42.876028 env[1480]: time="2024-02-09T14:07:42.875641337Z" level=warning msg="cleaning up after shim disconnected" id=b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648 namespace=k8s.io Feb 9 14:07:42.876028 env[1480]: time="2024-02-09T14:07:42.875673019Z" level=info msg="cleaning up dead shim" Feb 9 14:07:42.893933 env[1480]: time="2024-02-09T14:07:42.893720830Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:07:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3132 runtime=io.containerd.runc.v2\n" Feb 9 14:07:43.646853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648-rootfs.mount: Deactivated successfully.
Feb 9 14:07:43.651272 env[1480]: time="2024-02-09T14:07:43.651171572Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 14:07:43.661913 kubelet[2523]: I0209 14:07:43.661845 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-v8pdv" podStartSLOduration=2.050924933 podCreationTimestamp="2024-02-09 14:07:34 +0000 UTC" firstStartedPulling="2024-02-09 14:07:35.084676609 +0000 UTC m=+14.560155598" lastFinishedPulling="2024-02-09 14:07:42.695478927 +0000 UTC m=+22.170957920" observedRunningTime="2024-02-09 14:07:43.661493161 +0000 UTC m=+23.136972221" watchObservedRunningTime="2024-02-09 14:07:43.661727255 +0000 UTC m=+23.137206320" Feb 9 14:07:43.673476 env[1480]: time="2024-02-09T14:07:43.673331072Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62\"" Feb 9 14:07:43.674603 env[1480]: time="2024-02-09T14:07:43.674469529Z" level=info msg="StartContainer for \"90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62\"" Feb 9 14:07:43.715320 systemd[1]: Started cri-containerd-90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62.scope. Feb 9 14:07:43.730631 env[1480]: time="2024-02-09T14:07:43.730568275Z" level=info msg="StartContainer for \"90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62\" returns successfully" Feb 9 14:07:43.731947 systemd[1]: cri-containerd-90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62.scope: Deactivated successfully.
Feb 9 14:07:43.760701 env[1480]: time="2024-02-09T14:07:43.760599814Z" level=info msg="shim disconnected" id=90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62 Feb 9 14:07:43.761082 env[1480]: time="2024-02-09T14:07:43.760705902Z" level=warning msg="cleaning up after shim disconnected" id=90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62 namespace=k8s.io Feb 9 14:07:43.761082 env[1480]: time="2024-02-09T14:07:43.760736367Z" level=info msg="cleaning up dead shim" Feb 9 14:07:43.778840 env[1480]: time="2024-02-09T14:07:43.778726040Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:07:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3188 runtime=io.containerd.runc.v2\n" Feb 9 14:07:44.644967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62-rootfs.mount: Deactivated successfully. Feb 9 14:07:44.648568 env[1480]: time="2024-02-09T14:07:44.648547444Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 14:07:44.653361 env[1480]: time="2024-02-09T14:07:44.653324762Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f\"" Feb 9 14:07:44.653638 env[1480]: time="2024-02-09T14:07:44.653622038Z" level=info msg="StartContainer for \"3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f\"" Feb 9 14:07:44.663674 systemd[1]: Started cri-containerd-3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f.scope. Feb 9 14:07:44.675762 systemd[1]: cri-containerd-3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f.scope: Deactivated successfully. 
Feb 9 14:07:44.676632 env[1480]: time="2024-02-09T14:07:44.676578702Z" level=info msg="StartContainer for \"3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f\" returns successfully" Feb 9 14:07:44.687239 env[1480]: time="2024-02-09T14:07:44.687210166Z" level=info msg="shim disconnected" id=3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f Feb 9 14:07:44.687239 env[1480]: time="2024-02-09T14:07:44.687240184Z" level=warning msg="cleaning up after shim disconnected" id=3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f namespace=k8s.io Feb 9 14:07:44.687380 env[1480]: time="2024-02-09T14:07:44.687246993Z" level=info msg="cleaning up dead shim" Feb 9 14:07:44.691444 env[1480]: time="2024-02-09T14:07:44.691424137Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:07:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3242 runtime=io.containerd.runc.v2\n" Feb 9 14:07:45.644787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f-rootfs.mount: Deactivated successfully. 
Feb 9 14:07:45.650455 env[1480]: time="2024-02-09T14:07:45.650408427Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 14:07:45.656051 env[1480]: time="2024-02-09T14:07:45.656019671Z" level=info msg="CreateContainer within sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\"" Feb 9 14:07:45.656299 env[1480]: time="2024-02-09T14:07:45.656284851Z" level=info msg="StartContainer for \"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\"" Feb 9 14:07:45.665515 systemd[1]: Started cri-containerd-f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035.scope. Feb 9 14:07:45.701272 env[1480]: time="2024-02-09T14:07:45.701168233Z" level=info msg="StartContainer for \"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\" returns successfully" Feb 9 14:07:45.801338 kubelet[2523]: I0209 14:07:45.801316 2523 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 14:07:45.814535 kubelet[2523]: I0209 14:07:45.814510 2523 topology_manager.go:215] "Topology Admit Handler" podUID="bb00cd7a-a97f-4e66-8ef3-dce79ee6de52" podNamespace="kube-system" podName="coredns-5dd5756b68-vvjgp" Feb 9 14:07:45.815615 kubelet[2523]: I0209 14:07:45.815598 2523 topology_manager.go:215] "Topology Admit Handler" podUID="3cc899a9-779c-4038-8e0a-493e4a7651a4" podNamespace="kube-system" podName="coredns-5dd5756b68-plw9r" Feb 9 14:07:45.817320 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 9 14:07:45.819185 systemd[1]: Created slice kubepods-burstable-podbb00cd7a_a97f_4e66_8ef3_dce79ee6de52.slice. 
Feb 9 14:07:45.822908 systemd[1]: Created slice kubepods-burstable-pod3cc899a9_779c_4038_8e0a_493e4a7651a4.slice. Feb 9 14:07:45.851902 kubelet[2523]: I0209 14:07:45.851886 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nptw\" (UniqueName: \"kubernetes.io/projected/3cc899a9-779c-4038-8e0a-493e4a7651a4-kube-api-access-6nptw\") pod \"coredns-5dd5756b68-plw9r\" (UID: \"3cc899a9-779c-4038-8e0a-493e4a7651a4\") " pod="kube-system/coredns-5dd5756b68-plw9r" Feb 9 14:07:45.851989 kubelet[2523]: I0209 14:07:45.851911 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb00cd7a-a97f-4e66-8ef3-dce79ee6de52-config-volume\") pod \"coredns-5dd5756b68-vvjgp\" (UID: \"bb00cd7a-a97f-4e66-8ef3-dce79ee6de52\") " pod="kube-system/coredns-5dd5756b68-vvjgp" Feb 9 14:07:45.851989 kubelet[2523]: I0209 14:07:45.851923 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prj7l\" (UniqueName: \"kubernetes.io/projected/bb00cd7a-a97f-4e66-8ef3-dce79ee6de52-kube-api-access-prj7l\") pod \"coredns-5dd5756b68-vvjgp\" (UID: \"bb00cd7a-a97f-4e66-8ef3-dce79ee6de52\") " pod="kube-system/coredns-5dd5756b68-vvjgp" Feb 9 14:07:45.851989 kubelet[2523]: I0209 14:07:45.851936 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cc899a9-779c-4038-8e0a-493e4a7651a4-config-volume\") pod \"coredns-5dd5756b68-plw9r\" (UID: \"3cc899a9-779c-4038-8e0a-493e4a7651a4\") " pod="kube-system/coredns-5dd5756b68-plw9r" Feb 9 14:07:45.966320 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 14:07:46.122976 env[1480]: time="2024-02-09T14:07:46.122875602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vvjgp,Uid:bb00cd7a-a97f-4e66-8ef3-dce79ee6de52,Namespace:kube-system,Attempt:0,}" Feb 9 14:07:46.126131 env[1480]: time="2024-02-09T14:07:46.125997159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-plw9r,Uid:3cc899a9-779c-4038-8e0a-493e4a7651a4,Namespace:kube-system,Attempt:0,}" Feb 9 14:07:46.661980 kubelet[2523]: I0209 14:07:46.661959 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vvvzw" podStartSLOduration=7.196207214 podCreationTimestamp="2024-02-09 14:07:34 +0000 UTC" firstStartedPulling="2024-02-09 14:07:35.037568995 +0000 UTC m=+14.513048021" lastFinishedPulling="2024-02-09 14:07:40.503295097 +0000 UTC m=+19.978774087" observedRunningTime="2024-02-09 14:07:46.661608925 +0000 UTC m=+26.137087919" watchObservedRunningTime="2024-02-09 14:07:46.66193328 +0000 UTC m=+26.137412269" Feb 9 14:07:47.565801 systemd-networkd[1327]: cilium_host: Link UP Feb 9 14:07:47.565890 systemd-networkd[1327]: cilium_net: Link UP Feb 9 14:07:47.565892 systemd-networkd[1327]: cilium_net: Gained carrier Feb 9 14:07:47.566041 systemd-networkd[1327]: cilium_host: Gained carrier Feb 9 14:07:47.574316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 14:07:47.574301 systemd-networkd[1327]: cilium_host: Gained IPv6LL Feb 9 14:07:47.630855 systemd-networkd[1327]: cilium_vxlan: Link UP Feb 9 14:07:47.630858 systemd-networkd[1327]: cilium_vxlan: Gained carrier Feb 9 14:07:47.765314 kernel: NET: Registered PF_ALG protocol family Feb 9 14:07:47.890392 systemd-networkd[1327]: cilium_net: Gained IPv6LL Feb 9 14:07:48.238614 systemd-networkd[1327]: lxc_health: Link UP Feb 9 14:07:48.264153 systemd-networkd[1327]: lxc_health: Gained carrier Feb 9 14:07:48.264342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 14:07:48.686024 systemd-networkd[1327]: lxce101eb3c2d0f: Link UP Feb 9 14:07:48.702315 kernel: eth0: renamed from tmpfab32 Feb 9 14:07:48.737776 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 14:07:48.737899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce101eb3c2d0f: link becomes ready Feb 9 14:07:48.738342 kernel: eth0: renamed from tmp5f88e Feb 9 14:07:48.761562 systemd-networkd[1327]: lxcd832e92ead2a: Link UP Feb 9 14:07:48.762177 systemd-networkd[1327]: lxce101eb3c2d0f: Gained carrier Feb 9 14:07:48.769279 systemd-networkd[1327]: lxcd832e92ead2a: Gained carrier Feb 9 14:07:48.769374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd832e92ead2a: link becomes ready Feb 9 14:07:49.298430 systemd-networkd[1327]: lxc_health: Gained IPv6LL Feb 9 14:07:49.491478 systemd-networkd[1327]: cilium_vxlan: Gained IPv6LL Feb 9 14:07:50.002438 systemd-networkd[1327]: lxce101eb3c2d0f: Gained IPv6LL Feb 9 14:07:50.259409 systemd-networkd[1327]: lxcd832e92ead2a: Gained IPv6LL Feb 9 14:07:51.069588 env[1480]: time="2024-02-09T14:07:51.069554269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:07:51.069588 env[1480]: time="2024-02-09T14:07:51.069579143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:07:51.069588 env[1480]: time="2024-02-09T14:07:51.069586135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 14:07:51.069838 env[1480]: time="2024-02-09T14:07:51.069664451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fab32b975728d7abbc8a22424a9f4f529ed5d7e949b572aac6d5b121dc4e87de pid=3921 runtime=io.containerd.runc.v2 Feb 9 14:07:51.072228 env[1480]: time="2024-02-09T14:07:51.072195375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:07:51.072228 env[1480]: time="2024-02-09T14:07:51.072216524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:07:51.072228 env[1480]: time="2024-02-09T14:07:51.072223429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:07:51.072371 env[1480]: time="2024-02-09T14:07:51.072281581Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f88e3404a4e1348a86cc67499cf356f8bc00813a787a19c72ecc32666d9a15f pid=3939 runtime=io.containerd.runc.v2 Feb 9 14:07:51.079240 systemd[1]: Started cri-containerd-5f88e3404a4e1348a86cc67499cf356f8bc00813a787a19c72ecc32666d9a15f.scope. Feb 9 14:07:51.087826 systemd[1]: Started cri-containerd-fab32b975728d7abbc8a22424a9f4f529ed5d7e949b572aac6d5b121dc4e87de.scope.
Feb 9 14:07:51.112900 env[1480]: time="2024-02-09T14:07:51.112875401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-plw9r,Uid:3cc899a9-779c-4038-8e0a-493e4a7651a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f88e3404a4e1348a86cc67499cf356f8bc00813a787a19c72ecc32666d9a15f\"" Feb 9 14:07:51.114225 env[1480]: time="2024-02-09T14:07:51.114183201Z" level=info msg="CreateContainer within sandbox \"5f88e3404a4e1348a86cc67499cf356f8bc00813a787a19c72ecc32666d9a15f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 14:07:51.118766 env[1480]: time="2024-02-09T14:07:51.118722615Z" level=info msg="CreateContainer within sandbox \"5f88e3404a4e1348a86cc67499cf356f8bc00813a787a19c72ecc32666d9a15f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"962d0069be33b354049040a6fee6f76fafa2c93963c5ce9c66e9b41bf8b93f98\"" Feb 9 14:07:51.118982 env[1480]: time="2024-02-09T14:07:51.118967532Z" level=info msg="StartContainer for \"962d0069be33b354049040a6fee6f76fafa2c93963c5ce9c66e9b41bf8b93f98\"" Feb 9 14:07:51.121696 env[1480]: time="2024-02-09T14:07:51.121668176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vvjgp,Uid:bb00cd7a-a97f-4e66-8ef3-dce79ee6de52,Namespace:kube-system,Attempt:0,} returns sandbox id \"fab32b975728d7abbc8a22424a9f4f529ed5d7e949b572aac6d5b121dc4e87de\"" Feb 9 14:07:51.122904 env[1480]: time="2024-02-09T14:07:51.122890037Z" level=info msg="CreateContainer within sandbox \"fab32b975728d7abbc8a22424a9f4f529ed5d7e949b572aac6d5b121dc4e87de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 14:07:51.126968 env[1480]: time="2024-02-09T14:07:51.126951241Z" level=info msg="CreateContainer within sandbox \"fab32b975728d7abbc8a22424a9f4f529ed5d7e949b572aac6d5b121dc4e87de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bed1ab696fb8c0509b421b6aa78af9dc502589d20dc22fec42155b8386a6cc5\"" Feb 9 14:07:51.127183 env[1480]: time="2024-02-09T14:07:51.127169614Z" level=info msg="StartContainer for \"8bed1ab696fb8c0509b421b6aa78af9dc502589d20dc22fec42155b8386a6cc5\""
Feb 9 14:07:51.139516 systemd[1]: Started cri-containerd-962d0069be33b354049040a6fee6f76fafa2c93963c5ce9c66e9b41bf8b93f98.scope. Feb 9 14:07:51.154798 systemd[1]: Started cri-containerd-8bed1ab696fb8c0509b421b6aa78af9dc502589d20dc22fec42155b8386a6cc5.scope. Feb 9 14:07:51.201515 env[1480]: time="2024-02-09T14:07:51.201449535Z" level=info msg="StartContainer for \"962d0069be33b354049040a6fee6f76fafa2c93963c5ce9c66e9b41bf8b93f98\" returns successfully" Feb 9 14:07:51.201515 env[1480]: time="2024-02-09T14:07:51.201457996Z" level=info msg="StartContainer for \"8bed1ab696fb8c0509b421b6aa78af9dc502589d20dc22fec42155b8386a6cc5\" returns successfully" Feb 9 14:07:51.683518 kubelet[2523]: I0209 14:07:51.683451 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-plw9r" podStartSLOduration=17.683337186 podCreationTimestamp="2024-02-09 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 14:07:51.682078434 +0000 UTC m=+31.157557517" watchObservedRunningTime="2024-02-09 14:07:51.683337186 +0000 UTC m=+31.158816230" Feb 9 14:07:51.720240 kubelet[2523]: I0209 14:07:51.720179 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vvjgp" podStartSLOduration=17.720097424 podCreationTimestamp="2024-02-09 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 14:07:51.719294622 +0000 UTC m=+31.194773685" watchObservedRunningTime="2024-02-09 14:07:51.720097424 +0000 UTC m=+31.195576459" Feb 9 14:13:01.247151 systemd[1]: Started sshd@6-139.178.88.165:22-218.92.0.43:22164.service.
Feb 9 14:13:01.714205 sshd[4145]: Connection reset by 218.92.0.43 port 22164 [preauth] Feb 9 14:13:01.716175 systemd[1]: sshd@6-139.178.88.165:22-218.92.0.43:22164.service: Deactivated successfully. Feb 9 14:13:58.082513 update_engine[1470]: I0209 14:13:58.082445 1470 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 14:13:58.082513 update_engine[1470]: I0209 14:13:58.082482 1470 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 14:13:58.083914 update_engine[1470]: I0209 14:13:58.083865 1470 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 14:13:58.084278 update_engine[1470]: I0209 14:13:58.084233 1470 omaha_request_params.cc:62] Current group set to lts Feb 9 14:13:58.084391 update_engine[1470]: I0209 14:13:58.084373 1470 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 14:13:58.084391 update_engine[1470]: I0209 14:13:58.084381 1470 update_attempter.cc:643] Scheduling an action processor start. 
Feb 9 14:13:58.084507 update_engine[1470]: I0209 14:13:58.084395 1470 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 14:13:58.084507 update_engine[1470]: I0209 14:13:58.084425 1470 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 14:13:58.084507 update_engine[1470]: I0209 14:13:58.084482 1470 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 14:13:58.084507 update_engine[1470]: I0209 14:13:58.084489 1470 omaha_request_action.cc:271] Request: Feb 9 14:13:58.084507 update_engine[1470]: Feb 9 14:13:58.084507 update_engine[1470]: Feb 9 14:13:58.084507 update_engine[1470]: Feb 9 14:13:58.084507 update_engine[1470]: Feb 9 14:13:58.084507 update_engine[1470]: Feb 9 14:13:58.084507 update_engine[1470]: Feb 9 14:13:58.084507 update_engine[1470]: Feb 9 14:13:58.084507 update_engine[1470]: Feb 9 14:13:58.084507 update_engine[1470]: I0209 14:13:58.084494 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 14:13:58.085098 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 14:13:58.086015 update_engine[1470]: I0209 14:13:58.085966 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 14:13:58.086115 update_engine[1470]: E0209 14:13:58.086081 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 14:13:58.086178 update_engine[1470]: I0209 14:13:58.086159 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 14:14:07.986625 update_engine[1470]: I0209 14:14:07.986513 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 14:14:07.987578 update_engine[1470]: I0209 14:14:07.986977 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 14:14:07.987578 update_engine[1470]: E0209 14:14:07.987175 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled 
Feb 9 14:14:07.987578 update_engine[1470]: I0209 14:14:07.987368 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 14:14:13.242477 systemd[1]: Started sshd@7-139.178.88.165:22-147.75.109.163:57032.service. Feb 9 14:14:13.280486 sshd[4157]: Accepted publickey for core from 147.75.109.163 port 57032 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:14:13.281676 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:14:13.285388 systemd-logind[1468]: New session 8 of user core. Feb 9 14:14:13.286499 systemd[1]: Started session-8.scope. Feb 9 14:14:13.378177 sshd[4157]: pam_unix(sshd:session): session closed for user core Feb 9 14:14:13.379666 systemd[1]: sshd@7-139.178.88.165:22-147.75.109.163:57032.service: Deactivated successfully. Feb 9 14:14:13.380133 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 14:14:13.380536 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. Feb 9 14:14:13.381050 systemd-logind[1468]: Removed session 8. Feb 9 14:14:17.987553 update_engine[1470]: I0209 14:14:17.987376 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 14:14:17.988625 update_engine[1470]: I0209 14:14:17.987944 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 14:14:17.988625 update_engine[1470]: E0209 14:14:17.988210 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 14:14:17.988625 update_engine[1470]: I0209 14:14:17.988492 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 14:14:18.389660 systemd[1]: Started sshd@8-139.178.88.165:22-147.75.109.163:54386.service. 
Feb 9 14:14:18.463738 sshd[4186]: Accepted publickey for core from 147.75.109.163 port 54386 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:14:18.464438 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:14:18.466795 systemd-logind[1468]: New session 9 of user core. Feb 9 14:14:18.467274 systemd[1]: Started session-9.scope. Feb 9 14:14:18.557512 sshd[4186]: pam_unix(sshd:session): session closed for user core Feb 9 14:14:18.558982 systemd[1]: sshd@8-139.178.88.165:22-147.75.109.163:54386.service: Deactivated successfully. Feb 9 14:14:18.559455 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 14:14:18.559922 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Feb 9 14:14:18.560468 systemd-logind[1468]: Removed session 9. Feb 9 14:14:23.567420 systemd[1]: Started sshd@9-139.178.88.165:22-147.75.109.163:54390.service. Feb 9 14:14:23.603439 sshd[4215]: Accepted publickey for core from 147.75.109.163 port 54390 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:14:23.604291 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:14:23.607153 systemd-logind[1468]: New session 10 of user core. Feb 9 14:14:23.607950 systemd[1]: Started session-10.scope. Feb 9 14:14:23.694306 sshd[4215]: pam_unix(sshd:session): session closed for user core Feb 9 14:14:23.695784 systemd[1]: sshd@9-139.178.88.165:22-147.75.109.163:54390.service: Deactivated successfully. Feb 9 14:14:23.696248 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 14:14:23.696680 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. Feb 9 14:14:23.697190 systemd-logind[1468]: Removed session 10. 
Feb 9 14:14:27.987563 update_engine[1470]: I0209 14:14:27.987474 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 14:14:27.988809 update_engine[1470]: I0209 14:14:27.988063 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 14:14:27.988809 update_engine[1470]: E0209 14:14:27.988371 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 14:14:27.988809 update_engine[1470]: I0209 14:14:27.988584 1470 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 14:14:27.988809 update_engine[1470]: I0209 14:14:27.988610 1470 omaha_request_action.cc:621] Omaha request response: Feb 9 14:14:27.988809 update_engine[1470]: E0209 14:14:27.988792 1470 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.988832 1470 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.988848 1470 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.988862 1470 update_attempter.cc:306] Processing Done. Feb 9 14:14:27.989688 update_engine[1470]: E0209 14:14:27.988896 1470 update_attempter.cc:619] Update failed. Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.988912 1470 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.988926 1470 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.988941 1470 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.989158 1470 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.989228 1470 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.989248 1470 omaha_request_action.cc:271] Request: Feb 9 14:14:27.989688 update_engine[1470]: Feb 9 14:14:27.989688 update_engine[1470]: Feb 9 14:14:27.989688 update_engine[1470]: Feb 9 14:14:27.989688 update_engine[1470]: Feb 9 14:14:27.989688 update_engine[1470]: Feb 9 14:14:27.989688 update_engine[1470]: Feb 9 14:14:27.989688 update_engine[1470]: I0209 14:14:27.989264 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 14:14:27.991658 update_engine[1470]: I0209 14:14:27.989704 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 14:14:27.991658 update_engine[1470]: E0209 14:14:27.989940 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 14:14:27.991658 update_engine[1470]: I0209 14:14:27.990141 1470 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 14:14:27.991658 update_engine[1470]: I0209 14:14:27.990166 1470 omaha_request_action.cc:621] Omaha request response: Feb 9 14:14:27.991658 update_engine[1470]: I0209 14:14:27.990183 1470 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 14:14:27.991658 update_engine[1470]: I0209 14:14:27.990196 1470 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 14:14:27.991658 update_engine[1470]: I0209 14:14:27.990209 1470 update_attempter.cc:306] Processing Done. Feb 9 14:14:27.991658 update_engine[1470]: I0209 14:14:27.990223 1470 update_attempter.cc:310] Error event sent. 
Feb 9 14:14:27.991658 update_engine[1470]: I0209 14:14:27.990250 1470 update_check_scheduler.cc:74] Next update check in 47m3s Feb 9 14:14:27.992491 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 14:14:27.992491 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 14:14:28.697747 systemd[1]: Started sshd@10-139.178.88.165:22-147.75.109.163:46780.service. Feb 9 14:14:28.768914 sshd[4241]: Accepted publickey for core from 147.75.109.163 port 46780 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:14:28.770668 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:14:28.776506 systemd-logind[1468]: New session 11 of user core. Feb 9 14:14:28.778128 systemd[1]: Started session-11.scope. Feb 9 14:14:28.915386 sshd[4241]: pam_unix(sshd:session): session closed for user core Feb 9 14:14:28.917291 systemd[1]: sshd@10-139.178.88.165:22-147.75.109.163:46780.service: Deactivated successfully. Feb 9 14:14:28.917672 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 14:14:28.918022 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Feb 9 14:14:28.918684 systemd[1]: Started sshd@11-139.178.88.165:22-147.75.109.163:46786.service. Feb 9 14:14:28.919067 systemd-logind[1468]: Removed session 11. Feb 9 14:14:28.955262 sshd[4267]: Accepted publickey for core from 147.75.109.163 port 46786 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:14:28.956189 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:14:28.958934 systemd-logind[1468]: New session 12 of user core. Feb 9 14:14:28.959653 systemd[1]: Started session-12.scope. 
Feb 9 14:14:29.391553 sshd[4267]: pam_unix(sshd:session): session closed for user core Feb 9 14:14:29.393497 systemd[1]: sshd@11-139.178.88.165:22-147.75.109.163:46786.service: Deactivated successfully. Feb 9 14:14:29.393851 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 14:14:29.394153 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Feb 9 14:14:29.394794 systemd[1]: Started sshd@12-139.178.88.165:22-147.75.109.163:46800.service. Feb 9 14:14:29.395136 systemd-logind[1468]: Removed session 12. Feb 9 14:14:29.465966 sshd[4290]: Accepted publickey for core from 147.75.109.163 port 46800 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:14:29.467896 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:14:29.473818 systemd-logind[1468]: New session 13 of user core. Feb 9 14:14:29.475138 systemd[1]: Started session-13.scope. Feb 9 14:14:29.609959 sshd[4290]: pam_unix(sshd:session): session closed for user core Feb 9 14:14:29.611464 systemd[1]: sshd@12-139.178.88.165:22-147.75.109.163:46800.service: Deactivated successfully. Feb 9 14:14:29.611913 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 14:14:29.612219 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Feb 9 14:14:29.612866 systemd-logind[1468]: Removed session 13. Feb 9 14:14:34.619949 systemd[1]: Started sshd@13-139.178.88.165:22-147.75.109.163:46826.service. Feb 9 14:14:34.656309 sshd[4316]: Accepted publickey for core from 147.75.109.163 port 46826 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:14:34.657426 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:14:34.660760 systemd-logind[1468]: New session 14 of user core. Feb 9 14:14:34.661445 systemd[1]: Started session-14.scope. 
Feb 9 14:14:34.751476 sshd[4316]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:34.753115 systemd[1]: sshd@13-139.178.88.165:22-147.75.109.163:46826.service: Deactivated successfully.
Feb 9 14:14:34.753606 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 14:14:34.754013 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit.
Feb 9 14:14:34.754584 systemd-logind[1468]: Removed session 14.
Feb 9 14:14:39.760939 systemd[1]: Started sshd@14-139.178.88.165:22-147.75.109.163:46832.service.
Feb 9 14:14:39.797466 sshd[4343]: Accepted publickey for core from 147.75.109.163 port 46832 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:14:39.798380 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:14:39.801564 systemd-logind[1468]: New session 15 of user core.
Feb 9 14:14:39.802206 systemd[1]: Started session-15.scope.
Feb 9 14:14:39.889387 sshd[4343]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:39.890961 systemd[1]: sshd@14-139.178.88.165:22-147.75.109.163:46832.service: Deactivated successfully.
Feb 9 14:14:39.891417 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 14:14:39.891823 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit.
Feb 9 14:14:39.892268 systemd-logind[1468]: Removed session 15.
Feb 9 14:14:44.899469 systemd[1]: Started sshd@15-139.178.88.165:22-147.75.109.163:33662.service.
Feb 9 14:14:44.935603 sshd[4368]: Accepted publickey for core from 147.75.109.163 port 33662 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:14:44.936540 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:14:44.939610 systemd-logind[1468]: New session 16 of user core.
Feb 9 14:14:44.940411 systemd[1]: Started session-16.scope.
Feb 9 14:14:45.025492 sshd[4368]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:45.027218 systemd[1]: sshd@15-139.178.88.165:22-147.75.109.163:33662.service: Deactivated successfully.
Feb 9 14:14:45.027571 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 14:14:45.027969 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit.
Feb 9 14:14:45.028549 systemd[1]: Started sshd@16-139.178.88.165:22-147.75.109.163:33668.service.
Feb 9 14:14:45.028958 systemd-logind[1468]: Removed session 16.
Feb 9 14:14:45.065026 sshd[4393]: Accepted publickey for core from 147.75.109.163 port 33668 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:14:45.065830 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:14:45.068389 systemd-logind[1468]: New session 17 of user core.
Feb 9 14:14:45.068963 systemd[1]: Started session-17.scope.
Feb 9 14:14:46.177630 sshd[4393]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:46.179756 systemd[1]: sshd@16-139.178.88.165:22-147.75.109.163:33668.service: Deactivated successfully.
Feb 9 14:14:46.180179 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 14:14:46.180573 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit.
Feb 9 14:14:46.181242 systemd[1]: Started sshd@17-139.178.88.165:22-147.75.109.163:33684.service.
Feb 9 14:14:46.181778 systemd-logind[1468]: Removed session 17.
Feb 9 14:14:46.269514 sshd[4415]: Accepted publickey for core from 147.75.109.163 port 33684 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:14:46.271473 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:14:46.277427 systemd-logind[1468]: New session 18 of user core.
Feb 9 14:14:46.278783 systemd[1]: Started session-18.scope.
Feb 9 14:14:47.116141 sshd[4415]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:47.118038 systemd[1]: sshd@17-139.178.88.165:22-147.75.109.163:33684.service: Deactivated successfully.
Feb 9 14:14:47.118459 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 14:14:47.118887 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit.
Feb 9 14:14:47.119738 systemd[1]: Started sshd@18-139.178.88.165:22-147.75.109.163:33690.service.
Feb 9 14:14:47.120196 systemd-logind[1468]: Removed session 18.
Feb 9 14:14:47.192126 sshd[4444]: Accepted publickey for core from 147.75.109.163 port 33690 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:14:47.193832 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:14:47.198338 systemd-logind[1468]: New session 19 of user core.
Feb 9 14:14:47.199457 systemd[1]: Started session-19.scope.
Feb 9 14:14:47.414006 sshd[4444]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:47.415837 systemd[1]: sshd@18-139.178.88.165:22-147.75.109.163:33690.service: Deactivated successfully.
Feb 9 14:14:47.416217 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 14:14:47.416572 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit.
Feb 9 14:14:47.417324 systemd[1]: Started sshd@19-139.178.88.165:22-147.75.109.163:33702.service.
Feb 9 14:14:47.417928 systemd-logind[1468]: Removed session 19.
Feb 9 14:14:47.453890 sshd[4471]: Accepted publickey for core from 147.75.109.163 port 33702 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:14:47.454584 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:14:47.457033 systemd-logind[1468]: New session 20 of user core.
Feb 9 14:14:47.457640 systemd[1]: Started session-20.scope.
Feb 9 14:14:47.588667 sshd[4471]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:47.590084 systemd[1]: sshd@19-139.178.88.165:22-147.75.109.163:33702.service: Deactivated successfully.
Feb 9 14:14:47.590532 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 14:14:47.590846 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit.
Feb 9 14:14:47.591213 systemd-logind[1468]: Removed session 20.
Feb 9 14:14:52.597808 systemd[1]: Started sshd@20-139.178.88.165:22-147.75.109.163:33714.service.
Feb 9 14:14:52.642852 sshd[4501]: Accepted publickey for core from 147.75.109.163 port 33714 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:14:52.643523 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:14:52.645884 systemd-logind[1468]: New session 21 of user core.
Feb 9 14:14:52.646361 systemd[1]: Started session-21.scope.
Feb 9 14:14:52.764528 sshd[4501]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:52.765924 systemd[1]: sshd@20-139.178.88.165:22-147.75.109.163:33714.service: Deactivated successfully.
Feb 9 14:14:52.766394 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 14:14:52.766719 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit.
Feb 9 14:14:52.767123 systemd-logind[1468]: Removed session 21.
Feb 9 14:14:57.775099 systemd[1]: Started sshd@21-139.178.88.165:22-147.75.109.163:54944.service.
Feb 9 14:14:57.811403 sshd[4527]: Accepted publickey for core from 147.75.109.163 port 54944 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:14:57.812068 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:14:57.814590 systemd-logind[1468]: New session 22 of user core.
Feb 9 14:14:57.815102 systemd[1]: Started session-22.scope.
Feb 9 14:14:57.902481 sshd[4527]: pam_unix(sshd:session): session closed for user core
Feb 9 14:14:57.904003 systemd[1]: sshd@21-139.178.88.165:22-147.75.109.163:54944.service: Deactivated successfully.
Feb 9 14:14:57.904443 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 14:14:57.904900 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit.
Feb 9 14:14:57.905400 systemd-logind[1468]: Removed session 22.
Feb 9 14:15:02.910922 systemd[1]: Started sshd@22-139.178.88.165:22-147.75.109.163:54950.service.
Feb 9 14:15:02.947347 sshd[4552]: Accepted publickey for core from 147.75.109.163 port 54950 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:15:02.948329 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:15:02.951522 systemd-logind[1468]: New session 23 of user core.
Feb 9 14:15:02.952313 systemd[1]: Started session-23.scope.
Feb 9 14:15:03.041240 sshd[4552]: pam_unix(sshd:session): session closed for user core
Feb 9 14:15:03.042816 systemd[1]: sshd@22-139.178.88.165:22-147.75.109.163:54950.service: Deactivated successfully.
Feb 9 14:15:03.043272 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 14:15:03.043739 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit.
Feb 9 14:15:03.044266 systemd-logind[1468]: Removed session 23.
Feb 9 14:15:08.050526 systemd[1]: Started sshd@23-139.178.88.165:22-147.75.109.163:55274.service.
Feb 9 14:15:08.087638 sshd[4579]: Accepted publickey for core from 147.75.109.163 port 55274 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:15:08.091255 sshd[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:15:08.101618 systemd-logind[1468]: New session 24 of user core.
Feb 9 14:15:08.104178 systemd[1]: Started session-24.scope.
Feb 9 14:15:08.205480 sshd[4579]: pam_unix(sshd:session): session closed for user core
Feb 9 14:15:08.206991 systemd[1]: sshd@23-139.178.88.165:22-147.75.109.163:55274.service: Deactivated successfully.
Feb 9 14:15:08.207452 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 14:15:08.207856 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit.
Feb 9 14:15:08.208310 systemd-logind[1468]: Removed session 24.
Feb 9 14:15:13.214155 systemd[1]: Started sshd@24-139.178.88.165:22-147.75.109.163:55284.service.
Feb 9 14:15:13.250335 sshd[4603]: Accepted publickey for core from 147.75.109.163 port 55284 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:15:13.251180 sshd[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:15:13.254075 systemd-logind[1468]: New session 25 of user core.
Feb 9 14:15:13.254660 systemd[1]: Started session-25.scope.
Feb 9 14:15:13.340342 sshd[4603]: pam_unix(sshd:session): session closed for user core
Feb 9 14:15:13.341887 systemd[1]: sshd@24-139.178.88.165:22-147.75.109.163:55284.service: Deactivated successfully.
Feb 9 14:15:13.342342 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 14:15:13.342755 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit.
Feb 9 14:15:13.343272 systemd-logind[1468]: Removed session 25.
Feb 9 14:15:18.343484 systemd[1]: Started sshd@25-139.178.88.165:22-147.75.109.163:36398.service.
Feb 9 14:15:18.381375 sshd[4629]: Accepted publickey for core from 147.75.109.163 port 36398 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:15:18.382425 sshd[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:15:18.385560 systemd-logind[1468]: New session 26 of user core.
Feb 9 14:15:18.386202 systemd[1]: Started session-26.scope.
Feb 9 14:15:18.471872 sshd[4629]: pam_unix(sshd:session): session closed for user core
Feb 9 14:15:18.473747 systemd[1]: sshd@25-139.178.88.165:22-147.75.109.163:36398.service: Deactivated successfully.
Feb 9 14:15:18.474134 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 14:15:18.474568 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit.
Feb 9 14:15:18.475214 systemd[1]: Started sshd@26-139.178.88.165:22-147.75.109.163:36412.service.
Feb 9 14:15:18.475792 systemd-logind[1468]: Removed session 26.
Feb 9 14:15:18.512541 sshd[4654]: Accepted publickey for core from 147.75.109.163 port 36412 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:15:18.513451 sshd[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:15:18.516560 systemd-logind[1468]: New session 27 of user core.
Feb 9 14:15:18.517202 systemd[1]: Started session-27.scope.
Feb 9 14:15:19.902391 env[1480]: time="2024-02-09T14:15:19.902261536Z" level=info msg="StopContainer for \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\" with timeout 30 (s)"
Feb 9 14:15:19.903505 env[1480]: time="2024-02-09T14:15:19.903072547Z" level=info msg="Stop container \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\" with signal terminated"
Feb 9 14:15:19.940923 systemd[1]: cri-containerd-96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3.scope: Deactivated successfully.
Feb 9 14:15:19.971586 env[1480]: time="2024-02-09T14:15:19.971490923Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 14:15:19.980992 env[1480]: time="2024-02-09T14:15:19.980902514Z" level=info msg="StopContainer for \"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\" with timeout 2 (s)"
Feb 9 14:15:19.981323 env[1480]: time="2024-02-09T14:15:19.981261714Z" level=info msg="Stop container \"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\" with signal terminated"
Feb 9 14:15:19.986894 env[1480]: time="2024-02-09T14:15:19.986767620Z" level=info msg="shim disconnected" id=96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3
Feb 9 14:15:19.986894 env[1480]: time="2024-02-09T14:15:19.986872725Z" level=warning msg="cleaning up after shim disconnected" id=96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3 namespace=k8s.io
Feb 9 14:15:19.987272 env[1480]: time="2024-02-09T14:15:19.986906498Z" level=info msg="cleaning up dead shim"
Feb 9 14:15:19.987501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3-rootfs.mount: Deactivated successfully.
Feb 9 14:15:19.999421 env[1480]: time="2024-02-09T14:15:19.999329079Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4721 runtime=io.containerd.runc.v2\n"
Feb 9 14:15:20.001111 env[1480]: time="2024-02-09T14:15:20.001022673Z" level=info msg="StopContainer for \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\" returns successfully"
Feb 9 14:15:20.002079 env[1480]: time="2024-02-09T14:15:20.002002928Z" level=info msg="StopPodSandbox for \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\""
Feb 9 14:15:20.002255 env[1480]: time="2024-02-09T14:15:20.002117964Z" level=info msg="Container to stop \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 14:15:20.003774 systemd-networkd[1327]: lxc_health: Link DOWN
Feb 9 14:15:20.003784 systemd-networkd[1327]: lxc_health: Lost carrier
Feb 9 14:15:20.005941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209-shm.mount: Deactivated successfully.
Feb 9 14:15:20.014593 systemd[1]: cri-containerd-1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209.scope: Deactivated successfully.
Feb 9 14:15:20.051234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209-rootfs.mount: Deactivated successfully.
Feb 9 14:15:20.061710 env[1480]: time="2024-02-09T14:15:20.061652013Z" level=info msg="shim disconnected" id=1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209
Feb 9 14:15:20.061874 env[1480]: time="2024-02-09T14:15:20.061711845Z" level=warning msg="cleaning up after shim disconnected" id=1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209 namespace=k8s.io
Feb 9 14:15:20.061874 env[1480]: time="2024-02-09T14:15:20.061730004Z" level=info msg="cleaning up dead shim"
Feb 9 14:15:20.062644 systemd[1]: cri-containerd-f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035.scope: Deactivated successfully.
Feb 9 14:15:20.062946 systemd[1]: cri-containerd-f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035.scope: Consumed 6.965s CPU time.
Feb 9 14:15:20.069762 env[1480]: time="2024-02-09T14:15:20.069720651Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4755 runtime=io.containerd.runc.v2\n"
Feb 9 14:15:20.070067 env[1480]: time="2024-02-09T14:15:20.070036287Z" level=info msg="TearDown network for sandbox \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\" successfully"
Feb 9 14:15:20.070137 env[1480]: time="2024-02-09T14:15:20.070065743Z" level=info msg="StopPodSandbox for \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\" returns successfully"
Feb 9 14:15:20.092664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035-rootfs.mount: Deactivated successfully.
Feb 9 14:15:20.113046 env[1480]: time="2024-02-09T14:15:20.112951278Z" level=info msg="shim disconnected" id=f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035
Feb 9 14:15:20.113046 env[1480]: time="2024-02-09T14:15:20.113015940Z" level=warning msg="cleaning up after shim disconnected" id=f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035 namespace=k8s.io
Feb 9 14:15:20.113046 env[1480]: time="2024-02-09T14:15:20.113034871Z" level=info msg="cleaning up dead shim"
Feb 9 14:15:20.126820 env[1480]: time="2024-02-09T14:15:20.126749667Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4779 runtime=io.containerd.runc.v2\n"
Feb 9 14:15:20.128694 env[1480]: time="2024-02-09T14:15:20.128630412Z" level=info msg="StopContainer for \"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\" returns successfully"
Feb 9 14:15:20.129590 env[1480]: time="2024-02-09T14:15:20.129495847Z" level=info msg="StopPodSandbox for \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\""
Feb 9 14:15:20.129759 env[1480]: time="2024-02-09T14:15:20.129609017Z" level=info msg="Container to stop \"95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 14:15:20.129759 env[1480]: time="2024-02-09T14:15:20.129643539Z" level=info msg="Container to stop \"b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 14:15:20.129759 env[1480]: time="2024-02-09T14:15:20.129669014Z" level=info msg="Container to stop \"90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 14:15:20.129759 env[1480]: time="2024-02-09T14:15:20.129695917Z" level=info msg="Container to stop \"3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 14:15:20.129759 env[1480]: time="2024-02-09T14:15:20.129719140Z" level=info msg="Container to stop \"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 14:15:20.141176 systemd[1]: cri-containerd-ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77.scope: Deactivated successfully.
Feb 9 14:15:20.171899 env[1480]: time="2024-02-09T14:15:20.171699926Z" level=info msg="shim disconnected" id=ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77
Feb 9 14:15:20.171899 env[1480]: time="2024-02-09T14:15:20.171819732Z" level=warning msg="cleaning up after shim disconnected" id=ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77 namespace=k8s.io
Feb 9 14:15:20.171899 env[1480]: time="2024-02-09T14:15:20.171859453Z" level=info msg="cleaning up dead shim"
Feb 9 14:15:20.197879 env[1480]: time="2024-02-09T14:15:20.197810496Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4810 runtime=io.containerd.runc.v2\n"
Feb 9 14:15:20.198445 env[1480]: time="2024-02-09T14:15:20.198398867Z" level=info msg="TearDown network for sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" successfully"
Feb 9 14:15:20.198585 env[1480]: time="2024-02-09T14:15:20.198444028Z" level=info msg="StopPodSandbox for \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" returns successfully"
Feb 9 14:15:20.218631 kubelet[2523]: I0209 14:15:20.218539 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pl8m\" (UniqueName: \"kubernetes.io/projected/b19c4d24-1ffe-427e-b653-176eff29c216-kube-api-access-7pl8m\") pod \"b19c4d24-1ffe-427e-b653-176eff29c216\" (UID: \"b19c4d24-1ffe-427e-b653-176eff29c216\") "
Feb 9 14:15:20.219454 kubelet[2523]: I0209 14:15:20.218671 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-run\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.219454 kubelet[2523]: I0209 14:15:20.218731 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cni-path\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.219454 kubelet[2523]: I0209 14:15:20.218786 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-xtables-lock\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.219454 kubelet[2523]: I0209 14:15:20.218762 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.219454 kubelet[2523]: I0209 14:15:20.218854 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b19c4d24-1ffe-427e-b653-176eff29c216-cilium-config-path\") pod \"b19c4d24-1ffe-427e-b653-176eff29c216\" (UID: \"b19c4d24-1ffe-427e-b653-176eff29c216\") "
Feb 9 14:15:20.219454 kubelet[2523]: I0209 14:15:20.218844 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cni-path" (OuterVolumeSpecName: "cni-path") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.220138 kubelet[2523]: I0209 14:15:20.218913 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-hostproc\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.220138 kubelet[2523]: I0209 14:15:20.218894 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.220138 kubelet[2523]: I0209 14:15:20.218965 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-bpf-maps\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.220138 kubelet[2523]: I0209 14:15:20.219019 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-host-proc-sys-net\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.220138 kubelet[2523]: I0209 14:15:20.219036 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-hostproc" (OuterVolumeSpecName: "hostproc") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.220700 kubelet[2523]: I0209 14:15:20.219077 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.220700 kubelet[2523]: I0209 14:15:20.219088 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00602c6b-f3bb-4d64-9edd-eb6411171a3c-clustermesh-secrets\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.220700 kubelet[2523]: I0209 14:15:20.219130 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.220700 kubelet[2523]: I0209 14:15:20.219233 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-etc-cni-netd\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.220700 kubelet[2523]: I0209 14:15:20.219288 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.221220 kubelet[2523]: I0209 14:15:20.219379 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-host-proc-sys-kernel\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.221220 kubelet[2523]: I0209 14:15:20.219468 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.221220 kubelet[2523]: I0209 14:15:20.219498 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-cgroup\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.221220 kubelet[2523]: I0209 14:15:20.219582 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.221220 kubelet[2523]: I0209 14:15:20.219611 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-lib-modules\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.221792 kubelet[2523]: I0209 14:15:20.219713 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 14:15:20.221792 kubelet[2523]: I0209 14:15:20.219735 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-config-path\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.221792 kubelet[2523]: I0209 14:15:20.219882 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00602c6b-f3bb-4d64-9edd-eb6411171a3c-hubble-tls\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.221792 kubelet[2523]: I0209 14:15:20.219999 2523 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-xtables-lock\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.221792 kubelet[2523]: I0209 14:15:20.220053 2523 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-run\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.221792 kubelet[2523]: I0209 14:15:20.220088 2523 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cni-path\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.221792 kubelet[2523]: I0209 14:15:20.220133 2523 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-hostproc\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.222545 kubelet[2523]: I0209 14:15:20.220167 2523 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-bpf-maps\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.222545 kubelet[2523]: I0209 14:15:20.220210 2523 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-host-proc-sys-net\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.222545 kubelet[2523]: I0209 14:15:20.220248 2523 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-etc-cni-netd\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.222545 kubelet[2523]: I0209 14:15:20.220290 2523 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.222545 kubelet[2523]: I0209 14:15:20.220355 2523 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-cgroup\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.222545 kubelet[2523]: I0209 14:15:20.220424 2523 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00602c6b-f3bb-4d64-9edd-eb6411171a3c-lib-modules\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.224745 kubelet[2523]: I0209 14:15:20.224683 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b19c4d24-1ffe-427e-b653-176eff29c216-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b19c4d24-1ffe-427e-b653-176eff29c216" (UID: "b19c4d24-1ffe-427e-b653-176eff29c216"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 14:15:20.225584 kubelet[2523]: I0209 14:15:20.225518 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19c4d24-1ffe-427e-b653-176eff29c216-kube-api-access-7pl8m" (OuterVolumeSpecName: "kube-api-access-7pl8m") pod "b19c4d24-1ffe-427e-b653-176eff29c216" (UID: "b19c4d24-1ffe-427e-b653-176eff29c216"). InnerVolumeSpecName "kube-api-access-7pl8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 14:15:20.225919 kubelet[2523]: I0209 14:15:20.225861 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00602c6b-f3bb-4d64-9edd-eb6411171a3c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 14:15:20.226561 kubelet[2523]: I0209 14:15:20.226287 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00602c6b-f3bb-4d64-9edd-eb6411171a3c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 14:15:20.228776 kubelet[2523]: I0209 14:15:20.228704 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 14:15:20.321599 kubelet[2523]: I0209 14:15:20.321478 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h75xh\" (UniqueName: \"kubernetes.io/projected/00602c6b-f3bb-4d64-9edd-eb6411171a3c-kube-api-access-h75xh\") pod \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\" (UID: \"00602c6b-f3bb-4d64-9edd-eb6411171a3c\") "
Feb 9 14:15:20.321599 kubelet[2523]: I0209 14:15:20.321609 2523 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7pl8m\" (UniqueName: \"kubernetes.io/projected/b19c4d24-1ffe-427e-b653-176eff29c216-kube-api-access-7pl8m\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.322016 kubelet[2523]: I0209 14:15:20.321675 2523 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b19c4d24-1ffe-427e-b653-176eff29c216-cilium-config-path\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\""
Feb 9 14:15:20.322016 kubelet[2523]: I0209 14:15:20.321710 2523 reconciler_common.go:300] "Volume detached for
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00602c6b-f3bb-4d64-9edd-eb6411171a3c-clustermesh-secrets\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:20.322016 kubelet[2523]: I0209 14:15:20.321749 2523 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00602c6b-f3bb-4d64-9edd-eb6411171a3c-cilium-config-path\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:20.322016 kubelet[2523]: I0209 14:15:20.321784 2523 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00602c6b-f3bb-4d64-9edd-eb6411171a3c-hubble-tls\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:20.328039 kubelet[2523]: I0209 14:15:20.327930 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00602c6b-f3bb-4d64-9edd-eb6411171a3c-kube-api-access-h75xh" (OuterVolumeSpecName: "kube-api-access-h75xh") pod "00602c6b-f3bb-4d64-9edd-eb6411171a3c" (UID: "00602c6b-f3bb-4d64-9edd-eb6411171a3c"). InnerVolumeSpecName "kube-api-access-h75xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 14:15:20.422763 kubelet[2523]: I0209 14:15:20.422539 2523 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h75xh\" (UniqueName: \"kubernetes.io/projected/00602c6b-f3bb-4d64-9edd-eb6411171a3c-kube-api-access-h75xh\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:20.588344 kubelet[2523]: I0209 14:15:20.588327 2523 scope.go:117] "RemoveContainer" containerID="96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3" Feb 9 14:15:20.589246 env[1480]: time="2024-02-09T14:15:20.589226887Z" level=info msg="RemoveContainer for \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\"" Feb 9 14:15:20.590153 systemd[1]: Removed slice kubepods-besteffort-podb19c4d24_1ffe_427e_b653_176eff29c216.slice. 
Feb 9 14:15:20.590952 systemd[1]: Removed slice kubepods-burstable-pod00602c6b_f3bb_4d64_9edd_eb6411171a3c.slice. Feb 9 14:15:20.591026 systemd[1]: kubepods-burstable-pod00602c6b_f3bb_4d64_9edd_eb6411171a3c.slice: Consumed 7.023s CPU time. Feb 9 14:15:20.591505 env[1480]: time="2024-02-09T14:15:20.591453703Z" level=info msg="RemoveContainer for \"96290905e0e040afc92740f30c2d800e3c9cf7b234ea6610411974fee9016cb3\" returns successfully" Feb 9 14:15:20.591623 kubelet[2523]: I0209 14:15:20.591591 2523 scope.go:117] "RemoveContainer" containerID="90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62" Feb 9 14:15:20.592047 env[1480]: time="2024-02-09T14:15:20.592011428Z" level=info msg="RemoveContainer for \"90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62\"" Feb 9 14:15:20.593197 env[1480]: time="2024-02-09T14:15:20.593156573Z" level=info msg="RemoveContainer for \"90ed4fdeb332f4109fa74c3d0426857f126b39cc93318071caa4df96786cdc62\" returns successfully" Feb 9 14:15:20.593232 kubelet[2523]: I0209 14:15:20.593214 2523 scope.go:117] "RemoveContainer" containerID="3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f" Feb 9 14:15:20.593594 env[1480]: time="2024-02-09T14:15:20.593573231Z" level=info msg="RemoveContainer for \"3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f\"" Feb 9 14:15:20.594533 env[1480]: time="2024-02-09T14:15:20.594497686Z" level=info msg="RemoveContainer for \"3e4d2f9aae549c76d98e972839a58de836c827097ab03d86e8428181b47d280f\" returns successfully" Feb 9 14:15:20.594573 kubelet[2523]: I0209 14:15:20.594544 2523 scope.go:117] "RemoveContainer" containerID="f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035" Feb 9 14:15:20.595601 env[1480]: time="2024-02-09T14:15:20.595246541Z" level=info msg="RemoveContainer for \"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\"" Feb 9 14:15:20.619653 env[1480]: time="2024-02-09T14:15:20.619543547Z" level=info msg="RemoveContainer for 
\"f400400903621d7647e456eb66e8228a4ff57c03301b95fc25e8c9d7331db035\" returns successfully" Feb 9 14:15:20.620089 kubelet[2523]: I0209 14:15:20.620010 2523 scope.go:117] "RemoveContainer" containerID="95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29" Feb 9 14:15:20.622788 env[1480]: time="2024-02-09T14:15:20.622718784Z" level=info msg="RemoveContainer for \"95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29\"" Feb 9 14:15:20.627133 env[1480]: time="2024-02-09T14:15:20.627050376Z" level=info msg="RemoveContainer for \"95e0cdecb013aa68137d7d5529a680fcdb1508246598c6b72fcd90d5e8877b29\" returns successfully" Feb 9 14:15:20.627524 kubelet[2523]: I0209 14:15:20.627470 2523 scope.go:117] "RemoveContainer" containerID="b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648" Feb 9 14:15:20.630176 env[1480]: time="2024-02-09T14:15:20.630074302Z" level=info msg="RemoveContainer for \"b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648\"" Feb 9 14:15:20.635233 env[1480]: time="2024-02-09T14:15:20.635110149Z" level=info msg="RemoveContainer for \"b5887b332d5aaeb12f508eb74db42ac018d0652f36b6590c2674c6556383b648\" returns successfully" Feb 9 14:15:20.638546 env[1480]: time="2024-02-09T14:15:20.638419410Z" level=info msg="StopPodSandbox for \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\"" Feb 9 14:15:20.638758 env[1480]: time="2024-02-09T14:15:20.638631433Z" level=info msg="TearDown network for sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" successfully" Feb 9 14:15:20.638758 env[1480]: time="2024-02-09T14:15:20.638724824Z" level=info msg="StopPodSandbox for \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" returns successfully" Feb 9 14:15:20.639724 env[1480]: time="2024-02-09T14:15:20.639618272Z" level=info msg="RemovePodSandbox for \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\"" Feb 9 14:15:20.639910 env[1480]: 
time="2024-02-09T14:15:20.639747452Z" level=info msg="Forcibly stopping sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\"" Feb 9 14:15:20.640091 env[1480]: time="2024-02-09T14:15:20.639927542Z" level=info msg="TearDown network for sandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" successfully" Feb 9 14:15:20.644147 env[1480]: time="2024-02-09T14:15:20.644041250Z" level=info msg="RemovePodSandbox \"ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77\" returns successfully" Feb 9 14:15:20.644980 env[1480]: time="2024-02-09T14:15:20.644903160Z" level=info msg="StopPodSandbox for \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\"" Feb 9 14:15:20.645255 env[1480]: time="2024-02-09T14:15:20.645128063Z" level=info msg="TearDown network for sandbox \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\" successfully" Feb 9 14:15:20.645483 env[1480]: time="2024-02-09T14:15:20.645271906Z" level=info msg="StopPodSandbox for \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\" returns successfully" Feb 9 14:15:20.646173 env[1480]: time="2024-02-09T14:15:20.646081066Z" level=info msg="RemovePodSandbox for \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\"" Feb 9 14:15:20.646439 env[1480]: time="2024-02-09T14:15:20.646162479Z" level=info msg="Forcibly stopping sandbox \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\"" Feb 9 14:15:20.646439 env[1480]: time="2024-02-09T14:15:20.646371758Z" level=info msg="TearDown network for sandbox \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\" successfully" Feb 9 14:15:20.650165 env[1480]: time="2024-02-09T14:15:20.650097360Z" level=info msg="RemovePodSandbox \"1add6ba781b8927ab85f016f30fc446367b48affbff286be37331fdb54bfb209\" returns successfully" Feb 9 14:15:20.744862 kubelet[2523]: E0209 14:15:20.744673 2523 kubelet.go:2855] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 14:15:20.939723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77-rootfs.mount: Deactivated successfully. Feb 9 14:15:20.940001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab77a8be1a6048b5a66e6590d859c64150b94748faa3f9f6067d879d13eafc77-shm.mount: Deactivated successfully. Feb 9 14:15:20.940218 systemd[1]: var-lib-kubelet-pods-b19c4d24\x2d1ffe\x2d427e\x2db653\x2d176eff29c216-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7pl8m.mount: Deactivated successfully. Feb 9 14:15:20.940253 systemd[1]: var-lib-kubelet-pods-00602c6b\x2df3bb\x2d4d64\x2d9edd\x2deb6411171a3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh75xh.mount: Deactivated successfully. Feb 9 14:15:20.940285 systemd[1]: var-lib-kubelet-pods-00602c6b\x2df3bb\x2d4d64\x2d9edd\x2deb6411171a3c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 14:15:20.940319 systemd[1]: var-lib-kubelet-pods-00602c6b\x2df3bb\x2d4d64\x2d9edd\x2deb6411171a3c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 14:15:21.837053 sshd[4654]: pam_unix(sshd:session): session closed for user core Feb 9 14:15:21.839046 systemd[1]: sshd@26-139.178.88.165:22-147.75.109.163:36412.service: Deactivated successfully. Feb 9 14:15:21.839440 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 14:15:21.839945 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit. Feb 9 14:15:21.840620 systemd[1]: Started sshd@27-139.178.88.165:22-147.75.109.163:36422.service. Feb 9 14:15:21.841022 systemd-logind[1468]: Removed session 27. 
Feb 9 14:15:21.877371 sshd[4830]: Accepted publickey for core from 147.75.109.163 port 36422 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:15:21.878249 sshd[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:15:21.881577 systemd-logind[1468]: New session 28 of user core. Feb 9 14:15:21.882300 systemd[1]: Started session-28.scope. Feb 9 14:15:22.035083 kubelet[2523]: I0209 14:15:22.035067 2523 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-80177560a3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T14:15:22Z","lastTransitionTime":"2024-02-09T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 14:15:22.252596 sshd[4830]: pam_unix(sshd:session): session closed for user core Feb 9 14:15:22.254453 systemd[1]: sshd@27-139.178.88.165:22-147.75.109.163:36422.service: Deactivated successfully. Feb 9 14:15:22.254815 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 14:15:22.255123 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit. Feb 9 14:15:22.255856 systemd[1]: Started sshd@28-139.178.88.165:22-147.75.109.163:36424.service. Feb 9 14:15:22.256282 systemd-logind[1468]: Removed session 28. 
Feb 9 14:15:22.260461 kubelet[2523]: I0209 14:15:22.260435 2523 topology_manager.go:215] "Topology Admit Handler" podUID="c6681cb8-0129-4e9c-9fce-00d659eca2e2" podNamespace="kube-system" podName="cilium-szsz7" Feb 9 14:15:22.260569 kubelet[2523]: E0209 14:15:22.260492 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00602c6b-f3bb-4d64-9edd-eb6411171a3c" containerName="mount-bpf-fs" Feb 9 14:15:22.260569 kubelet[2523]: E0209 14:15:22.260505 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00602c6b-f3bb-4d64-9edd-eb6411171a3c" containerName="clean-cilium-state" Feb 9 14:15:22.260569 kubelet[2523]: E0209 14:15:22.260513 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00602c6b-f3bb-4d64-9edd-eb6411171a3c" containerName="apply-sysctl-overwrites" Feb 9 14:15:22.260569 kubelet[2523]: E0209 14:15:22.260520 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b19c4d24-1ffe-427e-b653-176eff29c216" containerName="cilium-operator" Feb 9 14:15:22.260569 kubelet[2523]: E0209 14:15:22.260526 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00602c6b-f3bb-4d64-9edd-eb6411171a3c" containerName="cilium-agent" Feb 9 14:15:22.260569 kubelet[2523]: E0209 14:15:22.260533 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00602c6b-f3bb-4d64-9edd-eb6411171a3c" containerName="mount-cgroup" Feb 9 14:15:22.260569 kubelet[2523]: I0209 14:15:22.260554 2523 memory_manager.go:346] "RemoveStaleState removing state" podUID="b19c4d24-1ffe-427e-b653-176eff29c216" containerName="cilium-operator" Feb 9 14:15:22.260569 kubelet[2523]: I0209 14:15:22.260559 2523 memory_manager.go:346] "RemoveStaleState removing state" podUID="00602c6b-f3bb-4d64-9edd-eb6411171a3c" containerName="cilium-agent" Feb 9 14:15:22.264426 systemd[1]: Created slice kubepods-burstable-podc6681cb8_0129_4e9c_9fce_00d659eca2e2.slice. 
Feb 9 14:15:22.295131 sshd[4855]: Accepted publickey for core from 147.75.109.163 port 36424 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:15:22.298516 sshd[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:15:22.308963 systemd-logind[1468]: New session 29 of user core. Feb 9 14:15:22.311616 systemd[1]: Started session-29.scope. Feb 9 14:15:22.336674 kubelet[2523]: I0209 14:15:22.336619 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-cgroup\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.336997 kubelet[2523]: I0209 14:15:22.336720 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-bpf-maps\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.336997 kubelet[2523]: I0209 14:15:22.336950 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6681cb8-0129-4e9c-9fce-00d659eca2e2-clustermesh-secrets\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.337385 kubelet[2523]: I0209 14:15:22.337176 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-config-path\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.337385 kubelet[2523]: I0209 14:15:22.337353 2523 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-etc-cni-netd\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.337805 kubelet[2523]: I0209 14:15:22.337505 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-xtables-lock\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.337805 kubelet[2523]: I0209 14:15:22.337597 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k855l\" (UniqueName: \"kubernetes.io/projected/c6681cb8-0129-4e9c-9fce-00d659eca2e2-kube-api-access-k855l\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.338180 kubelet[2523]: I0209 14:15:22.337822 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-hostproc\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.338180 kubelet[2523]: I0209 14:15:22.337975 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cni-path\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.338180 kubelet[2523]: I0209 14:15:22.338093 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-ipsec-secrets\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.338180 kubelet[2523]: I0209 14:15:22.338174 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6681cb8-0129-4e9c-9fce-00d659eca2e2-hubble-tls\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.338740 kubelet[2523]: I0209 14:15:22.338235 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-run\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.338740 kubelet[2523]: I0209 14:15:22.338296 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-host-proc-sys-net\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.338740 kubelet[2523]: I0209 14:15:22.338379 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-lib-modules\") pod \"cilium-szsz7\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.338740 kubelet[2523]: I0209 14:15:22.338558 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-host-proc-sys-kernel\") pod \"cilium-szsz7\" (UID: 
\"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " pod="kube-system/cilium-szsz7" Feb 9 14:15:22.482930 sshd[4855]: pam_unix(sshd:session): session closed for user core Feb 9 14:15:22.486227 systemd[1]: sshd@28-139.178.88.165:22-147.75.109.163:36424.service: Deactivated successfully. Feb 9 14:15:22.486939 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 14:15:22.487674 systemd-logind[1468]: Session 29 logged out. Waiting for processes to exit. Feb 9 14:15:22.488775 systemd[1]: Started sshd@29-139.178.88.165:22-147.75.109.163:36430.service. Feb 9 14:15:22.489545 systemd-logind[1468]: Removed session 29. Feb 9 14:15:22.504784 env[1480]: time="2024-02-09T14:15:22.504701372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szsz7,Uid:c6681cb8-0129-4e9c-9fce-00d659eca2e2,Namespace:kube-system,Attempt:0,}" Feb 9 14:15:22.513517 env[1480]: time="2024-02-09T14:15:22.513426505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:15:22.513517 env[1480]: time="2024-02-09T14:15:22.513463600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:15:22.513517 env[1480]: time="2024-02-09T14:15:22.513476823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:15:22.513708 env[1480]: time="2024-02-09T14:15:22.513602013Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a pid=4893 runtime=io.containerd.runc.v2 Feb 9 14:15:22.534757 systemd[1]: Started cri-containerd-d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a.scope. 
Feb 9 14:15:22.537788 sshd[4884]: Accepted publickey for core from 147.75.109.163 port 36430 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:15:22.539099 sshd[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:15:22.542755 systemd-logind[1468]: New session 30 of user core. Feb 9 14:15:22.543685 systemd[1]: Started session-30.scope. Feb 9 14:15:22.567702 env[1480]: time="2024-02-09T14:15:22.567658123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szsz7,Uid:c6681cb8-0129-4e9c-9fce-00d659eca2e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\"" Feb 9 14:15:22.569922 env[1480]: time="2024-02-09T14:15:22.569895990Z" level=info msg="CreateContainer within sandbox \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 14:15:22.576829 env[1480]: time="2024-02-09T14:15:22.576773890Z" level=info msg="CreateContainer within sandbox \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76\"" Feb 9 14:15:22.577166 env[1480]: time="2024-02-09T14:15:22.577109775Z" level=info msg="StartContainer for \"f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76\"" Feb 9 14:15:22.580509 kubelet[2523]: I0209 14:15:22.580487 2523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="00602c6b-f3bb-4d64-9edd-eb6411171a3c" path="/var/lib/kubelet/pods/00602c6b-f3bb-4d64-9edd-eb6411171a3c/volumes" Feb 9 14:15:22.581198 kubelet[2523]: I0209 14:15:22.581181 2523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b19c4d24-1ffe-427e-b653-176eff29c216" path="/var/lib/kubelet/pods/b19c4d24-1ffe-427e-b653-176eff29c216/volumes" Feb 9 14:15:22.605519 systemd[1]: Started 
cri-containerd-f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76.scope. Feb 9 14:15:22.617430 systemd[1]: cri-containerd-f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76.scope: Deactivated successfully. Feb 9 14:15:22.653111 env[1480]: time="2024-02-09T14:15:22.653071288Z" level=info msg="shim disconnected" id=f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76 Feb 9 14:15:22.653111 env[1480]: time="2024-02-09T14:15:22.653110897Z" level=warning msg="cleaning up after shim disconnected" id=f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76 namespace=k8s.io Feb 9 14:15:22.653301 env[1480]: time="2024-02-09T14:15:22.653119915Z" level=info msg="cleaning up dead shim" Feb 9 14:15:22.670528 env[1480]: time="2024-02-09T14:15:22.670461909Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4972 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T14:15:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 14:15:22.670742 env[1480]: time="2024-02-09T14:15:22.670656202Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Feb 9 14:15:22.670869 env[1480]: time="2024-02-09T14:15:22.670837890Z" level=error msg="Failed to pipe stdout of container \"f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76\"" error="reading from a closed fifo" Feb 9 14:15:22.670924 env[1480]: time="2024-02-09T14:15:22.670853771Z" level=error msg="Failed to pipe stderr of container \"f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76\"" error="reading from a closed fifo" Feb 9 14:15:22.671511 env[1480]: time="2024-02-09T14:15:22.671476236Z" level=error msg="StartContainer for 
\"f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 14:15:22.671703 kubelet[2523]: E0209 14:15:22.671684 2523 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76" Feb 9 14:15:22.671821 kubelet[2523]: E0209 14:15:22.671810 2523 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 14:15:22.671821 kubelet[2523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 14:15:22.671821 kubelet[2523]: rm /hostbin/cilium-mount Feb 9 14:15:22.671937 kubelet[2523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k855l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-szsz7_kube-system(c6681cb8-0129-4e9c-9fce-00d659eca2e2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 14:15:22.671937 kubelet[2523]: E0209 14:15:22.671856 2523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-szsz7" podUID="c6681cb8-0129-4e9c-9fce-00d659eca2e2" Feb 9 14:15:23.048793 env[1480]: time="2024-02-09T14:15:23.048683284Z" level=info msg="StopPodSandbox for \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\"" Feb 9 14:15:23.049089 env[1480]: time="2024-02-09T14:15:23.048872748Z" level=info msg="Container to stop \"f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 14:15:23.074371 systemd[1]: cri-containerd-d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a.scope: Deactivated successfully. Feb 9 14:15:23.128607 env[1480]: time="2024-02-09T14:15:23.128486307Z" level=info msg="shim disconnected" id=d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a Feb 9 14:15:23.129114 env[1480]: time="2024-02-09T14:15:23.128615942Z" level=warning msg="cleaning up after shim disconnected" id=d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a namespace=k8s.io Feb 9 14:15:23.129114 env[1480]: time="2024-02-09T14:15:23.128655190Z" level=info msg="cleaning up dead shim" Feb 9 14:15:23.158994 env[1480]: time="2024-02-09T14:15:23.158894993Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5002 runtime=io.containerd.runc.v2\n" Feb 9 14:15:23.159691 env[1480]: time="2024-02-09T14:15:23.159628263Z" level=info msg="TearDown network for sandbox \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\" successfully" Feb 9 14:15:23.159899 env[1480]: time="2024-02-09T14:15:23.159686824Z" level=info msg="StopPodSandbox for \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\" returns successfully" Feb 9 14:15:23.245364 kubelet[2523]: I0209 14:15:23.245238 2523 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-host-proc-sys-kernel\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.245364 kubelet[2523]: I0209 14:15:23.245360 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-bpf-maps\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245440 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6681cb8-0129-4e9c-9fce-00d659eca2e2-clustermesh-secrets\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245436 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245495 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cni-path\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245466 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245550 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-run\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245607 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-cgroup\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245605 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245621 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245669 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-config-path\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245723 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-etc-cni-netd\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245709 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245785 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-ipsec-secrets\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245797 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245850 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6681cb8-0129-4e9c-9fce-00d659eca2e2-hubble-tls\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.246752 kubelet[2523]: I0209 14:15:23.245905 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-xtables-lock\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.245957 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-hostproc\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246025 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-k855l\" (UniqueName: \"kubernetes.io/projected/c6681cb8-0129-4e9c-9fce-00d659eca2e2-kube-api-access-k855l\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246083 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-host-proc-sys-net\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246088 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246105 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246139 2523 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-lib-modules\") pod \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\" (UID: \"c6681cb8-0129-4e9c-9fce-00d659eca2e2\") " Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246186 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246256 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246343 2523 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-lib-modules\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246409 2523 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246446 2523 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-bpf-maps\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246479 2523 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cni-path\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246509 2523 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-run\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246542 2523 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-cgroup\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246572 2523 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-etc-cni-netd\") on node 
\"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246602 2523 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-xtables-lock\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.248527 kubelet[2523]: I0209 14:15:23.246631 2523 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-hostproc\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.251679 kubelet[2523]: I0209 14:15:23.251580 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 14:15:23.252382 kubelet[2523]: I0209 14:15:23.252276 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6681cb8-0129-4e9c-9fce-00d659eca2e2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 14:15:23.252612 kubelet[2523]: I0209 14:15:23.252449 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6681cb8-0129-4e9c-9fce-00d659eca2e2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 14:15:23.252816 kubelet[2523]: I0209 14:15:23.252713 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 14:15:23.253205 kubelet[2523]: I0209 14:15:23.253094 2523 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6681cb8-0129-4e9c-9fce-00d659eca2e2-kube-api-access-k855l" (OuterVolumeSpecName: "kube-api-access-k855l") pod "c6681cb8-0129-4e9c-9fce-00d659eca2e2" (UID: "c6681cb8-0129-4e9c-9fce-00d659eca2e2"). InnerVolumeSpecName "kube-api-access-k855l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 14:15:23.347812 kubelet[2523]: I0209 14:15:23.347705 2523 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6681cb8-0129-4e9c-9fce-00d659eca2e2-clustermesh-secrets\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.347812 kubelet[2523]: I0209 14:15:23.347785 2523 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.347812 kubelet[2523]: I0209 14:15:23.347820 2523 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6681cb8-0129-4e9c-9fce-00d659eca2e2-hubble-tls\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.348396 kubelet[2523]: I0209 14:15:23.347856 2523 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c6681cb8-0129-4e9c-9fce-00d659eca2e2-cilium-config-path\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.348396 kubelet[2523]: I0209 14:15:23.347890 2523 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k855l\" (UniqueName: \"kubernetes.io/projected/c6681cb8-0129-4e9c-9fce-00d659eca2e2-kube-api-access-k855l\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.348396 kubelet[2523]: I0209 14:15:23.347923 2523 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6681cb8-0129-4e9c-9fce-00d659eca2e2-host-proc-sys-net\") on node \"ci-3510.3.2-a-80177560a3\" DevicePath \"\"" Feb 9 14:15:23.446681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a-rootfs.mount: Deactivated successfully. Feb 9 14:15:23.446950 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a-shm.mount: Deactivated successfully. Feb 9 14:15:23.447142 systemd[1]: var-lib-kubelet-pods-c6681cb8\x2d0129\x2d4e9c\x2d9fce\x2d00d659eca2e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk855l.mount: Deactivated successfully. Feb 9 14:15:23.447348 systemd[1]: var-lib-kubelet-pods-c6681cb8\x2d0129\x2d4e9c\x2d9fce\x2d00d659eca2e2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 14:15:23.447548 systemd[1]: var-lib-kubelet-pods-c6681cb8\x2d0129\x2d4e9c\x2d9fce\x2d00d659eca2e2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 14:15:23.447715 systemd[1]: var-lib-kubelet-pods-c6681cb8\x2d0129\x2d4e9c\x2d9fce\x2d00d659eca2e2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 14:15:24.054606 kubelet[2523]: I0209 14:15:24.054543 2523 scope.go:117] "RemoveContainer" containerID="f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76" Feb 9 14:15:24.057347 env[1480]: time="2024-02-09T14:15:24.057233526Z" level=info msg="RemoveContainer for \"f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76\"" Feb 9 14:15:24.060454 env[1480]: time="2024-02-09T14:15:24.060417846Z" level=info msg="RemoveContainer for \"f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76\" returns successfully" Feb 9 14:15:24.061212 systemd[1]: Removed slice kubepods-burstable-podc6681cb8_0129_4e9c_9fce_00d659eca2e2.slice. Feb 9 14:15:24.077614 kubelet[2523]: I0209 14:15:24.077591 2523 topology_manager.go:215] "Topology Admit Handler" podUID="b84206f5-1f89-42dc-a297-ea45a0400de1" podNamespace="kube-system" podName="cilium-gtvfj" Feb 9 14:15:24.077747 kubelet[2523]: E0209 14:15:24.077649 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c6681cb8-0129-4e9c-9fce-00d659eca2e2" containerName="mount-cgroup" Feb 9 14:15:24.077747 kubelet[2523]: I0209 14:15:24.077674 2523 memory_manager.go:346] "RemoveStaleState removing state" podUID="c6681cb8-0129-4e9c-9fce-00d659eca2e2" containerName="mount-cgroup" Feb 9 14:15:24.081179 systemd[1]: Created slice kubepods-burstable-podb84206f5_1f89_42dc_a297_ea45a0400de1.slice. 
Feb 9 14:15:24.152727 kubelet[2523]: I0209 14:15:24.152612 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxmw\" (UniqueName: \"kubernetes.io/projected/b84206f5-1f89-42dc-a297-ea45a0400de1-kube-api-access-4jxmw\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.152727 kubelet[2523]: I0209 14:15:24.152714 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-xtables-lock\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.153172 kubelet[2523]: I0209 14:15:24.152917 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-cilium-run\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.153172 kubelet[2523]: I0209 14:15:24.153024 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-lib-modules\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.153439 kubelet[2523]: I0209 14:15:24.153158 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b84206f5-1f89-42dc-a297-ea45a0400de1-hubble-tls\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.153439 kubelet[2523]: I0209 14:15:24.153290 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-hostproc\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.153786 kubelet[2523]: I0209 14:15:24.153442 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-etc-cni-netd\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.153786 kubelet[2523]: I0209 14:15:24.153553 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b84206f5-1f89-42dc-a297-ea45a0400de1-clustermesh-secrets\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.153786 kubelet[2523]: I0209 14:15:24.153750 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b84206f5-1f89-42dc-a297-ea45a0400de1-cilium-ipsec-secrets\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.154292 kubelet[2523]: I0209 14:15:24.153914 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-cilium-cgroup\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.154292 kubelet[2523]: I0209 14:15:24.154010 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-cni-path\") pod 
\"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.154292 kubelet[2523]: I0209 14:15:24.154105 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-bpf-maps\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.154292 kubelet[2523]: I0209 14:15:24.154201 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b84206f5-1f89-42dc-a297-ea45a0400de1-cilium-config-path\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.154292 kubelet[2523]: I0209 14:15:24.154285 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-host-proc-sys-net\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.155017 kubelet[2523]: I0209 14:15:24.154554 2523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b84206f5-1f89-42dc-a297-ea45a0400de1-host-proc-sys-kernel\") pod \"cilium-gtvfj\" (UID: \"b84206f5-1f89-42dc-a297-ea45a0400de1\") " pod="kube-system/cilium-gtvfj" Feb 9 14:15:24.384296 env[1480]: time="2024-02-09T14:15:24.384168443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gtvfj,Uid:b84206f5-1f89-42dc-a297-ea45a0400de1,Namespace:kube-system,Attempt:0,}" Feb 9 14:15:24.399934 env[1480]: time="2024-02-09T14:15:24.399875947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:15:24.399934 env[1480]: time="2024-02-09T14:15:24.399896555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:15:24.399934 env[1480]: time="2024-02-09T14:15:24.399903601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:15:24.400026 env[1480]: time="2024-02-09T14:15:24.399961853Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8 pid=5029 runtime=io.containerd.runc.v2 Feb 9 14:15:24.418407 systemd[1]: Started cri-containerd-1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8.scope. Feb 9 14:15:24.444172 env[1480]: time="2024-02-09T14:15:24.444112624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gtvfj,Uid:b84206f5-1f89-42dc-a297-ea45a0400de1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\"" Feb 9 14:15:24.446167 env[1480]: time="2024-02-09T14:15:24.446137840Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 14:15:24.453085 env[1480]: time="2024-02-09T14:15:24.453025180Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe\"" Feb 9 14:15:24.453470 env[1480]: time="2024-02-09T14:15:24.453398472Z" level=info msg="StartContainer for \"fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe\"" Feb 9 14:15:24.495296 systemd[1]: Started 
cri-containerd-fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe.scope. Feb 9 14:15:24.549387 env[1480]: time="2024-02-09T14:15:24.549329755Z" level=info msg="StartContainer for \"fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe\" returns successfully" Feb 9 14:15:24.561431 systemd[1]: cri-containerd-fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe.scope: Deactivated successfully. Feb 9 14:15:24.580905 kubelet[2523]: I0209 14:15:24.580869 2523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c6681cb8-0129-4e9c-9fce-00d659eca2e2" path="/var/lib/kubelet/pods/c6681cb8-0129-4e9c-9fce-00d659eca2e2/volumes" Feb 9 14:15:24.600891 env[1480]: time="2024-02-09T14:15:24.600827778Z" level=info msg="shim disconnected" id=fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe Feb 9 14:15:24.600891 env[1480]: time="2024-02-09T14:15:24.600890596Z" level=warning msg="cleaning up after shim disconnected" id=fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe namespace=k8s.io Feb 9 14:15:24.601274 env[1480]: time="2024-02-09T14:15:24.600906450Z" level=info msg="cleaning up dead shim" Feb 9 14:15:24.610730 env[1480]: time="2024-02-09T14:15:24.610653215Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5112 runtime=io.containerd.runc.v2\n" Feb 9 14:15:25.065301 env[1480]: time="2024-02-09T14:15:25.065198013Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 14:15:25.079883 env[1480]: time="2024-02-09T14:15:25.079761964Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84\"" Feb 9 14:15:25.080764 env[1480]: time="2024-02-09T14:15:25.080677272Z" level=info msg="StartContainer for \"1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84\"" Feb 9 14:15:25.105653 systemd[1]: Started cri-containerd-1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84.scope. Feb 9 14:15:25.130175 env[1480]: time="2024-02-09T14:15:25.130148009Z" level=info msg="StartContainer for \"1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84\" returns successfully" Feb 9 14:15:25.133763 systemd[1]: cri-containerd-1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84.scope: Deactivated successfully. Feb 9 14:15:25.158226 env[1480]: time="2024-02-09T14:15:25.158181239Z" level=info msg="shim disconnected" id=1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84 Feb 9 14:15:25.158226 env[1480]: time="2024-02-09T14:15:25.158226647Z" level=warning msg="cleaning up after shim disconnected" id=1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84 namespace=k8s.io Feb 9 14:15:25.158443 env[1480]: time="2024-02-09T14:15:25.158238094Z" level=info msg="cleaning up dead shim" Feb 9 14:15:25.165638 env[1480]: time="2024-02-09T14:15:25.165592975Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5171 runtime=io.containerd.runc.v2\n" Feb 9 14:15:25.448309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe-rootfs.mount: Deactivated successfully. 
Feb 9 14:15:25.746971 kubelet[2523]: E0209 14:15:25.746773 2523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 14:15:25.760430 kubelet[2523]: W0209 14:15:25.760273 2523 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6681cb8_0129_4e9c_9fce_00d659eca2e2.slice/cri-containerd-f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76.scope WatchSource:0}: container "f619be65010fd30f65e91495a5163c3dcc19af3cd4b3160e297096828331bb76" in namespace "k8s.io": not found Feb 9 14:15:26.072810 env[1480]: time="2024-02-09T14:15:26.072570511Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 14:15:26.084962 env[1480]: time="2024-02-09T14:15:26.084942527Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59\"" Feb 9 14:15:26.085384 env[1480]: time="2024-02-09T14:15:26.085356271Z" level=info msg="StartContainer for \"747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59\"" Feb 9 14:15:26.107945 systemd[1]: Started cri-containerd-747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59.scope. Feb 9 14:15:26.134197 env[1480]: time="2024-02-09T14:15:26.134164635Z" level=info msg="StartContainer for \"747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59\" returns successfully" Feb 9 14:15:26.135537 systemd[1]: cri-containerd-747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59.scope: Deactivated successfully. 
Feb 9 14:15:26.169357 env[1480]: time="2024-02-09T14:15:26.169268312Z" level=info msg="shim disconnected" id=747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59 Feb 9 14:15:26.169357 env[1480]: time="2024-02-09T14:15:26.169355829Z" level=warning msg="cleaning up after shim disconnected" id=747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59 namespace=k8s.io Feb 9 14:15:26.169668 env[1480]: time="2024-02-09T14:15:26.169373624Z" level=info msg="cleaning up dead shim" Feb 9 14:15:26.193376 env[1480]: time="2024-02-09T14:15:26.193260894Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5227 runtime=io.containerd.runc.v2\n" Feb 9 14:15:26.450204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59-rootfs.mount: Deactivated successfully. Feb 9 14:15:27.081363 env[1480]: time="2024-02-09T14:15:27.081240235Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 14:15:27.098276 env[1480]: time="2024-02-09T14:15:27.098253337Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91\"" Feb 9 14:15:27.098691 env[1480]: time="2024-02-09T14:15:27.098620413Z" level=info msg="StartContainer for \"e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91\"" Feb 9 14:15:27.120066 systemd[1]: Started cri-containerd-e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91.scope. 
Feb 9 14:15:27.153099 env[1480]: time="2024-02-09T14:15:27.152999363Z" level=info msg="StartContainer for \"e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91\" returns successfully"
Feb 9 14:15:27.153250 systemd[1]: cri-containerd-e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91.scope: Deactivated successfully.
Feb 9 14:15:27.238437 env[1480]: time="2024-02-09T14:15:27.238240616Z" level=info msg="shim disconnected" id=e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91
Feb 9 14:15:27.238437 env[1480]: time="2024-02-09T14:15:27.238399922Z" level=warning msg="cleaning up after shim disconnected" id=e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91 namespace=k8s.io
Feb 9 14:15:27.238437 env[1480]: time="2024-02-09T14:15:27.238433211Z" level=info msg="cleaning up dead shim"
Feb 9 14:15:27.256166 env[1480]: time="2024-02-09T14:15:27.256057186Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:15:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5281 runtime=io.containerd.runc.v2\n"
Feb 9 14:15:27.450658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91-rootfs.mount: Deactivated successfully.
Feb 9 14:15:28.091804 env[1480]: time="2024-02-09T14:15:28.091681708Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 14:15:28.104824 env[1480]: time="2024-02-09T14:15:28.104772181Z" level=info msg="CreateContainer within sandbox \"1519639115c0c71fb62661b553a701661171a7de6470955fc9de1daf96d72fc8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4cd4349838248572a566ed42a2260a8309fb9711c34814fd4077f554722d31ec\""
Feb 9 14:15:28.105194 env[1480]: time="2024-02-09T14:15:28.105178857Z" level=info msg="StartContainer for \"4cd4349838248572a566ed42a2260a8309fb9711c34814fd4077f554722d31ec\""
Feb 9 14:15:28.127200 systemd[1]: Started cri-containerd-4cd4349838248572a566ed42a2260a8309fb9711c34814fd4077f554722d31ec.scope.
Feb 9 14:15:28.158325 env[1480]: time="2024-02-09T14:15:28.158281231Z" level=info msg="StartContainer for \"4cd4349838248572a566ed42a2260a8309fb9711c34814fd4077f554722d31ec\" returns successfully"
Feb 9 14:15:28.322317 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 14:15:28.876925 kubelet[2523]: W0209 14:15:28.876857 2523 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84206f5_1f89_42dc_a297_ea45a0400de1.slice/cri-containerd-fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe.scope WatchSource:0}: task fed986a07afcdf3ac894eeb527b7711f2f5ec1a5d3fe7a6bc91c5b3a00a2b9fe not found: not found
Feb 9 14:15:29.114274 kubelet[2523]: I0209 14:15:29.114257 2523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gtvfj" podStartSLOduration=5.114235523 podCreationTimestamp="2024-02-09 14:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 14:15:29.114084328 +0000 UTC m=+488.589563320" watchObservedRunningTime="2024-02-09 14:15:29.114235523 +0000 UTC m=+488.589714510"
Feb 9 14:15:31.270794 systemd-networkd[1327]: lxc_health: Link UP
Feb 9 14:15:31.293140 systemd-networkd[1327]: lxc_health: Gained carrier
Feb 9 14:15:31.293312 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 14:15:31.986200 kubelet[2523]: W0209 14:15:31.986173 2523 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84206f5_1f89_42dc_a297_ea45a0400de1.slice/cri-containerd-1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84.scope WatchSource:0}: task 1de53d565bc718fad6876a635faa9f5b63f01fe8528c3275b5c7d0da4d7bdf84 not found: not found
Feb 9 14:15:32.723458 systemd-networkd[1327]: lxc_health: Gained IPv6LL
Feb 9 14:15:35.092872 kubelet[2523]: W0209 14:15:35.092756 2523 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84206f5_1f89_42dc_a297_ea45a0400de1.slice/cri-containerd-747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59.scope WatchSource:0}: task 747a017ce0216cdc6334bb1e076b5caff8a2f062d0c3695c1f5883cd1cdbea59 not found: not found
Feb 9 14:15:38.204457 kubelet[2523]: W0209 14:15:38.204380 2523 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84206f5_1f89_42dc_a297_ea45a0400de1.slice/cri-containerd-e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91.scope WatchSource:0}: task e26f7d0050faf416a4ff0346017423b96f877541a608ae767e102134f749ba91 not found: not found
Feb 9 14:15:55.385164 systemd[1]: Started sshd@30-139.178.88.165:22-2.57.122.87:39626.service.
Feb 9 14:15:57.076837 sshd[6033]: Invalid user kdwang from 2.57.122.87 port 39626
Feb 9 14:15:57.402361 sshd[6033]: pam_faillock(sshd:auth): User unknown
Feb 9 14:15:57.403516 sshd[6033]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:15:57.403610 sshd[6033]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=2.57.122.87
Feb 9 14:15:57.404589 sshd[6033]: pam_faillock(sshd:auth): User unknown
Feb 9 14:15:59.601332 sshd[6033]: Failed password for invalid user kdwang from 2.57.122.87 port 39626 ssh2
Feb 9 14:16:01.062523 sshd[6033]: Connection closed by invalid user kdwang 2.57.122.87 port 39626 [preauth]
Feb 9 14:16:01.065098 systemd[1]: sshd@30-139.178.88.165:22-2.57.122.87:39626.service: Deactivated successfully.
Feb 9 14:16:20.656163 env[1480]: time="2024-02-09T14:16:20.655998384Z" level=info msg="StopPodSandbox for \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\""
Feb 9 14:16:20.657165 env[1480]: time="2024-02-09T14:16:20.656239263Z" level=info msg="TearDown network for sandbox \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\" successfully"
Feb 9 14:16:20.657165 env[1480]: time="2024-02-09T14:16:20.656369879Z" level=info msg="StopPodSandbox for \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\" returns successfully"
Feb 9 14:16:20.657165 env[1480]: time="2024-02-09T14:16:20.657049494Z" level=info msg="RemovePodSandbox for \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\""
Feb 9 14:16:20.657554 env[1480]: time="2024-02-09T14:16:20.657127844Z" level=info msg="Forcibly stopping sandbox \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\""
Feb 9 14:16:20.657554 env[1480]: time="2024-02-09T14:16:20.657345773Z" level=info msg="TearDown network for sandbox \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\" successfully"
Feb 9 14:16:20.661653 env[1480]: time="2024-02-09T14:16:20.661577717Z" level=info msg="RemovePodSandbox \"d30f5744d3ac0451f1e14e6f5fe318a561f3e9c8481373d1fde633e88ddea34a\" returns successfully"
Feb 9 14:16:34.852950 sshd[4884]: pam_unix(sshd:session): session closed for user core
Feb 9 14:16:34.858927 systemd[1]: sshd@29-139.178.88.165:22-147.75.109.163:36430.service: Deactivated successfully.
Feb 9 14:16:34.860937 systemd[1]: session-30.scope: Deactivated successfully.
Feb 9 14:16:34.862812 systemd-logind[1468]: Session 30 logged out. Waiting for processes to exit.
Feb 9 14:16:34.865165 systemd-logind[1468]: Removed session 30.