Dec 13 16:02:48.569315 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 16:02:48.569328 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 16:02:48.569335 kernel: BIOS-provided physical RAM map:
Dec 13 16:02:48.569339 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 16:02:48.569342 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 16:02:48.569346 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 16:02:48.569350 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 16:02:48.569357 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 16:02:48.569361 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2afff] usable
Dec 13 16:02:48.569364 kernel: BIOS-e820: [mem 0x0000000081b2b000-0x0000000081b2bfff] ACPI NVS
Dec 13 16:02:48.569369 kernel: BIOS-e820: [mem 0x0000000081b2c000-0x0000000081b2cfff] reserved
Dec 13 16:02:48.569373 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x000000008afccfff] usable
Dec 13 16:02:48.569377 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Dec 13 16:02:48.569394 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Dec 13 16:02:48.569399 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Dec 13 16:02:48.569403 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Dec 13 16:02:48.569407 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Dec 13 16:02:48.569412 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Dec 13 16:02:48.569416 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 16:02:48.569420 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 16:02:48.569424 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 16:02:48.569428 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 16:02:48.569432 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 16:02:48.569436 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Dec 13 16:02:48.569440 kernel: NX (Execute Disable) protection: active
Dec 13 16:02:48.569444 kernel: SMBIOS 3.2.1 present.
Dec 13 16:02:48.569449 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Dec 13 16:02:48.569453 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 16:02:48.569457 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 16:02:48.569461 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 16:02:48.569466 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 16:02:48.569470 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Dec 13 16:02:48.569474 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 16:02:48.569479 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 16:02:48.569483 kernel: Using GB pages for direct mapping
Dec 13 16:02:48.569487 kernel: ACPI: Early table checksum verification disabled
Dec 13 16:02:48.569492 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 16:02:48.569496 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 16:02:48.569501 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Dec 13 16:02:48.569505 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 16:02:48.569511 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 16:02:48.569515 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Dec 13 16:02:48.569521 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Dec 13 16:02:48.569525 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 16:02:48.569530 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 16:02:48.569535 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 16:02:48.569539 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 16:02:48.569544 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 16:02:48.569548 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 16:02:48.569553 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:02:48.569558 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 16:02:48.569563 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 16:02:48.569567 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:02:48.569572 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:02:48.569576 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 16:02:48.569581 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 16:02:48.569585 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:02:48.569590 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:02:48.569595 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 16:02:48.569600 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Dec 13 16:02:48.569604 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 16:02:48.569609 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 16:02:48.569613 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 16:02:48.569618 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Dec 13 16:02:48.569622 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 16:02:48.569627 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 16:02:48.569631 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 16:02:48.569637 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 16:02:48.569641 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 16:02:48.569646 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 16:02:48.569650 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 16:02:48.569655 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 16:02:48.569659 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 16:02:48.569664 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 16:02:48.569668 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 16:02:48.569674 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 16:02:48.569678 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 16:02:48.569683 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 16:02:48.569687 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 16:02:48.569692 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 16:02:48.569696 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 16:02:48.569701 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 16:02:48.569705 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 16:02:48.569710 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 16:02:48.569715 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 16:02:48.569719 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 16:02:48.569724 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 16:02:48.569728 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 16:02:48.569733 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 16:02:48.569737 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 16:02:48.569742 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 16:02:48.569746 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 16:02:48.569751 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 16:02:48.569756 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 16:02:48.569761 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 16:02:48.569765 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 16:02:48.569770 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 16:02:48.569774 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 16:02:48.569779 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 16:02:48.569783 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 16:02:48.569788 kernel: No NUMA configuration found
Dec 13 16:02:48.569793 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 16:02:48.569798 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 16:02:48.569802 kernel: Zone ranges:
Dec 13 16:02:48.569807 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 16:02:48.569812 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 16:02:48.569816 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Dec 13 16:02:48.569821 kernel: Movable zone start for each node
Dec 13 16:02:48.569825 kernel: Early memory node ranges
Dec 13 16:02:48.569830 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 16:02:48.569834 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 16:02:48.569839 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2afff]
Dec 13 16:02:48.569844 kernel: node 0: [mem 0x0000000081b2d000-0x000000008afccfff]
Dec 13 16:02:48.569849 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 16:02:48.569853 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 16:02:48.569858 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 16:02:48.569862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 16:02:48.569867 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 16:02:48.569874 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 16:02:48.569880 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 16:02:48.569885 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 16:02:48.569890 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 16:02:48.569895 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 16:02:48.569900 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 16:02:48.569905 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 16:02:48.569910 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 16:02:48.569915 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 16:02:48.569920 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 16:02:48.569925 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 16:02:48.569930 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 16:02:48.569935 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 16:02:48.569940 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 16:02:48.569945 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 16:02:48.569949 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 16:02:48.569954 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 16:02:48.569959 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 16:02:48.569964 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 16:02:48.569969 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 16:02:48.569974 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 16:02:48.569979 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 16:02:48.569984 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 16:02:48.569989 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 16:02:48.569993 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 16:02:48.569998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 16:02:48.570003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 16:02:48.570008 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 16:02:48.570013 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 16:02:48.570019 kernel: TSC deadline timer available
Dec 13 16:02:48.570024 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 16:02:48.570029 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 16:02:48.570034 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 16:02:48.570039 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 16:02:48.570043 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 16:02:48.570048 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 16:02:48.570053 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 16:02:48.570058 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 16:02:48.570063 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Dec 13 16:02:48.570068 kernel: Policy zone: Normal
Dec 13 16:02:48.570074 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 16:02:48.570079 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 16:02:48.570084 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 16:02:48.570089 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 16:02:48.570093 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 16:02:48.570098 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 730116K reserved, 0K cma-reserved)
Dec 13 16:02:48.570104 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 16:02:48.570109 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 16:02:48.570114 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 16:02:48.570119 kernel: rcu: Hierarchical RCU implementation.
Dec 13 16:02:48.570124 kernel: rcu: RCU event tracing is enabled.
Dec 13 16:02:48.570129 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 16:02:48.570134 kernel: Rude variant of Tasks RCU enabled.
Dec 13 16:02:48.570139 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 16:02:48.570144 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 16:02:48.570149 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 16:02:48.570154 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 16:02:48.570159 kernel: random: crng init done
Dec 13 16:02:48.570164 kernel: Console: colour dummy device 80x25
Dec 13 16:02:48.570169 kernel: printk: console [tty0] enabled
Dec 13 16:02:48.570174 kernel: printk: console [ttyS1] enabled
Dec 13 16:02:48.570179 kernel: ACPI: Core revision 20210730
Dec 13 16:02:48.570184 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 16:02:48.570188 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 16:02:48.570194 kernel: DMAR: Host address width 39
Dec 13 16:02:48.570199 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 16:02:48.570204 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 16:02:48.570209 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 16:02:48.570214 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Dec 13 16:02:48.570219 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 16:02:48.570224 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 16:02:48.570228 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 16:02:48.570233 kernel: x2apic enabled
Dec 13 16:02:48.570239 kernel: Switched APIC routing to cluster x2apic.
Dec 13 16:02:48.570244 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 16:02:48.570249 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 16:02:48.570254 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 16:02:48.570258 kernel: process: using mwait in idle threads
Dec 13 16:02:48.570263 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 16:02:48.570268 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 16:02:48.570273 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 16:02:48.570278 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 16:02:48.570283 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 16:02:48.570288 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 16:02:48.570293 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 16:02:48.570298 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 16:02:48.570303 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 16:02:48.570307 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 16:02:48.570312 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 16:02:48.570317 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 16:02:48.570322 kernel: TAA: Mitigation: TSX disabled
Dec 13 16:02:48.570327 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 16:02:48.570331 kernel: SRBDS: Mitigation: Microcode
Dec 13 16:02:48.570337 kernel: GDS: Vulnerable: No microcode
Dec 13 16:02:48.570342 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 16:02:48.570347 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 16:02:48.570353 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 16:02:48.570374 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 16:02:48.570379 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 16:02:48.570384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 16:02:48.570389 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 16:02:48.570393 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 16:02:48.570412 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 16:02:48.570417 kernel: Freeing SMP alternatives memory: 32K
Dec 13 16:02:48.570421 kernel: pid_max: default: 32768 minimum: 301
Dec 13 16:02:48.570427 kernel: LSM: Security Framework initializing
Dec 13 16:02:48.570432 kernel: SELinux: Initializing.
Dec 13 16:02:48.570437 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 16:02:48.570442 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 16:02:48.570446 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 16:02:48.570451 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 16:02:48.570456 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 16:02:48.570461 kernel: ... version: 4
Dec 13 16:02:48.570466 kernel: ... bit width: 48
Dec 13 16:02:48.570471 kernel: ... generic registers: 4
Dec 13 16:02:48.570476 kernel: ... value mask: 0000ffffffffffff
Dec 13 16:02:48.570481 kernel: ... max period: 00007fffffffffff
Dec 13 16:02:48.570486 kernel: ... fixed-purpose events: 3
Dec 13 16:02:48.570491 kernel: ... event mask: 000000070000000f
Dec 13 16:02:48.570496 kernel: signal: max sigframe size: 2032
Dec 13 16:02:48.570500 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 16:02:48.570505 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 16:02:48.570510 kernel: smp: Bringing up secondary CPUs ...
Dec 13 16:02:48.570515 kernel: x86: Booting SMP configuration:
Dec 13 16:02:48.570521 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 16:02:48.570526 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 16:02:48.570531 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 16:02:48.570535 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 16:02:48.570540 kernel: smpboot: Max logical packages: 1
Dec 13 16:02:48.570545 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 16:02:48.570550 kernel: devtmpfs: initialized
Dec 13 16:02:48.570555 kernel: x86/mm: Memory block size: 128MB
Dec 13 16:02:48.570560 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2b000-0x81b2bfff] (4096 bytes)
Dec 13 16:02:48.570565 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 16:02:48.570570 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 16:02:48.570575 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 16:02:48.570580 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 16:02:48.570585 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 16:02:48.570590 kernel: audit: initializing netlink subsys (disabled)
Dec 13 16:02:48.570595 kernel: audit: type=2000 audit(1734105763.041:1): state=initialized audit_enabled=0 res=1
Dec 13 16:02:48.570600 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 16:02:48.570604 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 16:02:48.570610 kernel: cpuidle: using governor menu
Dec 13 16:02:48.570615 kernel: ACPI: bus type PCI registered
Dec 13 16:02:48.570620 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 16:02:48.570625 kernel: dca service started, version 1.12.1
Dec 13 16:02:48.570630 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 16:02:48.570635 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 16:02:48.570639 kernel: PCI: Using configuration type 1 for base access
Dec 13 16:02:48.570644 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 16:02:48.570649 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 16:02:48.570655 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 16:02:48.570660 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 16:02:48.570664 kernel: ACPI: Added _OSI(Module Device)
Dec 13 16:02:48.570669 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 16:02:48.570674 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 16:02:48.570679 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 16:02:48.570684 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 16:02:48.570689 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 16:02:48.570694 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 16:02:48.570699 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 16:02:48.570704 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:02:48.570709 kernel: ACPI: SSDT 0xFFFF90E180218F00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 16:02:48.570714 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 16:02:48.570719 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:02:48.570723 kernel: ACPI: SSDT 0xFFFF90E181AE4800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 16:02:48.570728 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:02:48.570733 kernel: ACPI: SSDT 0xFFFF90E181A5C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 16:02:48.570738 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:02:48.570743 kernel: ACPI: SSDT 0xFFFF90E181B4F800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 16:02:48.570748 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:02:48.570753 kernel: ACPI: SSDT 0xFFFF90E18014C000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 16:02:48.570758 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:02:48.570763 kernel: ACPI: SSDT 0xFFFF90E181AE0800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 16:02:48.570768 kernel: ACPI: Interpreter enabled
Dec 13 16:02:48.570772 kernel: ACPI: PM: (supports S0 S5)
Dec 13 16:02:48.570777 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 16:02:48.570782 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 16:02:48.570787 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 16:02:48.570792 kernel: HEST: Table parsing has been initialized.
Dec 13 16:02:48.570797 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 16:02:48.570802 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 16:02:48.570807 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 16:02:48.570812 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 16:02:48.570817 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 16:02:48.570822 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 16:02:48.570826 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 16:02:48.570831 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 16:02:48.570837 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 16:02:48.570841 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 16:02:48.570846 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 16:02:48.570851 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 16:02:48.570856 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 16:02:48.570861 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 16:02:48.570865 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 16:02:48.570929 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 16:02:48.570976 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 16:02:48.571017 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 16:02:48.571024 kernel: PCI host bridge to bus 0000:00
Dec 13 16:02:48.571067 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 16:02:48.571105 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 16:02:48.571142 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 16:02:48.571179 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 16:02:48.571216 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 16:02:48.571253 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 16:02:48.571304 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 16:02:48.571356 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 16:02:48.571430 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 16:02:48.571477 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 16:02:48.571522 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 16:02:48.571567 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 16:02:48.571609 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 16:02:48.571656 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 16:02:48.571698 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 16:02:48.571741 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 16:02:48.571787 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 16:02:48.571829 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 16:02:48.571869 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 16:02:48.571915 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 16:02:48.571957 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 16:02:48.572004 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 16:02:48.572047 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 16:02:48.572092 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 16:02:48.572133 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 16:02:48.572174 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 16:02:48.572218 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 16:02:48.572259 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 16:02:48.572300 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 16:02:48.572346 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 16:02:48.572426 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 16:02:48.572466 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 16:02:48.572511 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 16:02:48.572552 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 16:02:48.572596 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 16:02:48.572643 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Dec 13 16:02:48.572686 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Dec 13 16:02:48.572728 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Dec 13 16:02:48.572769 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 16:02:48.572810 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 16:02:48.572856 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 16:02:48.572898 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 16:02:48.572944 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 16:02:48.572987 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 16:02:48.573035 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 16:02:48.573076 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 16:02:48.573122 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 16:02:48.573164 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 16:02:48.573213 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 16:02:48.573255 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 16:02:48.573301 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 16:02:48.573343 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 16:02:48.573391 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 16:02:48.573437 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 16:02:48.573478 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 16:02:48.573522 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 16:02:48.573569 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 16:02:48.573611 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 16:02:48.573659 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 16:02:48.573704 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 16:02:48.573748 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 16:02:48.573791 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 16:02:48.573834 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 16:02:48.573876 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 16:02:48.573925 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 16:02:48.573970 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Dec 13 16:02:48.574013 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Dec 13 16:02:48.574055 kernel: pci 0000:01:00.1: PME# supported from D3cold
Dec 13 16:02:48.574098 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 16:02:48.574141 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 16:02:48.574184 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 16:02:48.574225 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Dec 13 16:02:48.574267 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 16:02:48.574311 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 16:02:48.574361 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Dec 13 16:02:48.574405 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Dec 13 16:02:48.574449 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Dec 13 16:02:48.574491 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Dec 13 16:02:48.574534 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Dec 13 16:02:48.574578 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 16:02:48.574622 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 16:02:48.574664 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 16:02:48.574706 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Dec 13 16:02:48.574752 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Dec 13 16:02:48.574796 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Dec 13 16:02:48.574839 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Dec 13 16:02:48.574882 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Dec 13 16:02:48.574926 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Dec 13 16:02:48.574969 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Dec 13 16:02:48.575011 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 16:02:48.575055 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 16:02:48.575149 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Dec 13 16:02:48.575191 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 16:02:48.575237 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 16:02:48.575281 kernel: pci 0000:06:00.0: enabling Extended Tags
Dec 13 16:02:48.575325 kernel: pci 0000:06:00.0: supports D1 D2
Dec 13 16:02:48.575387 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 16:02:48.575449 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 16:02:48.575491 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Dec 13 16:02:48.575533 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Dec 13 16:02:48.575580 kernel: pci_bus 0000:07: extended config space not accessible
Dec 13 16:02:48.575630 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 16:02:48.575678 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Dec 13 16:02:48.575724 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Dec 13 16:02:48.575769 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Dec 13 16:02:48.575814 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 16:02:48.575859 kernel: pci 0000:07:00.0: supports D1 D2
Dec 13 16:02:48.575904 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 16:02:48.575947 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 16:02:48.575990 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 16:02:48.576035 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Dec 13 16:02:48.576043 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 13 16:02:48.576048 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 13 16:02:48.576054 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 13 16:02:48.576059 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 13 16:02:48.576064 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 13 16:02:48.576069 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 13 16:02:48.576074 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 13 16:02:48.576081 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 13 16:02:48.576086 kernel: iommu: Default domain type: Translated
Dec 13 16:02:48.576092 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 16:02:48.576136 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Dec 13 16:02:48.576181 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 16:02:48.576227 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Dec 13 16:02:48.576234 kernel: vgaarb: loaded
Dec 13 16:02:48.576240 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 16:02:48.576245 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 16:02:48.576251 kernel: PTP clock support registered
Dec 13 16:02:48.576257 kernel: PCI: Using ACPI for IRQ routing
Dec 13 16:02:48.576262 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 16:02:48.576267 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Dec 13 16:02:48.576272 kernel: e820: reserve RAM buffer [mem 0x81b2b000-0x83ffffff]
Dec 13 16:02:48.576277 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Dec 13 16:02:48.576282 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Dec 13 16:02:48.576287 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Dec 13 16:02:48.576292 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Dec 13 16:02:48.576298 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 16:02:48.576304 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 16:02:48.576309 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 16:02:48.576314 kernel: pnp: PnP ACPI init
Dec 13 16:02:48.576359 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Dec 13 16:02:48.576439 kernel: pnp 00:02: [dma 0 disabled]
Dec 13 16:02:48.576481 kernel: pnp 00:03: [dma 0 disabled]
Dec 13 16:02:48.576525 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Dec 13 16:02:48.576564 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Dec 13 16:02:48.576603 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Dec 13 16:02:48.576645 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Dec 13 16:02:48.576683 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Dec 13 16:02:48.576721 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Dec 13 16:02:48.576760 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Dec 13 16:02:48.576797 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Dec 13 16:02:48.576835 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Dec 13 16:02:48.576872 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Dec 13 16:02:48.576909 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Dec 13 16:02:48.576950 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Dec 13 16:02:48.576987 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Dec 13 16:02:48.577026 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Dec 13 16:02:48.577064 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Dec 13 16:02:48.577100 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Dec 13 16:02:48.577137 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Dec 13 16:02:48.577175 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Dec 13 16:02:48.577215 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Dec 13 16:02:48.577222 kernel: pnp: PnP ACPI: found 10 devices
Dec 13 16:02:48.577229 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 16:02:48.577234 kernel: NET: Registered PF_INET protocol family
Dec 13 16:02:48.577239 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 16:02:48.577245 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 16:02:48.577250 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 16:02:48.577255 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 16:02:48.577260 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 16:02:48.577266 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Dec 13 16:02:48.577271 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 16:02:48.577277 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 16:02:48.577282 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 16:02:48.577287 kernel: NET: Registered PF_XDP protocol family
Dec 13 16:02:48.577329 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Dec 13 16:02:48.577391 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Dec 13 16:02:48.577452 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Dec 13 16:02:48.577497 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 16:02:48.577540 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 16:02:48.577586 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 16:02:48.577630 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 16:02:48.577672 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 16:02:48.577715 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Dec 13 16:02:48.577757 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 16:02:48.577799 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 16:02:48.577844 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 16:02:48.577886 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 16:02:48.577928 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Dec 13 16:02:48.577969 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 16:02:48.578011 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 16:02:48.578055 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Dec 13 16:02:48.578097 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 16:02:48.578143 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 16:02:48.578186 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 16:02:48.578230 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Dec 13 16:02:48.578271 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 16:02:48.578313 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Dec 13 16:02:48.578375 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Dec 13 16:02:48.578433 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 16:02:48.578470 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 16:02:48.578507 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 16:02:48.578545 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 16:02:48.578581 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Dec 13 16:02:48.578617 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Dec 13 16:02:48.578661 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Dec 13 16:02:48.578700 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 16:02:48.578746 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Dec 13 16:02:48.578787 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Dec 13 16:02:48.578829 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 16:02:48.578867 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Dec 13 16:02:48.578910 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Dec 13 16:02:48.578949 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 16:02:48.578991 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Dec 13 16:02:48.579032 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 16:02:48.579041 kernel: PCI: CLS 64 bytes, default 64
Dec 13 16:02:48.579047 kernel: DMAR: No ATSR found
Dec 13 16:02:48.579052 kernel: DMAR: No SATC found
Dec 13 16:02:48.579057 kernel: DMAR: dmar0: Using Queued invalidation
Dec 13 16:02:48.579099 kernel: pci 0000:00:00.0: Adding to iommu group 0
Dec 13 16:02:48.579143 kernel: pci 0000:00:01.0: Adding to iommu group 1
Dec 13 16:02:48.579184 kernel: pci 0000:00:08.0: Adding to iommu group 2
Dec 13 16:02:48.579227 kernel: pci 0000:00:12.0: Adding to iommu group 3
Dec 13 16:02:48.579268 kernel: pci 0000:00:14.0: Adding to iommu group 4
Dec 13 16:02:48.579311 kernel: pci 0000:00:14.2: Adding to iommu group 4
Dec 13 16:02:48.579354 kernel: pci 0000:00:15.0: Adding to iommu group 5
Dec 13 16:02:48.579396 kernel: pci 0000:00:15.1: Adding to iommu group 5
Dec 13 16:02:48.579438 kernel: pci 0000:00:16.0: Adding to iommu group 6
Dec 13 16:02:48.579479 kernel: pci 0000:00:16.1: Adding to iommu group 6
Dec 13 16:02:48.579521 kernel: pci 0000:00:16.4: Adding to iommu group 6
Dec 13 16:02:48.579561 kernel: pci 0000:00:17.0: Adding to iommu group 7
Dec 13 16:02:48.579603 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Dec 13 16:02:48.579646 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Dec 13 16:02:48.579688 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Dec 13 16:02:48.579730 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Dec 13 16:02:48.579771 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Dec 13 16:02:48.579814 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Dec 13 16:02:48.579854 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Dec 13 16:02:48.579896 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Dec 13 16:02:48.579936 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Dec 13 16:02:48.579981 kernel: pci 0000:01:00.0: Adding to iommu group 1
Dec 13 16:02:48.580024 kernel: pci 0000:01:00.1: Adding to iommu group 1
Dec 13 16:02:48.580067 kernel: pci 0000:03:00.0: Adding to iommu group 15
Dec 13 16:02:48.580111 kernel: pci 0000:04:00.0: Adding to iommu group 16
Dec 13 16:02:48.580153 kernel: pci 0000:06:00.0: Adding to iommu group 17
Dec 13 16:02:48.580198 kernel: pci 0000:07:00.0: Adding to iommu group 17
Dec 13 16:02:48.580206 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Dec 13 16:02:48.580211 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 16:02:48.580218 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Dec 13 16:02:48.580223 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Dec 13 16:02:48.580228 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Dec 13 16:02:48.580234 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Dec 13 16:02:48.580239 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Dec 13 16:02:48.580282 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Dec 13 16:02:48.580290 kernel: Initialise system trusted keyrings
Dec 13 16:02:48.580295 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Dec 13 16:02:48.580302 kernel: Key type asymmetric registered
Dec 13 16:02:48.580307 kernel: Asymmetric key parser 'x509' registered
Dec 13 16:02:48.580312 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 16:02:48.580318 kernel: io scheduler mq-deadline registered
Dec 13 16:02:48.580323 kernel: io scheduler kyber registered
Dec 13 16:02:48.580328 kernel: io scheduler bfq registered
Dec 13 16:02:48.580373 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Dec 13 16:02:48.580415 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Dec 13 16:02:48.580459 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Dec 13 16:02:48.580503 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Dec 13 16:02:48.580545 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Dec 13 16:02:48.580588 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Dec 13 16:02:48.580634 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Dec 13 16:02:48.580641 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Dec 13 16:02:48.580647 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Dec 13 16:02:48.580652 kernel: pstore: Registered erst as persistent store backend
Dec 13 16:02:48.580657 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 16:02:48.580664 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 16:02:48.580669 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 16:02:48.580674 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 16:02:48.580679 kernel: hpet_acpi_add: no address or irqs in _CRS
Dec 13 16:02:48.580724 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Dec 13 16:02:48.580731 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 16:02:48.580769 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Dec 13 16:02:48.580809 kernel: rtc_cmos rtc_cmos: registered as rtc0
Dec 13 16:02:48.580849 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T16:02:47 UTC (1734105767)
Dec 13 16:02:48.580887 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Dec 13 16:02:48.580894 kernel: fail to initialize ptp_kvm
Dec 13 16:02:48.580900 kernel: intel_pstate: Intel P-state driver initializing
Dec 13 16:02:48.580905 kernel: intel_pstate: Disabling energy efficiency optimization
Dec 13 16:02:48.580910 kernel: intel_pstate: HWP enabled
Dec 13 16:02:48.580915 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Dec 13 16:02:48.580920 kernel: vesafb: scrolling: redraw
Dec 13 16:02:48.580927 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Dec 13 16:02:48.580932 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000fc0c548a, using 768k, total 768k
Dec 13 16:02:48.580937 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 16:02:48.580942 kernel: fb0: VESA VGA frame buffer device
Dec 13 16:02:48.580948 kernel: NET: Registered PF_INET6 protocol family
Dec 13 16:02:48.580953 kernel: Segment Routing with IPv6
Dec 13 16:02:48.580958 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 16:02:48.580963 kernel: NET: Registered PF_PACKET protocol family
Dec 13 16:02:48.580968 kernel: Key type dns_resolver registered
Dec 13 16:02:48.580974 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Dec 13 16:02:48.580979 kernel: microcode: Microcode Update Driver: v2.2.
Dec 13 16:02:48.580985 kernel: IPI shorthand broadcast: enabled
Dec 13 16:02:48.580990 kernel: sched_clock: Marking stable (1680665715, 1339877615)->(4464560356, -1444017026)
Dec 13 16:02:48.580995 kernel: registered taskstats version 1
Dec 13 16:02:48.581000 kernel: Loading compiled-in X.509 certificates
Dec 13 16:02:48.581005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 16:02:48.581010 kernel: Key type .fscrypt registered
Dec 13 16:02:48.581015 kernel: Key type fscrypt-provisioning registered
Dec 13 16:02:48.581021 kernel: pstore: Using crash dump compression: deflate
Dec 13 16:02:48.581027 kernel: ima: Allocated hash algorithm: sha1
Dec 13 16:02:48.581032 kernel: ima: No architecture policies found
Dec 13 16:02:48.581037 kernel: clk: Disabling unused clocks
Dec 13 16:02:48.581042 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 16:02:48.581047 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 16:02:48.581052 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 16:02:48.581058 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 16:02:48.581063 kernel: Run /init as init process
Dec 13 16:02:48.581069 kernel: with arguments:
Dec 13 16:02:48.581074 kernel: /init
Dec 13 16:02:48.581079 kernel: with environment:
Dec 13 16:02:48.581084 kernel: HOME=/
Dec 13 16:02:48.581089 kernel: TERM=linux
Dec 13 16:02:48.581094 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 16:02:48.581101 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 16:02:48.581107 systemd[1]: Detected architecture x86-64.
Dec 13 16:02:48.581114 systemd[1]: Running in initrd.
Dec 13 16:02:48.581119 systemd[1]: No hostname configured, using default hostname.
Dec 13 16:02:48.581124 systemd[1]: Hostname set to .
Dec 13 16:02:48.581129 systemd[1]: Initializing machine ID from random generator.
Dec 13 16:02:48.581135 systemd[1]: Queued start job for default target initrd.target.
Dec 13 16:02:48.581140 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 16:02:48.581145 systemd[1]: Reached target cryptsetup.target.
Dec 13 16:02:48.581151 systemd[1]: Reached target paths.target.
Dec 13 16:02:48.581157 systemd[1]: Reached target slices.target.
Dec 13 16:02:48.581162 systemd[1]: Reached target swap.target.
Dec 13 16:02:48.581167 systemd[1]: Reached target timers.target.
Dec 13 16:02:48.581172 systemd[1]: Listening on iscsid.socket.
Dec 13 16:02:48.581178 systemd[1]: Listening on iscsiuio.socket.
Dec 13 16:02:48.581183 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 16:02:48.581189 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 16:02:48.581195 systemd[1]: Listening on systemd-journald.socket.
Dec 13 16:02:48.581201 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Dec 13 16:02:48.581206 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Dec 13 16:02:48.581211 kernel: clocksource: Switched to clocksource tsc
Dec 13 16:02:48.581217 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 16:02:48.581222 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 16:02:48.581227 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 16:02:48.581233 systemd[1]: Reached target sockets.target.
Dec 13 16:02:48.581238 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 16:02:48.581244 systemd[1]: Finished network-cleanup.service.
Dec 13 16:02:48.581249 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 16:02:48.581255 systemd[1]: Starting systemd-journald.service...
Dec 13 16:02:48.581260 systemd[1]: Starting systemd-modules-load.service...
Dec 13 16:02:48.581267 systemd-journald[268]: Journal started
Dec 13 16:02:48.581292 systemd-journald[268]: Runtime Journal (/run/log/journal/c248a1a53852426faa454e260f193b4c) is 8.0M, max 640.1M, 632.1M free.
Dec 13 16:02:48.583598 systemd-modules-load[269]: Inserted module 'overlay'
Dec 13 16:02:48.641442 kernel: audit: type=1334 audit(1734105768.588:2): prog-id=6 op=LOAD
Dec 13 16:02:48.641453 systemd[1]: Starting systemd-resolved.service...
Dec 13 16:02:48.641462 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 16:02:48.588000 audit: BPF prog-id=6 op=LOAD
Dec 13 16:02:48.675409 kernel: Bridge firewalling registered
Dec 13 16:02:48.675425 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 16:02:48.690027 systemd-modules-load[269]: Inserted module 'br_netfilter'
Dec 13 16:02:48.714430 systemd[1]: Started systemd-journald.service.
Dec 13 16:02:48.726462 kernel: SCSI subsystem initialized
Dec 13 16:02:48.692518 systemd-resolved[271]: Positive Trust Anchors:
Dec 13 16:02:48.692524 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 16:02:48.844456 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 16:02:48.844469 kernel: audit: type=1130 audit(1734105768.747:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.844478 kernel: device-mapper: uevent: version 1.0.3 Dec 13 16:02:48.844486 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 16:02:48.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.692544 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 16:02:48.918584 kernel: audit: type=1130 audit(1734105768.853:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.694098 systemd-resolved[271]: Defaulting to hostname 'linux'. Dec 13 16:02:48.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.748651 systemd[1]: Started systemd-resolved.service. Dec 13 16:02:49.021166 kernel: audit: type=1130 audit(1734105768.926:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:49.021177 kernel: audit: type=1130 audit(1734105768.978:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.844850 systemd-modules-load[269]: Inserted module 'dm_multipath' Dec 13 16:02:49.073727 kernel: audit: type=1130 audit(1734105769.029:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:49.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.853674 systemd[1]: Finished kmod-static-nodes.service. Dec 13 16:02:49.128062 kernel: audit: type=1130 audit(1734105769.082:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:02:49.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:48.927703 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 16:02:48.978658 systemd[1]: Finished systemd-modules-load.service. Dec 13 16:02:49.029655 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 16:02:49.082637 systemd[1]: Reached target nss-lookup.target. Dec 13 16:02:49.136973 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 16:02:49.157954 systemd[1]: Starting systemd-sysctl.service... Dec 13 16:02:49.158248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 16:02:49.161164 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 16:02:49.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:49.161938 systemd[1]: Finished systemd-sysctl.service. Dec 13 16:02:49.210574 kernel: audit: type=1130 audit(1734105769.159:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:49.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:49.222843 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 16:02:49.287453 kernel: audit: type=1130 audit(1734105769.222:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:49.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:49.278970 systemd[1]: Starting dracut-cmdline.service... Dec 13 16:02:49.302468 dracut-cmdline[293]: dracut-dracut-053 Dec 13 16:02:49.302468 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 16:02:49.302468 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 16:02:49.370427 kernel: Loading iSCSI transport class v2.0-870. Dec 13 16:02:49.370441 kernel: iscsi: registered transport (tcp) Dec 13 16:02:49.422424 kernel: iscsi: registered transport (qla4xxx) Dec 13 16:02:49.422444 kernel: QLogic iSCSI HBA Driver Dec 13 16:02:49.437960 systemd[1]: Finished dracut-cmdline.service. Dec 13 16:02:49.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:49.448066 systemd[1]: Starting dracut-pre-udev.service... 
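dracut-cmdline echoes back the kernel command line it will act on (the split of root=LABEL=ROOT into "root=LA" / "BEL=ROOT" is line-length wrapping in the original output, not a real parameter). On a booted system the same parameters can be read back from procfs; again a generic check rather than a step the initrd itself performs:

    # confirm the verity/root parameters the kernel was booted with
    cat /proc/cmdline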
Dec 13 16:02:49.503433 kernel: raid6: avx2x4 gen() 45669 MB/s Dec 13 16:02:49.538387 kernel: raid6: avx2x4 xor() 21372 MB/s Dec 13 16:02:49.573429 kernel: raid6: avx2x2 gen() 53687 MB/s Dec 13 16:02:49.608393 kernel: raid6: avx2x2 xor() 32136 MB/s Dec 13 16:02:49.642385 kernel: raid6: avx2x1 gen() 45131 MB/s Dec 13 16:02:49.676428 kernel: raid6: avx2x1 xor() 27926 MB/s Dec 13 16:02:49.710428 kernel: raid6: sse2x4 gen() 21358 MB/s Dec 13 16:02:49.744428 kernel: raid6: sse2x4 xor() 11978 MB/s Dec 13 16:02:49.778429 kernel: raid6: sse2x2 gen() 21681 MB/s Dec 13 16:02:49.812428 kernel: raid6: sse2x2 xor() 13398 MB/s Dec 13 16:02:49.846386 kernel: raid6: sse2x1 gen() 18304 MB/s Dec 13 16:02:49.897808 kernel: raid6: sse2x1 xor() 8915 MB/s Dec 13 16:02:49.897823 kernel: raid6: using algorithm avx2x2 gen() 53687 MB/s Dec 13 16:02:49.897831 kernel: raid6: .... xor() 32136 MB/s, rmw enabled Dec 13 16:02:49.915762 kernel: raid6: using avx2x2 recovery algorithm Dec 13 16:02:49.961384 kernel: xor: automatically using best checksumming function avx Dec 13 16:02:50.040430 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 16:02:50.045886 systemd[1]: Finished dracut-pre-udev.service. Dec 13 16:02:50.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:50.055000 audit: BPF prog-id=7 op=LOAD Dec 13 16:02:50.055000 audit: BPF prog-id=8 op=LOAD Dec 13 16:02:50.056392 systemd[1]: Starting systemd-udevd.service... Dec 13 16:02:50.064498 systemd-udevd[473]: Using default interface naming scheme 'v252'. Dec 13 16:02:50.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:50.069602 systemd[1]: Started systemd-udevd.service. Dec 13 16:02:50.111498 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Dec 13 16:02:50.087101 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 16:02:50.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:50.115276 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 16:02:50.128462 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 16:02:50.181290 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 16:02:50.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:50.208361 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 16:02:50.252340 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 16:02:50.252411 kernel: AES CTR mode by8 optimization enabled Dec 13 16:02:50.269361 kernel: ACPI: bus type USB registered Dec 13 16:02:50.269388 kernel: libata version 3.00 loaded. Dec 13 16:02:50.269396 kernel: usbcore: registered new interface driver usbfs Dec 13 16:02:50.286929 kernel: usbcore: registered new interface driver hub Dec 13 16:02:50.304559 kernel: usbcore: registered new device driver usb Dec 13 16:02:50.322361 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 16:02:50.356266 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
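The raid6 lines near the top of this stretch show the kernel benchmarking every available syndrome implementation and settling on avx2x2, the variant with the highest gen() throughput (53687 MB/s). Assuming the kernel ring buffer has not wrapped, the selection can be reviewed later with:

    dmesg | grep -i raid6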
Dec 13 16:02:50.397334 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Dec 13 16:02:51.358799 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 16:02:51.358866 kernel: pps pps0: new PPS source ptp0 Dec 13 16:02:51.358927 kernel: igb 0000:03:00.0: added PHC on eth0 Dec 13 16:02:51.358982 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 16:02:51.359033 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:74 Dec 13 16:02:51.359083 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Dec 13 16:02:51.359135 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 16:02:51.359185 kernel: ahci 0000:00:17.0: version 3.0 Dec 13 16:02:51.359234 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 16:02:51.359282 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Dec 13 16:02:51.359329 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Dec 13 16:02:51.359383 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Dec 13 16:02:51.359432 kernel: pps pps1: new PPS source ptp1 Dec 13 16:02:51.359491 kernel: igb 0000:04:00.0: added PHC on eth1 Dec 13 16:02:51.359544 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 16:02:51.359594 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:75 Dec 13 16:02:51.359645 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Dec 13 16:02:51.359694 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 16:02:51.359743 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Dec 13 16:02:51.359790 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Dec 13 16:02:51.359839 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 16:02:51.359886 kernel: scsi host0: ahci Dec 13 16:02:51.359939 kernel: scsi host1: ahci Dec 13 16:02:51.359989 kernel: scsi host2: ahci Dec 13 16:02:51.360039 kernel: scsi host3: ahci Dec 13 16:02:51.360088 kernel: scsi host4: ahci Dec 13 16:02:51.360135 kernel: scsi host5: ahci Dec 13 16:02:51.360185 kernel: scsi host6: ahci Dec 13 16:02:51.360235 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Dec 13 16:02:51.360243 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Dec 13 16:02:51.360249 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Dec 13 16:02:51.360256 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Dec 13 16:02:51.360262 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Dec 13 16:02:51.360269 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Dec 13 16:02:51.360275 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Dec 13 16:02:51.360281 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 16:02:51.360333 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Dec 13 16:02:51.360388 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 16:02:51.360396 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Dec 13 16:02:51.360443 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 16:02:51.360450 kernel: hub 1-0:1.0: USB hub found Dec 13 16:02:51.360509 kernel: ata5: SATA link down 
(SStatus 0 SControl 300) Dec 13 16:02:51.360516 kernel: hub 1-0:1.0: 16 ports detected Dec 13 16:02:51.360568 kernel: ata7: SATA link down (SStatus 0 SControl 300) Dec 13 16:02:51.360577 kernel: hub 2-0:1.0: USB hub found Dec 13 16:02:51.360635 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 16:02:51.360643 kernel: hub 2-0:1.0: 10 ports detected Dec 13 16:02:51.360695 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 16:02:51.360702 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Dec 13 16:02:51.360752 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 16:02:51.360759 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 16:02:51.360766 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 16:02:51.360774 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 16:02:51.360781 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 16:02:51.360830 kernel: ata1.00: Features: NCQ-prio Dec 13 16:02:51.360837 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 16:02:51.360844 kernel: ata2.00: Features: NCQ-prio Dec 13 16:02:51.360851 kernel: ata1.00: configured for UDMA/133 Dec 13 16:02:51.360857 kernel: ata2.00: configured for UDMA/133 Dec 13 16:02:51.360864 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 16:02:51.636711 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Dec 13 16:02:51.768981 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 16:02:51.769057 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:02:51.769065 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:02:51.769072 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 16:02:51.769129 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 16:02:51.769183 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 16:02:51.769238 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Dec 13 16:02:51.769292 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 16:02:51.769345 kernel: sd 1:0:0:0: [sdb] Write Protect is off Dec 13 16:02:51.769403 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 16:02:51.769462 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Dec 13 16:02:51.769515 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Dec 13 16:02:51.979707 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 16:02:51.980057 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Dec 13 16:02:51.980395 kernel: hub 1-14:1.0: USB hub found Dec 13 16:02:51.980742 kernel: hub 1-14:1.0: 4 ports detected Dec 13 16:02:51.981038 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 16:02:51.981324 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 16:02:51.981782 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:02:51.981851 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:02:51.981921 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:02:51.981980 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Dec 13 16:02:51.982502 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Dec 13 16:02:51.982569 kernel: GPT:9289727 != 937703087 Dec 13 16:02:51.982628 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 16:02:51.982689 kernel: GPT:9289727 != 937703087 Dec 13 16:02:51.982747 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 16:02:51.982805 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 16:02:51.982851 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:02:51.982894 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 16:02:51.983234 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 16:02:51.983556 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Dec 13 16:02:51.984126 kernel: port_module: 9 callbacks suppressed Dec 13 16:02:51.984167 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Dec 13 16:02:51.984485 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (518) Dec 13 16:02:51.984525 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 16:02:51.984791 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 16:02:51.984830 kernel: usbcore: registered new interface driver usbhid Dec 13 16:02:51.984864 kernel: usbhid: USB HID core driver Dec 13 16:02:51.984897 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:02:51.984930 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 16:02:51.984963 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:02:51.984995 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 16:02:51.985026 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:02:51.985060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 16:02:51.985092 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Dec 13 16:02:51.985131 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 16:02:51.985515 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Dec 13 16:02:51.668685 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 16:02:52.062466 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Dec 13 16:02:52.062566 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Dec 13 16:02:51.732478 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 16:02:52.134496 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Dec 13 16:02:52.134588 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Dec 13 16:02:51.755410 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 16:02:52.142408 disk-uuid[685]: Primary Header is updated. Dec 13 16:02:52.142408 disk-uuid[685]: Secondary Entries is updated. Dec 13 16:02:52.142408 disk-uuid[685]: Secondary Header is updated. Dec 13 16:02:51.769124 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 16:02:51.786185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 16:02:51.806126 systemd[1]: Starting disk-uuid.service... 
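The GPT complaints above ("9289727 != 937703087") mean the backup GPT header still sits where a smaller source image ended rather than at the last sector of the 480 GB disk; the disk-uuid service rewrites the headers immediately afterwards ("Primary Header is updated" / "Secondary Header is updated"). Done by hand, the conventional repair is sgdisk's move-second-header operation, shown for /dev/sda purely as an illustration matching the disk in this log:

    # relocate the backup GPT header and entries to the true end of the disk
    sgdisk --move-second-header /dev/sda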
Dec 13 16:02:52.900472 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:02:52.920403 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 16:02:52.920927 disk-uuid[686]: The operation has completed successfully. Dec 13 16:02:52.961283 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 16:02:52.961328 systemd[1]: Finished disk-uuid.service. Dec 13 16:02:53.066084 kernel: audit: type=1130 audit(1734105772.976:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.066101 kernel: audit: type=1131 audit(1734105772.976:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:52.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:52.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:52.979909 systemd[1]: Starting verity-setup.service... Dec 13 16:02:53.096388 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 16:02:53.162599 systemd[1]: Found device dev-mapper-usr.device. Dec 13 16:02:53.172481 systemd[1]: Mounting sysusr-usr.mount... Dec 13 16:02:53.186700 systemd[1]: Finished verity-setup.service. Dec 13 16:02:53.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.249358 kernel: audit: type=1130 audit(1734105773.201:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.278107 systemd[1]: Mounted sysusr-usr.mount. Dec 13 16:02:53.293541 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 16:02:53.286678 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 16:02:53.379668 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 16:02:53.379682 kernel: BTRFS info (device sda6): using free space tree Dec 13 16:02:53.379690 kernel: BTRFS info (device sda6): has skinny extents Dec 13 16:02:53.379703 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 16:02:53.287085 systemd[1]: Starting ignition-setup.service... Dec 13 16:02:53.307154 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 16:02:53.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.388856 systemd[1]: Finished ignition-setup.service. Dec 13 16:02:53.513849 kernel: audit: type=1130 audit(1734105773.405:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:02:53.513863 kernel: audit: type=1130 audit(1734105773.463:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.405710 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 16:02:53.544991 kernel: audit: type=1334 audit(1734105773.520:24): prog-id=9 op=LOAD Dec 13 16:02:53.520000 audit: BPF prog-id=9 op=LOAD Dec 13 16:02:53.464035 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 16:02:53.522326 systemd[1]: Starting systemd-networkd.service... Dec 13 16:02:53.559224 systemd-networkd[879]: lo: Link UP Dec 13 16:02:53.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.587902 ignition[868]: Ignition 2.14.0 Dec 13 16:02:53.641523 kernel: audit: type=1130 audit(1734105773.574:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.559226 systemd-networkd[879]: lo: Gained carrier Dec 13 16:02:53.587906 ignition[868]: Stage: fetch-offline Dec 13 16:02:53.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.559553 systemd-networkd[879]: Enumeration completed Dec 13 16:02:53.799260 kernel: audit: type=1130 audit(1734105773.661:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.799273 kernel: audit: type=1130 audit(1734105773.723:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.799283 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 16:02:53.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.587936 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:02:53.559602 systemd[1]: Started systemd-networkd.service. Dec 13 16:02:53.845450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Dec 13 16:02:53.587950 ignition[868]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:02:53.560296 systemd-networkd[879]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:02:53.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:02:53.595397 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:02:53.575542 systemd[1]: Reached target network.target. Dec 13 16:02:53.881506 iscsid[899]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 16:02:53.881506 iscsid[899]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 16:02:53.881506 iscsid[899]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 16:02:53.881506 iscsid[899]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 16:02:53.881506 iscsid[899]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 16:02:53.881506 iscsid[899]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 16:02:53.881506 iscsid[899]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 16:02:54.043447 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 16:02:53.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:54.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:02:53.595480 ignition[868]: parsed url from cmdline: "" Dec 13 16:02:53.599794 unknown[868]: fetched base config from "system" Dec 13 16:02:53.595482 ignition[868]: no config URL provided Dec 13 16:02:53.599798 unknown[868]: fetched user config from "system" Dec 13 16:02:53.595485 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 16:02:53.637564 systemd[1]: Starting iscsiuio.service... Dec 13 16:02:53.595508 ignition[868]: parsing config with SHA512: 0b3d85997284d099252cbcf60b2b85af68392cda254065606a2a06704cfd35012e89b51806ead6260685e655b64720a3b6f7909eaca4e092e14f6b60fec41160 Dec 13 16:02:53.648763 systemd[1]: Started iscsiuio.service. Dec 13 16:02:53.600094 ignition[868]: fetch-offline: fetch-offline passed Dec 13 16:02:53.662925 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 16:02:53.600098 ignition[868]: POST message to Packet Timeline Dec 13 16:02:53.723620 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 16:02:53.600102 ignition[868]: POST Status error: resource requires networking Dec 13 16:02:53.724115 systemd[1]: Starting ignition-kargs.service... Dec 13 16:02:53.600140 ignition[868]: Ignition finished successfully Dec 13 16:02:53.800203 systemd-networkd[879]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:02:53.804169 ignition[889]: Ignition 2.14.0 Dec 13 16:02:53.813952 systemd[1]: Starting iscsid.service... Dec 13 16:02:53.804172 ignition[889]: Stage: kargs Dec 13 16:02:53.838573 systemd[1]: Started iscsid.service. Dec 13 16:02:53.804228 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:02:53.852917 systemd[1]: Starting dracut-initqueue.service...
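The iscsid warning is harmless on this host (no software iSCSI target is in use), and the file it asks for is a single line. A minimal /etc/iscsi/initiatorname.iscsi follows the documented InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier] form; the IQN below is purely illustrative, and open-iscsi's iscsi-iname tool can generate a unique one:

    # /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.2001-04.com.example:node1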
Dec 13 16:02:53.804236 ignition[889]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:02:53.871625 systemd[1]: Finished dracut-initqueue.service. Dec 13 16:02:53.806488 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:02:53.894594 systemd[1]: Reached target remote-fs-pre.target. Dec 13 16:02:53.807057 ignition[889]: kargs: kargs passed Dec 13 16:02:53.913506 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 16:02:53.807060 ignition[889]: POST message to Packet Timeline Dec 13 16:02:53.913559 systemd[1]: Reached target remote-fs.target. Dec 13 16:02:53.807069 ignition[889]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 16:02:53.965338 systemd[1]: Starting dracut-pre-mount.service... Dec 13 16:02:53.814164 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:32971->[::1]:53: read: connection refused Dec 13 16:02:53.991692 systemd[1]: Finished dracut-pre-mount.service. Dec 13 16:02:54.014495 ignition[889]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 16:02:54.015678 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:02:54.014902 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37037->[::1]:53: read: connection refused Dec 13 16:02:54.043768 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:02:54.071682 systemd-networkd[879]: enp1s0f1np1: Link UP Dec 13 16:02:54.071847 systemd-networkd[879]: enp1s0f1np1: Gained carrier Dec 13 16:02:54.080678 systemd-networkd[879]: enp1s0f0np0: Link UP Dec 13 16:02:54.080872 systemd-networkd[879]: eno2: Link UP Dec 13 16:02:54.081051 systemd-networkd[879]: eno1: Link UP Dec 13 16:02:54.415215 ignition[889]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 16:02:54.416351 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45384->[::1]:53: read: connection refused Dec 13 16:02:54.864408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Dec 13 16:02:54.864356 systemd-networkd[879]: enp1s0f0np0: Gained carrier Dec 13 16:02:54.890548 systemd-networkd[879]: enp1s0f0np0: DHCPv4 address 145.40.90.237/31, gateway 145.40.90.236 acquired from 145.40.83.140 Dec 13 16:02:55.216909 ignition[889]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 16:02:55.218206 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56522->[::1]:53: read: connection refused Dec 13 16:02:55.864939 systemd-networkd[879]: enp1s0f1np1: Gained IPv6LL Dec 13 16:02:56.568928 systemd-networkd[879]: enp1s0f0np0: Gained IPv6LL Dec 13 16:02:56.819522 ignition[889]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 16:02:56.820835 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33664->[::1]:53: read: connection refused Dec 13 16:03:00.024347 ignition[889]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 16:03:00.624851 ignition[889]: GET result: OK Dec 13 16:03:01.183066 ignition[889]: Ignition finished successfully Dec 13 16:03:01.187351 
systemd[1]: Finished ignition-kargs.service. Dec 13 16:03:01.276311 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 16:03:01.276327 kernel: audit: type=1130 audit(1734105781.198:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:01.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:01.208321 ignition[918]: Ignition 2.14.0 Dec 13 16:03:01.201689 systemd[1]: Starting ignition-disks.service... Dec 13 16:03:01.208324 ignition[918]: Stage: disks Dec 13 16:03:01.208425 ignition[918]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:03:01.208453 ignition[918]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:03:01.209757 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:03:01.211338 ignition[918]: disks: disks passed Dec 13 16:03:01.211341 ignition[918]: POST message to Packet Timeline Dec 13 16:03:01.211356 ignition[918]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 16:03:01.786010 ignition[918]: GET result: OK Dec 13 16:03:02.406678 ignition[918]: Ignition finished successfully Dec 13 16:03:02.409756 systemd[1]: Finished ignition-disks.service. Dec 13 16:03:02.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:02.423939 systemd[1]: Reached target initrd-root-device.target. Dec 13 16:03:02.503582 kernel: audit: type=1130 audit(1734105782.422:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:02.489598 systemd[1]: Reached target local-fs-pre.target. Dec 13 16:03:02.489636 systemd[1]: Reached target local-fs.target. Dec 13 16:03:02.503659 systemd[1]: Reached target sysinit.target. Dec 13 16:03:02.525590 systemd[1]: Reached target basic.target. Dec 13 16:03:02.539262 systemd[1]: Starting systemd-fsck-root.service... Dec 13 16:03:02.558188 systemd-fsck[935]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 16:03:02.571938 systemd[1]: Finished systemd-fsck-root.service. Dec 13 16:03:02.666939 kernel: audit: type=1130 audit(1734105782.579:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:02.666955 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 16:03:02.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:02.586039 systemd[1]: Mounting sysroot.mount... Dec 13 16:03:02.674075 systemd[1]: Mounted sysroot.mount. Dec 13 16:03:02.688703 systemd[1]: Reached target initrd-root-fs.target. Dec 13 16:03:02.706236 systemd[1]: Mounting sysroot-usr.mount... Dec 13 16:03:02.714309 systemd[1]: Starting flatcar-metadata-hostname.service... 
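The kargs and disks stages each POST to the Packet Timeline only once https://metadata.packet.net/metadata becomes reachable; the string of earlier GET errors came from DNS lookups hitting [::1]:53 before systemd-networkd had brought the links up. The same endpoint can be queried from the booted host the way Ignition and coreos-metadata do here, bearing in mind the response shape is platform-defined, so treat this as a sketch:

    # fetch Packet (Equinix Metal) instance metadata
    curl -s https://metadata.packet.net/metadata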
Dec 13 16:03:02.734451 systemd[1]: Starting flatcar-static-network.service... Dec 13 16:03:02.748653 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 16:03:02.748768 systemd[1]: Reached target ignition-diskful.target. Dec 13 16:03:02.767764 systemd[1]: Mounted sysroot-usr.mount. Dec 13 16:03:02.790917 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 16:03:02.802789 systemd[1]: Starting initrd-setup-root.service... Dec 13 16:03:02.932295 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944) Dec 13 16:03:02.932311 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 16:03:02.932320 kernel: BTRFS info (device sda6): using free space tree Dec 13 16:03:02.932327 kernel: BTRFS info (device sda6): has skinny extents Dec 13 16:03:02.932334 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 16:03:02.855588 systemd[1]: Finished initrd-setup-root.service. Dec 13 16:03:02.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:02.996553 coreos-metadata[942]: Dec 13 16:03:02.853 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 16:03:03.019613 kernel: audit: type=1130 audit(1734105782.940:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.019626 coreos-metadata[943]: Dec 13 16:03:02.854 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 16:03:03.039447 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 16:03:02.941729 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 16:03:03.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.089525 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory Dec 13 16:03:03.121589 kernel: audit: type=1130 audit(1734105783.055:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.006029 systemd[1]: Starting ignition-mount.service... Dec 13 16:03:03.128625 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 16:03:03.028011 systemd[1]: Starting sysroot-boot.service... Dec 13 16:03:03.145622 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 16:03:03.047261 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. 
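Both flatcar-metadata-hostname and flatcar-static-network work against the freshly mounted /sysroot, and the OEM filesystem is addressed by label rather than device path. The same by-label node seen in the BTRFS scan above can be mounted manually; /mnt/oem is an arbitrary mountpoint chosen for the example:

    mkdir -p /mnt/oem
    mount /dev/disk/by-label/OEM /mnt/oem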
Dec 13 16:03:03.165590 ignition[1018]: INFO : Ignition 2.14.0 Dec 13 16:03:03.165590 ignition[1018]: INFO : Stage: mount Dec 13 16:03:03.165590 ignition[1018]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:03:03.165590 ignition[1018]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:03:03.165590 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:03:03.165590 ignition[1018]: INFO : mount: mount passed Dec 13 16:03:03.165590 ignition[1018]: INFO : POST message to Packet Timeline Dec 13 16:03:03.165590 ignition[1018]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 16:03:03.047305 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 16:03:03.048016 systemd[1]: Finished sysroot-boot.service. Dec 13 16:03:03.341779 coreos-metadata[942]: Dec 13 16:03:03.341 INFO Fetch successful Dec 13 16:03:03.378099 coreos-metadata[942]: Dec 13 16:03:03.378 INFO wrote hostname ci-3510.3.6-a-245bdeb2fc to /sysroot/etc/hostname Dec 13 16:03:03.378682 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 16:03:03.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.435701 coreos-metadata[943]: Dec 13 16:03:03.435 INFO Fetch successful Dec 13 16:03:03.466618 kernel: audit: type=1130 audit(1734105783.398:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.464116 systemd[1]: flatcar-static-network.service: Deactivated successfully. Dec 13 16:03:03.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.464166 systemd[1]: Finished flatcar-static-network.service. Dec 13 16:03:03.598584 kernel: audit: type=1130 audit(1734105783.475:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.598595 kernel: audit: type=1131 audit(1734105783.475:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:03.913715 ignition[1018]: INFO : GET result: OK Dec 13 16:03:04.244185 ignition[1018]: INFO : Ignition finished successfully Dec 13 16:03:04.246730 systemd[1]: Finished ignition-mount.service. Dec 13 16:03:04.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:04.263645 systemd[1]: Starting ignition-files.service... 
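The metadata fetch produced the hostname ci-3510.3.6-a-245bdeb2fc, written to /sysroot/etc/hostname before the pivot out of the initrd. On the running system it can be confirmed with either of the usual, distribution-agnostic commands:

    cat /etc/hostname
    hostnamectl status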
Dec 13 16:03:04.335503 kernel: audit: type=1130 audit(1734105784.260:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:04.329427 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 16:03:04.391148 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1034) Dec 13 16:03:04.391163 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 16:03:04.391171 kernel: BTRFS info (device sda6): using free space tree Dec 13 16:03:04.414878 kernel: BTRFS info (device sda6): has skinny extents Dec 13 16:03:04.464363 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 16:03:04.466125 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 16:03:04.482501 ignition[1053]: INFO : Ignition 2.14.0 Dec 13 16:03:04.482501 ignition[1053]: INFO : Stage: files Dec 13 16:03:04.482501 ignition[1053]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:03:04.482501 ignition[1053]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:03:04.482501 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:03:04.485849 unknown[1053]: wrote ssh authorized keys file for user: core Dec 13 16:03:04.547539 ignition[1053]: DEBUG : files: compiled without relabeling support, skipping Dec 13 16:03:04.547539 ignition[1053]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 16:03:04.547539 ignition[1053]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 16:03:04.547539 ignition[1053]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 16:03:04.547539 ignition[1053]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 16:03:04.547539 ignition[1053]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 16:03:04.547539 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 16:03:04.547539 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 16:03:04.547539 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 16:03:04.547539 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 16:03:04.547539 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 16:03:04.695565 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 16:03:04.695565 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 16:03:04.695565 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 16:03:05.167831 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 
16:03:05.365660 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 16:03:05.365660 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 16:03:05.422574 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1063) Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 16:03:05.422592 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2786470479" Dec 13 16:03:05.422592 ignition[1053]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2786470479": device or resource busy Dec 13 16:03:05.681701 ignition[1053]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2786470479", trying btrfs: device or resource busy Dec 13 16:03:05.681701 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2786470479" Dec 13 16:03:05.681701 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2786470479" Dec 13 
16:03:05.681701 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem2786470479" Dec 13 16:03:05.681701 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem2786470479" Dec 13 16:03:05.681701 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 16:03:05.681701 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 16:03:05.681701 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 16:03:05.869760 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK Dec 13 16:03:06.096045 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 16:03:06.096045 ignition[1053]: INFO : files: op(11): [started] processing unit "packet-phone-home.service" Dec 13 16:03:06.096045 ignition[1053]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service" Dec 13 16:03:06.096045 ignition[1053]: INFO : files: op(12): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 16:03:06.096045 ignition[1053]: INFO : files: op(12): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 16:03:06.096045 ignition[1053]: INFO : files: op(13): [started] processing unit "containerd.service" Dec 13 16:03:06.096045 ignition[1053]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(13): [finished] processing unit "containerd.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(18): [started] setting preset to enabled for "packet-phone-home.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(18): [finished] setting preset to enabled for "packet-phone-home.service" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 16:03:06.198673 ignition[1053]: INFO : files: op(19): [finished] setting preset to 
enabled for "coreos-metadata-sshkeys@.service " Dec 13 16:03:06.198673 ignition[1053]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 16:03:06.198673 ignition[1053]: INFO : files: files passed Dec 13 16:03:06.198673 ignition[1053]: INFO : POST message to Packet Timeline Dec 13 16:03:06.198673 ignition[1053]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 16:03:07.047861 ignition[1053]: INFO : GET result: OK Dec 13 16:03:07.363771 ignition[1053]: INFO : Ignition finished successfully Dec 13 16:03:07.365893 systemd[1]: Finished ignition-files.service. Dec 13 16:03:07.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:07.386766 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 16:03:07.459622 kernel: audit: type=1130 audit(1734105787.380:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:07.449611 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 16:03:07.483600 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 16:03:07.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:07.450003 systemd[1]: Starting ignition-quench.service... Dec 13 16:03:07.674577 kernel: audit: type=1130 audit(1734105787.492:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:07.674593 kernel: audit: type=1130 audit(1734105787.561:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:07.674601 kernel: audit: type=1131 audit(1734105787.561:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:07.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:07.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:07.466787 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 16:03:07.493833 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 16:03:07.493905 systemd[1]: Finished ignition-quench.service. 
Dec 13 16:03:07.828022 kernel: audit: type=1130 audit(1734105787.713:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:07.828035 kernel: audit: type=1131 audit(1734105787.713:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:07.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:07.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:07.561643 systemd[1]: Reached target ignition-complete.target.
Dec 13 16:03:07.682868 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 16:03:07.696768 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 16:03:07.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:07.696810 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 16:03:07.950594 kernel: audit: type=1130 audit(1734105787.876:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:07.713645 systemd[1]: Reached target initrd-fs.target.
Dec 13 16:03:07.836574 systemd[1]: Reached target initrd.target.
Dec 13 16:03:07.836663 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 16:03:07.837035 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 16:03:08.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:07.857773 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 16:03:08.084604 kernel: audit: type=1131 audit(1734105788.008:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:07.877276 systemd[1]: Starting initrd-cleanup.service...
Dec 13 16:03:07.945461 systemd[1]: Stopped target nss-lookup.target.
Dec 13 16:03:07.959629 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 16:03:07.975718 systemd[1]: Stopped target timers.target.
Dec 13 16:03:07.989672 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 16:03:07.989773 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 16:03:08.009841 systemd[1]: Stopped target initrd.target.
Dec 13 16:03:08.077705 systemd[1]: Stopped target basic.target.
Dec 13 16:03:08.091648 systemd[1]: Stopped target ignition-complete.target.
Dec 13 16:03:08.112695 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 16:03:08.128685 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 16:03:08.144730 systemd[1]: Stopped target remote-fs.target.
Dec 13 16:03:08.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.162948 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 16:03:08.344615 kernel: audit: type=1131 audit(1734105788.256:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.179010 systemd[1]: Stopped target sysinit.target.
Dec 13 16:03:08.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.195006 systemd[1]: Stopped target local-fs.target.
Dec 13 16:03:08.431599 kernel: audit: type=1131 audit(1734105788.353:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.210996 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 16:03:08.226970 systemd[1]: Stopped target swap.target.
Dec 13 16:03:08.241871 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 16:03:08.242237 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 16:03:08.258216 systemd[1]: Stopped target cryptsetup.target.
Dec 13 16:03:08.335652 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 16:03:08.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.335751 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 16:03:08.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.353768 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 16:03:08.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.353839 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 16:03:08.566574 ignition[1103]: INFO : Ignition 2.14.0
Dec 13 16:03:08.566574 ignition[1103]: INFO : Stage: umount
Dec 13 16:03:08.566574 ignition[1103]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 16:03:08.566574 ignition[1103]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 16:03:08.566574 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 16:03:08.566574 ignition[1103]: INFO : umount: umount passed
Dec 13 16:03:08.566574 ignition[1103]: INFO : POST message to Packet Timeline
Dec 13 16:03:08.566574 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 16:03:08.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.423728 systemd[1]: Stopped target paths.target.
Dec 13 16:03:08.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.702790 iscsid[899]: iscsid shutting down.
Dec 13 16:03:08.438607 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 16:03:08.442563 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 16:03:08.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.459695 systemd[1]: Stopped target slices.target.
Dec 13 16:03:08.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:08.474684 systemd[1]: Stopped target sockets.target.
Dec 13 16:03:08.490840 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 16:03:08.490996 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 16:03:08.507911 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 16:03:08.508138 systemd[1]: Stopped ignition-files.service.
Dec 13 16:03:08.526089 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 16:03:08.526471 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 16:03:08.544179 systemd[1]: Stopping ignition-mount.service...
Dec 13 16:03:08.558612 systemd[1]: Stopping iscsid.service...
Dec 13 16:03:08.573532 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 16:03:08.573669 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 16:03:08.581358 systemd[1]: Stopping sysroot-boot.service...
Dec 13 16:03:08.609620 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 16:03:08.609961 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 16:03:08.639140 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 16:03:08.639515 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 16:03:08.665974 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 16:03:08.667764 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 16:03:08.668016 systemd[1]: Stopped iscsid.service.
Dec 13 16:03:08.674892 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 16:03:08.675165 systemd[1]: Stopped sysroot-boot.service.
Dec 13 16:03:08.696803 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 16:03:08.697092 systemd[1]: Closed iscsid.socket.
Dec 13 16:03:08.709841 systemd[1]: Stopping iscsiuio.service...
Dec 13 16:03:08.724063 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 16:03:08.724311 systemd[1]: Stopped iscsiuio.service.
Dec 13 16:03:08.741399 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 16:03:08.741639 systemd[1]: Finished initrd-cleanup.service.
Dec 13 16:03:08.758695 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 16:03:08.758782 systemd[1]: Closed iscsiuio.socket.
Dec 13 16:03:09.331975 ignition[1103]: INFO : GET result: OK
Dec 13 16:03:09.647080 ignition[1103]: INFO : Ignition finished successfully
Dec 13 16:03:09.649651 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 16:03:09.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.649885 systemd[1]: Stopped ignition-mount.service.
Dec 13 16:03:09.663916 systemd[1]: Stopped target network.target.
Dec 13 16:03:09.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.680637 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 16:03:09.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.680846 systemd[1]: Stopped ignition-disks.service.
Dec 13 16:03:09.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.696798 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 16:03:09.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.696945 systemd[1]: Stopped ignition-kargs.service.
Dec 13 16:03:09.711807 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 16:03:09.711958 systemd[1]: Stopped ignition-setup.service.
Dec 13 16:03:09.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.727797 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 16:03:09.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.804000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 16:03:09.727951 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 16:03:09.744073 systemd[1]: Stopping systemd-networkd.service...
Dec 13 16:03:09.749490 systemd-networkd[879]: enp1s0f0np0: DHCPv6 lease lost
Dec 13 16:03:09.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.757574 systemd-networkd[879]: enp1s0f1np1: DHCPv6 lease lost
Dec 13 16:03:09.859000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 16:03:09.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.758861 systemd[1]: Stopping systemd-resolved.service...
Dec 13 16:03:09.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.773258 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 16:03:09.773525 systemd[1]: Stopped systemd-resolved.service.
Dec 13 16:03:09.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.789970 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 16:03:09.790294 systemd[1]: Stopped systemd-networkd.service.
Dec 13 16:03:09.804638 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 16:03:09.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.804656 systemd[1]: Closed systemd-networkd.socket.
Dec 13 16:03:09.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.824015 systemd[1]: Stopping network-cleanup.service...
Dec 13 16:03:09.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.831572 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 16:03:09.831607 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 16:03:10.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.852620 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 16:03:10.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:10.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.852684 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 16:03:09.868926 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 16:03:09.869024 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 16:03:09.884911 systemd[1]: Stopping systemd-udevd.service...
Dec 13 16:03:09.903248 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 16:03:09.904695 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 16:03:09.904756 systemd[1]: Stopped systemd-udevd.service.
Dec 13 16:03:09.908688 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 16:03:09.908710 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 16:03:09.928550 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 16:03:09.928573 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 16:03:09.944537 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 16:03:09.944578 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 16:03:10.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:09.961669 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 16:03:09.961771 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 16:03:10.194000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 16:03:10.195000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 16:03:10.195000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 16:03:10.195000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 16:03:10.195000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 16:03:09.978684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 16:03:10.264687 systemd-journald[268]: Failed to send stream file descriptor to service manager: Connection refused
Dec 13 16:03:10.264712 systemd-journald[268]: Received SIGTERM from PID 1 (n/a).
Dec 13 16:03:09.978811 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 16:03:09.995462 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 16:03:10.008383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 16:03:10.008413 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 16:03:10.024637 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 16:03:10.024690 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 16:03:10.151017 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 16:03:10.151257 systemd[1]: Stopped network-cleanup.service.
Dec 13 16:03:10.160861 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 16:03:10.178622 systemd[1]: Starting initrd-switch-root.service...
Dec 13 16:03:10.193785 systemd[1]: Switching root.
Dec 13 16:03:10.264960 systemd-journald[268]: Journal stopped
Dec 13 16:03:13.995184 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 16:03:13.995197 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 16:03:13.995205 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 16:03:13.995211 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 16:03:13.995216 kernel: SELinux: policy capability open_perms=1
Dec 13 16:03:13.995221 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 16:03:13.995227 kernel: SELinux: policy capability always_check_network=0
Dec 13 16:03:13.995233 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 16:03:13.995239 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 16:03:13.995245 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 16:03:13.995250 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 16:03:13.995256 systemd[1]: Successfully loaded SELinux policy in 315.223ms.
Dec 13 16:03:13.995263 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.699ms.
Dec 13 16:03:13.995270 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 16:03:13.995278 systemd[1]: Detected architecture x86-64.
Dec 13 16:03:13.995284 systemd[1]: Detected first boot.
Dec 13 16:03:13.995290 systemd[1]: Hostname set to .
Dec 13 16:03:13.995296 systemd[1]: Initializing machine ID from random generator.
Dec 13 16:03:13.995302 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 16:03:13.995308 systemd[1]: Populated /etc with preset unit settings.
Dec 13 16:03:13.995315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 16:03:13.995322 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 16:03:13.995329 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 16:03:13.995335 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 16:03:13.995341 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 16:03:13.995347 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 16:03:13.995374 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 16:03:13.995382 systemd[1]: Created slice system-getty.slice.
Dec 13 16:03:13.995389 systemd[1]: Created slice system-modprobe.slice.
Dec 13 16:03:13.995410 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 16:03:13.995416 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 16:03:13.995422 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 16:03:13.995428 systemd[1]: Created slice user.slice.
Dec 13 16:03:13.995434 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 16:03:13.995440 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 16:03:13.995447 systemd[1]: Set up automount boot.automount.
Dec 13 16:03:13.995453 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 16:03:13.995460 systemd[1]: Reached target integritysetup.target.
Dec 13 16:03:13.995466 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 16:03:13.995472 systemd[1]: Reached target remote-fs.target.
Dec 13 16:03:13.995480 systemd[1]: Reached target slices.target.
Dec 13 16:03:13.995486 systemd[1]: Reached target swap.target.
Dec 13 16:03:13.995492 systemd[1]: Reached target torcx.target.
Dec 13 16:03:13.995499 systemd[1]: Reached target veritysetup.target.
Dec 13 16:03:13.995506 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 16:03:13.995512 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 16:03:13.995518 kernel: kauditd_printk_skb: 49 callbacks suppressed
Dec 13 16:03:13.995525 kernel: audit: type=1400 audit(1734105793.239:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 16:03:13.995531 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 16:03:13.995538 kernel: audit: type=1335 audit(1734105793.239:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 16:03:13.995544 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 16:03:13.995550 systemd[1]: Listening on systemd-journald.socket.
Dec 13 16:03:13.995557 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 16:03:13.995564 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 16:03:13.995570 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 16:03:13.995577 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 16:03:13.995585 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 16:03:13.995591 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 16:03:13.995598 systemd[1]: Mounting media.mount...
Dec 13 16:03:13.995604 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 16:03:13.995611 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 16:03:13.995617 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 16:03:13.995623 systemd[1]: Mounting tmp.mount...
Dec 13 16:03:13.995630 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 16:03:13.995636 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 16:03:13.995644 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 16:03:13.995650 systemd[1]: Starting modprobe@configfs.service...
Dec 13 16:03:13.995657 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 16:03:13.995663 systemd[1]: Starting modprobe@drm.service...
Dec 13 16:03:13.995670 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 16:03:13.995676 systemd[1]: Starting modprobe@fuse.service...
Dec 13 16:03:13.995683 kernel: fuse: init (API version 7.34)
Dec 13 16:03:13.995689 systemd[1]: Starting modprobe@loop.service...
Dec 13 16:03:13.995695 kernel: loop: module loaded
Dec 13 16:03:13.995702 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 16:03:13.995709 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 16:03:13.995715 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 16:03:13.995722 systemd[1]: Starting systemd-journald.service...
Dec 13 16:03:13.995728 systemd[1]: Starting systemd-modules-load.service...
Dec 13 16:03:13.995735 kernel: audit: type=1305 audit(1734105793.992:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 16:03:13.995743 systemd-journald[1296]: Journal started
Dec 13 16:03:13.995768 systemd-journald[1296]: Runtime Journal (/run/log/journal/0e5e00e2ab5147829eded8108e856061) is 8.0M, max 640.1M, 632.1M free.
Dec 13 16:03:13.239000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 16:03:13.239000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 16:03:13.992000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 16:03:13.992000 audit[1296]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdcbee1990 a2=4000 a3=7ffdcbee1a2c items=0 ppid=1 pid=1296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:03:13.992000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 16:03:14.043437 kernel: audit: type=1300 audit(1734105793.992:94): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdcbee1990 a2=4000 a3=7ffdcbee1a2c items=0 ppid=1 pid=1296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:03:14.043458 kernel: audit: type=1327 audit(1734105793.992:94): proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 16:03:14.158549 systemd[1]: Starting systemd-network-generator.service...
Dec 13 16:03:14.185358 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 16:03:14.211358 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 16:03:14.255404 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 16:03:14.274372 systemd[1]: Started systemd-journald.service.
Dec 13 16:03:14.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.284095 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 16:03:14.331540 kernel: audit: type=1130 audit(1734105794.283:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.339624 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 16:03:14.346625 systemd[1]: Mounted media.mount.
Dec 13 16:03:14.354607 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 16:03:14.363607 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 16:03:14.371577 systemd[1]: Mounted tmp.mount.
Dec 13 16:03:14.378702 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 16:03:14.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.387800 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 16:03:14.435422 kernel: audit: type=1130 audit(1734105794.386:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.443685 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 16:03:14.443760 systemd[1]: Finished modprobe@configfs.service.
Dec 13 16:03:14.492546 kernel: audit: type=1130 audit(1734105794.443:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.500689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 16:03:14.500760 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 16:03:14.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.551398 kernel: audit: type=1130 audit(1734105794.500:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.551417 kernel: audit: type=1131 audit(1734105794.500:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.580549 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 16:03:14.580644 systemd[1]: Finished modprobe@drm.service.
Dec 13 16:03:14.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.610720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 16:03:14.610809 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 16:03:14.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.619714 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 16:03:14.619804 systemd[1]: Finished modprobe@fuse.service.
Dec 13 16:03:14.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.628748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 16:03:14.628836 systemd[1]: Finished modprobe@loop.service.
Dec 13 16:03:14.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.637759 systemd[1]: Finished systemd-modules-load.service.
Dec 13 16:03:14.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.646723 systemd[1]: Finished systemd-network-generator.service.
Dec 13 16:03:14.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.655715 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 16:03:14.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.664748 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 16:03:14.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.673868 systemd[1]: Reached target network-pre.target.
Dec 13 16:03:14.685053 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 16:03:14.695924 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 16:03:14.702593 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 16:03:14.706664 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 16:03:14.714210 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 16:03:14.717521 systemd-journald[1296]: Time spent on flushing to /var/log/journal/0e5e00e2ab5147829eded8108e856061 is 14.577ms for 1531 entries.
Dec 13 16:03:14.717521 systemd-journald[1296]: System Journal (/var/log/journal/0e5e00e2ab5147829eded8108e856061) is 8.0M, max 195.6M, 187.6M free.
Dec 13 16:03:14.758661 systemd-journald[1296]: Received client request to flush runtime journal.
Dec 13 16:03:14.731484 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 16:03:14.732891 systemd[1]: Starting systemd-random-seed.service...
Dec 13 16:03:14.748483 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 16:03:14.749313 systemd[1]: Starting systemd-sysctl.service...
Dec 13 16:03:14.757031 systemd[1]: Starting systemd-sysusers.service...
Dec 13 16:03:14.764062 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 16:03:14.771688 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 16:03:14.779542 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 16:03:14.787625 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 16:03:14.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.795650 systemd[1]: Finished systemd-random-seed.service.
Dec 13 16:03:14.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.803666 systemd[1]: Finished systemd-sysctl.service.
Dec 13 16:03:14.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.811635 systemd[1]: Finished systemd-sysusers.service.
Dec 13 16:03:14.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:14.820538 systemd[1]: Reached target first-boot-complete.target.
Dec 13 16:03:14.829151 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 16:03:14.837738 udevadm[1323]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 16:03:14.847866 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 16:03:14.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:15.013931 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 16:03:15.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:15.024307 systemd[1]: Starting systemd-udevd.service...
Dec 13 16:03:15.036326 systemd-udevd[1331]: Using default interface naming scheme 'v252'.
Dec 13 16:03:15.055583 systemd[1]: Started systemd-udevd.service.
Dec 13 16:03:15.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:15.067413 systemd[1]: Found device dev-ttyS1.device.
Dec 13 16:03:15.087332 systemd[1]: Starting systemd-networkd.service...
Dec 13 16:03:15.094362 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Dec 13 16:03:15.094420 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 16:03:15.113031 systemd[1]: Starting systemd-userdbd.service...
Dec 13 16:03:15.139402 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 16:03:15.140362 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 16:03:15.161364 kernel: ACPI: button: Power Button [PWRF]
Dec 13 16:03:15.181362 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1403)
Dec 13 16:03:15.219489 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Dec 13 16:03:15.152000 audit[1340]: AVC avc: denied { confidentiality } for pid=1340 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 16:03:15.228360 kernel: IPMI message handler: version 39.2
Dec 13 16:03:15.273099 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Dec 13 16:03:15.315276 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Dec 13 16:03:15.315373 kernel: ipmi device interface
Dec 13 16:03:15.315387 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Dec 13 16:03:15.315466 kernel: ipmi_si: IPMI System Interface driver
Dec 13 16:03:15.152000 audit[1340]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563c64200cd0 a1=4d98c a2=7fa3a1873bc5 a3=5 items=42 ppid=1331 pid=1340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:03:15.152000 audit: CWD cwd="/"
Dec 13 16:03:15.152000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=1 name=(null) inode=20059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=2 name=(null) inode=20059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=3 name=(null) inode=20060 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=4 name=(null) inode=20059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=5 name=(null) inode=20061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=6 name=(null) inode=20059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=7 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=8 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=9 name=(null) inode=20063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=10 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=11 name=(null) inode=20064 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=12 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=13 name=(null) inode=20065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=14 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=15 name=(null) inode=20066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=16 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=17 name=(null) inode=20067 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=18 name=(null) inode=20059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=19 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=20 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=21 name=(null) inode=20069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=22 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=23 name=(null) inode=20070 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=24 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=25 name=(null) inode=20071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=26 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=27 name=(null) inode=20072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=28 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=29 name=(null) inode=20073 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=30 name=(null) inode=20059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=31 name=(null) inode=20074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=32 name=(null) inode=20074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=33 name=(null) inode=20075 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=34 name=(null) inode=20074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=35 name=(null) inode=20076 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=36 name=(null) inode=20074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=37 name=(null) inode=20077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=38 name=(null) inode=20074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=39 name=(null) inode=20078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=40 name=(null) inode=20074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PATH item=41 name=(null) inode=20079 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:03:15.152000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 16:03:15.357308 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Dec 13 16:03:15.402807 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Dec 13 16:03:15.402821 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Dec 13 16:03:15.402831 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Dec 13 16:03:15.572561 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Dec 13 16:03:15.572650 kernel: iTCO_vendor_support: vendor-support=0
Dec 13 16:03:15.572664 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Dec 13 16:03:15.572737 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Dec 13 16:03:15.572810 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Dec 13 16:03:15.572866 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Dec 13 16:03:15.572878 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Dec 13 16:03:15.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:15.459447 systemd[1]: Started systemd-userdbd.service.
Dec 13 16:03:15.623364 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Dec 13 16:03:15.645235 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Dec 13 16:03:15.645300 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Dec 13 16:03:15.738828 kernel: intel_rapl_common: Found RAPL domain package
Dec 13 16:03:15.738881 kernel: intel_rapl_common: Found RAPL domain core
Dec 13 16:03:15.738911 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Dec 13 16:03:15.764357 kernel: intel_rapl_common: Found RAPL domain dram
Dec 13 16:03:15.815838 systemd-networkd[1398]: bond0: netdev ready
Dec 13 16:03:15.818046 systemd-networkd[1398]: lo: Link UP
Dec 13 16:03:15.818049 systemd-networkd[1398]: lo: Gained carrier
Dec 13 16:03:15.818521 systemd-networkd[1398]: Enumeration completed
Dec 13 16:03:15.818589 systemd[1]: Started systemd-networkd.service.
Dec 13 16:03:15.818807 systemd-networkd[1398]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Dec 13 16:03:15.822696 systemd-networkd[1398]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:bc:21.network.
Dec 13 16:03:15.828356 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Dec 13 16:03:15.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:15.850391 kernel: ipmi_ssif: IPMI SSIF Interface driver
Dec 13 16:03:15.855622 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 16:03:15.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:15.865219 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 16:03:15.880758 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 16:03:15.909788 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 16:03:15.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:15.918514 systemd[1]: Reached target cryptsetup.target.
Dec 13 16:03:15.927107 systemd[1]: Starting lvm2-activation.service...
Dec 13 16:03:15.929243 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 16:03:15.963787 systemd[1]: Finished lvm2-activation.service.
Dec 13 16:03:15.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:15.971530 systemd[1]: Reached target local-fs-pre.target.
Dec 13 16:03:15.979404 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 16:03:15.979419 systemd[1]: Reached target local-fs.target.
Dec 13 16:03:15.988394 systemd[1]: Reached target machines.target.
Dec 13 16:03:15.998204 systemd[1]: Starting ldconfig.service...
Dec 13 16:03:16.005768 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 16:03:16.005789 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 16:03:16.006372 systemd[1]: Starting systemd-boot-update.service...
Dec 13 16:03:16.014933 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 16:03:16.026137 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 16:03:16.027015 systemd[1]: Starting systemd-sysext.service...
Dec 13 16:03:16.027271 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1440 (bootctl)
Dec 13 16:03:16.027931 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 16:03:16.048616 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 16:03:16.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:03:16.050947 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 16:03:16.052840 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 16:03:16.052955 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 16:03:16.097401 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 16:03:16.162957 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 16:03:16.163378 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 16:03:16.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.200361 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 16:03:16.211609 systemd-fsck[1454]: fsck.fat 4.2 (2021-01-31) Dec 13 16:03:16.211609 systemd-fsck[1454]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 16:03:16.212321 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 16:03:16.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.223348 systemd[1]: Mounting boot.mount... Dec 13 16:03:16.250360 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 16:03:16.251931 systemd[1]: Mounted boot.mount. Dec 13 16:03:16.267022 (sd-sysext)[1460]: Using extensions 'kubernetes'. Dec 13 16:03:16.267207 (sd-sysext)[1460]: Merged extensions into '/usr'. Dec 13 16:03:16.276509 systemd[1]: Finished systemd-boot-update.service. Dec 13 16:03:16.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.285555 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:03:16.286358 systemd[1]: Mounting usr-share-oem.mount... Dec 13 16:03:16.294518 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:03:16.295197 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:03:16.301963 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:03:16.309913 systemd[1]: Starting modprobe@loop.service... Dec 13 16:03:16.317416 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:03:16.317481 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:03:16.317543 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:03:16.319451 systemd[1]: Mounted usr-share-oem.mount. Dec 13 16:03:16.326597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:03:16.326675 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 16:03:16.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:03:16.341624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:03:16.341696 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:03:16.351358 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 16:03:16.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.369691 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:03:16.369767 systemd[1]: Finished modprobe@loop.service. Dec 13 16:03:16.379404 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Dec 13 16:03:16.380108 systemd-networkd[1398]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:bc:20.network. Dec 13 16:03:16.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.386843 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 16:03:16.386904 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 16:03:16.387433 systemd[1]: Finished systemd-sysext.service. Dec 13 16:03:16.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.396195 systemd[1]: Starting ensure-sysext.service... Dec 13 16:03:16.404025 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 16:03:16.410275 systemd-tmpfiles[1476]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 16:03:16.410891 systemd-tmpfiles[1476]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 16:03:16.412798 systemd-tmpfiles[1476]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 16:03:16.414560 systemd[1]: Reloading. Dec 13 16:03:16.420510 ldconfig[1439]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
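[Note: the (sd-sysext) lines above show systemd-sysext overlaying a 'kubernetes' system extension onto /usr; the loop0/loop1 "detected capacity change" messages are the squashfs extension images being attached, and "Merged extensions into '/usr'" is the resulting overlay mount. On a running host the merge can be inspected and toggled; extensions are typically picked up from /etc/extensions, /run/extensions and /var/lib/extensions:

    systemd-sysext status     # list merged extensions and their hierarchies
    systemd-sysext unmerge    # drop the /usr overlay
    systemd-sysext merge      # re-apply available extension images
]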
Dec 13 16:03:16.433329 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2024-12-13T16:03:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 16:03:16.433351 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2024-12-13T16:03:16Z" level=info msg="torcx already run" Dec 13 16:03:16.463364 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 16:03:16.490993 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 16:03:16.491002 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 16:03:16.501970 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 16:03:16.527441 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 16:03:16.545558 systemd[1]: Finished ldconfig.service. Dec 13 16:03:16.551410 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Dec 13 16:03:16.551435 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 16:03:16.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.565073 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 16:03:16.571357 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 16:03:16.572246 systemd-networkd[1398]: bond0: Link UP Dec 13 16:03:16.572448 systemd-networkd[1398]: enp1s0f1np1: Link UP Dec 13 16:03:16.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:03:16.610574 systemd[1]: Starting audit-rules.service... Dec 13 16:03:16.613355 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 16:03:16.613378 kernel: bond0: active interface up! Dec 13 16:03:16.625000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 16:03:16.625000 audit[1580]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd0bd65b10 a2=420 a3=0 items=0 ppid=1565 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 16:03:16.625000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 16:03:16.627291 augenrules[1580]: No rules Dec 13 16:03:16.645172 systemd[1]: Starting clean-ca-certificates.service... 
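[Note: the warnings about locksmithd.service above are systemd deprecation notices for cgroup v1 resource directives: CPUShares= (default 1024) maps to the v2 directive CPUWeight= (default 100), and MemoryLimit= maps to MemoryMax=. A hedged sketch of a drop-in that would migrate a unit, with path and values illustrative and the empty assignments intended to reset the vendor settings (exact reset semantics depend on the systemd version):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf (hypothetical drop-in)
    [Service]
    CPUShares=
    CPUWeight=100
    MemoryLimit=
    MemoryMax=128M
]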
Dec 13 16:03:16.649694 systemd-networkd[1398]: enp1s0f1np1: Gained carrier Dec 13 16:03:16.650356 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Dec 13 16:03:16.650662 systemd-networkd[1398]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:bc:20.network. Dec 13 16:03:16.658176 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 16:03:16.667300 systemd[1]: Starting systemd-resolved.service... Dec 13 16:03:16.682364 systemd[1]: Starting systemd-timesyncd.service... Dec 13 16:03:16.694356 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 16:03:16.701070 systemd[1]: Starting systemd-update-utmp.service... Dec 13 16:03:16.707990 systemd[1]: Finished audit-rules.service. Dec 13 16:03:16.714671 systemd[1]: Finished clean-ca-certificates.service. Dec 13 16:03:16.722649 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 16:03:16.734989 systemd[1]: Finished systemd-update-utmp.service. Dec 13 16:03:16.744121 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:03:16.744796 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:03:16.749203 systemd-networkd[1398]: enp1s0f0np0: Link UP Dec 13 16:03:16.749381 systemd-networkd[1398]: bond0: Gained carrier Dec 13 16:03:16.749467 systemd-networkd[1398]: enp1s0f0np0: Gained carrier Dec 13 16:03:16.752023 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:03:16.767998 systemd[1]: Starting modprobe@loop.service... Dec 13 16:03:16.774357 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:03:16.774382 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Dec 13 16:03:16.796634 systemd-networkd[1398]: enp1s0f1np1: Link DOWN Dec 13 16:03:16.796637 systemd-networkd[1398]: enp1s0f1np1: Lost carrier Dec 13 16:03:16.799467 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:03:16.799541 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:03:16.800326 systemd[1]: Starting systemd-update-done.service... Dec 13 16:03:16.804536 systemd-resolved[1590]: Positive Trust Anchors: Dec 13 16:03:16.804544 systemd-resolved[1590]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 16:03:16.804564 systemd-resolved[1590]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 16:03:16.804730 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. Dec 13 16:03:16.807460 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 16:03:16.808037 systemd[1]: Started systemd-timesyncd.service. Dec 13 16:03:16.808286 systemd-resolved[1590]: Using system hostname 'ci-3510.3.6-a-245bdeb2fc'. 
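[Note: the "Positive Trust Anchors" entry printed by systemd-resolved above is the DNSSEC root zone DS record (key tag 20326, the KSK-2017 trust anchor), and the negative anchors are private-use and reverse-lookup zones that are never validated. Whether validation is actually in effect on a link can be checked at runtime:

    resolvectl status               # per-link DNS servers and the DNSSEC setting
    resolvectl query flatcar.org    # reports whether the answer was authenticated
]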
Dec 13 16:03:16.817117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:03:16.817206 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 16:03:16.825642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:03:16.825721 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:03:16.833625 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:03:16.833709 systemd[1]: Finished modprobe@loop.service. Dec 13 16:03:16.841681 systemd[1]: Finished systemd-update-done.service. Dec 13 16:03:16.850781 systemd[1]: Reached target time-set.target. Dec 13 16:03:16.858625 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:03:16.859350 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:03:16.868047 systemd[1]: Starting modprobe@drm.service... Dec 13 16:03:16.874974 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:03:16.881967 systemd[1]: Starting modprobe@loop.service... Dec 13 16:03:16.888472 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:03:16.888541 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:03:16.889192 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 16:03:16.898452 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 16:03:16.899111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:03:16.899188 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 16:03:16.908643 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 16:03:16.908718 systemd[1]: Finished modprobe@drm.service. Dec 13 16:03:16.918433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:03:16.918510 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:03:16.926673 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:03:16.926757 systemd[1]: Finished modprobe@loop.service. Dec 13 16:03:16.934745 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 16:03:16.934801 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 16:03:16.935366 systemd[1]: Finished ensure-sysext.service. Dec 13 16:03:16.957418 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 16:03:16.978180 systemd-networkd[1398]: enp1s0f1np1: Link UP Dec 13 16:03:16.978309 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. Dec 13 16:03:16.978374 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Dec 13 16:03:16.979642 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. Dec 13 16:03:16.979686 systemd-networkd[1398]: enp1s0f1np1: Gained carrier Dec 13 16:03:16.979696 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. Dec 13 16:03:16.980191 systemd[1]: Started systemd-resolved.service. Dec 13 16:03:16.990587 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. Dec 13 16:03:16.996525 systemd[1]: Reached target network.target. 
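[Note: the modprobe@dm_mod / modprobe@efi_pstore / modprobe@loop / modprobe@drm services that start and immediately report "Deactivated successfully" above are instances of systemd's modprobe@.service template, a oneshot unit that loads one kernel module per instance name; immediate deactivation is its normal success state. Roughly equivalent by hand:

    systemctl start modprobe@dm_mod.service   # wraps: modprobe -abq dm_mod
    lsmod | grep dm_mod                       # confirm the module is loaded
]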
Dec 13 16:03:17.005411 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Dec 13 16:03:17.021480 systemd[1]: Reached target nss-lookup.target. Dec 13 16:03:17.026427 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 16:03:17.034451 systemd[1]: Reached target sysinit.target. Dec 13 16:03:17.042489 systemd[1]: Started motdgen.path. Dec 13 16:03:17.049462 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 16:03:17.059501 systemd[1]: Started logrotate.timer. Dec 13 16:03:17.066474 systemd[1]: Started mdadm.timer. Dec 13 16:03:17.073425 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 16:03:17.081429 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 16:03:17.081444 systemd[1]: Reached target paths.target. Dec 13 16:03:17.088420 systemd[1]: Reached target timers.target. Dec 13 16:03:17.095561 systemd[1]: Listening on dbus.socket. Dec 13 16:03:17.103020 systemd[1]: Starting docker.socket... Dec 13 16:03:17.110255 systemd[1]: Listening on sshd.socket. Dec 13 16:03:17.117475 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:03:17.118693 systemd[1]: Listening on docker.socket. Dec 13 16:03:17.125455 systemd[1]: Reached target sockets.target. Dec 13 16:03:17.133419 systemd[1]: Reached target basic.target. Dec 13 16:03:17.140490 systemd[1]: System is tainted: cgroupsv1 Dec 13 16:03:17.140506 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:03:17.140525 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 16:03:17.140538 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 16:03:17.141042 systemd[1]: Starting containerd.service... Dec 13 16:03:17.147873 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 16:03:17.156935 systemd[1]: Starting coreos-metadata.service... Dec 13 16:03:17.164018 systemd[1]: Starting dbus.service... Dec 13 16:03:17.169965 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 16:03:17.175208 jq[1627]: false Dec 13 16:03:17.176716 coreos-metadata[1620]: Dec 13 16:03:17.176 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 16:03:17.177148 systemd[1]: Starting extend-filesystems.service... Dec 13 16:03:17.183463 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 16:03:17.183624 dbus-daemon[1626]: [system] SELinux support is enabled Dec 13 16:03:17.184202 systemd[1]: Starting motdgen.service... 
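[Note: "System is tainted: cgroupsv1" records that this image boots systemd on the legacy cgroup v1 hierarchy, which matters later, since the containerd configuration dumped below also runs with SystemdCgroup:false. A quick way to tell which hierarchy a host is on, and the usual opt-in to v2:

    stat -fc %T /sys/fs/cgroup    # 'tmpfs' => cgroup v1/hybrid, 'cgroup2fs' => unified v2
    # kernel command line switch to v2: systemd.unified_cgroup_hierarchy=1
]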
Dec 13 16:03:17.185609 extend-filesystems[1630]: Found loop1 Dec 13 16:03:17.185609 extend-filesystems[1630]: Found sda Dec 13 16:03:17.213791 extend-filesystems[1630]: Found sda1 Dec 13 16:03:17.213791 extend-filesystems[1630]: Found sda2 Dec 13 16:03:17.213791 extend-filesystems[1630]: Found sda3 Dec 13 16:03:17.213791 extend-filesystems[1630]: Found usr Dec 13 16:03:17.213791 extend-filesystems[1630]: Found sda4 Dec 13 16:03:17.213791 extend-filesystems[1630]: Found sda6 Dec 13 16:03:17.213791 extend-filesystems[1630]: Found sda7 Dec 13 16:03:17.213791 extend-filesystems[1630]: Found sda9 Dec 13 16:03:17.213791 extend-filesystems[1630]: Checking size of /dev/sda9 Dec 13 16:03:17.213791 extend-filesystems[1630]: Resized partition /dev/sda9 Dec 13 16:03:17.344478 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Dec 13 16:03:17.344518 coreos-metadata[1623]: Dec 13 16:03:17.185 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 16:03:17.191220 systemd[1]: Starting prepare-helm.service... Dec 13 16:03:17.344746 extend-filesystems[1646]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 16:03:17.225247 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 16:03:17.252107 systemd[1]: Starting sshd-keygen.service... Dec 13 16:03:17.266545 systemd[1]: Starting systemd-logind.service... Dec 13 16:03:17.279399 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:03:17.280067 systemd[1]: Starting tcsd.service... Dec 13 16:03:17.359943 update_engine[1661]: I1213 16:03:17.336134 1661 main.cc:92] Flatcar Update Engine starting Dec 13 16:03:17.359943 update_engine[1661]: I1213 16:03:17.342013 1661 update_check_scheduler.cc:74] Next update check in 6m17s Dec 13 16:03:17.292367 systemd[1]: Starting update-engine.service... Dec 13 16:03:17.360191 jq[1662]: true Dec 13 16:03:17.292549 systemd-logind[1659]: Watching system buttons on /dev/input/event3 (Power Button) Dec 13 16:03:17.292560 systemd-logind[1659]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 16:03:17.292569 systemd-logind[1659]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Dec 13 16:03:17.292690 systemd-logind[1659]: New seat seat0. Dec 13 16:03:17.306327 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 16:03:17.321405 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:03:17.321805 systemd[1]: Started dbus.service. Dec 13 16:03:17.338121 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 16:03:17.338254 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 16:03:17.338428 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 16:03:17.338548 systemd[1]: Finished motdgen.service. Dec 13 16:03:17.351899 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 16:03:17.352028 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 16:03:17.370237 jq[1668]: true Dec 13 16:03:17.371197 tar[1666]: linux-amd64/helm Dec 13 16:03:17.371574 dbus-daemon[1626]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 16:03:17.375976 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Dec 13 16:03:17.376189 systemd[1]: Condition check resulted in tcsd.service being skipped. 
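[Note: the extend-filesystems flow above is Flatcar growing the ROOT partition to fill the disk and then resizing ext4 online while it is mounted on /; the kernel line shows the jump from 553472 to 116605649 4k blocks, roughly 2 GiB to roughly 445 GiB, with the completion logged further below. A sketch of the equivalent manual steps, assuming the cloud-utils growpart tool is available (Flatcar drives the same operations from its own unit):

    growpart /dev/sda 9    # extend partition 9 to the end of the disk
    resize2fs /dev/sda9    # grow the mounted ext4 filesystem online
]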
Dec 13 16:03:17.380307 systemd[1]: Started update-engine.service. Dec 13 16:03:17.380475 env[1669]: time="2024-12-13T16:03:17.380392180Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 16:03:17.388641 env[1669]: time="2024-12-13T16:03:17.388625890Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 16:03:17.388694 env[1669]: time="2024-12-13T16:03:17.388684688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 16:03:17.389411 env[1669]: time="2024-12-13T16:03:17.389393813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 16:03:17.389445 env[1669]: time="2024-12-13T16:03:17.389410116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 16:03:17.390830 env[1669]: time="2024-12-13T16:03:17.390816978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 16:03:17.390865 env[1669]: time="2024-12-13T16:03:17.390830230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 16:03:17.390865 env[1669]: time="2024-12-13T16:03:17.390838163Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 16:03:17.390865 env[1669]: time="2024-12-13T16:03:17.390843595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 16:03:17.390933 env[1669]: time="2024-12-13T16:03:17.390886320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 16:03:17.392476 systemd[1]: Started systemd-logind.service. Dec 13 16:03:17.392915 env[1669]: time="2024-12-13T16:03:17.392904357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 16:03:17.393008 env[1669]: time="2024-12-13T16:03:17.392997099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 16:03:17.393038 env[1669]: time="2024-12-13T16:03:17.393007589Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 16:03:17.393038 env[1669]: time="2024-12-13T16:03:17.393033882Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 16:03:17.393089 env[1669]: time="2024-12-13T16:03:17.393042307Z" level=info msg="metadata content store policy set" policy=shared Dec 13 16:03:17.397253 bash[1699]: Updated "/home/core/.ssh/authorized_keys" Dec 13 16:03:17.400626 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
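[Note: update-engine is Flatcar's A/B update client; the "Next update check in 6m17s" line above is its randomized first poll, and locksmithd (started below with strategy "reboot") coordinates the reboot once an update is applied. Per Flatcar's documentation, status can be queried and the reboot policy pinned roughly like this:

    update_engine_client -status    # current update state
    # /etc/flatcar/update.conf
    REBOOT_STRATEGY=reboot          # the strategy locksmithd logs below
]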
Dec 13 16:03:17.401450 env[1669]: time="2024-12-13T16:03:17.401438918Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 16:03:17.401485 env[1669]: time="2024-12-13T16:03:17.401455149Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 16:03:17.401485 env[1669]: time="2024-12-13T16:03:17.401463433Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 16:03:17.401485 env[1669]: time="2024-12-13T16:03:17.401478907Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401560 env[1669]: time="2024-12-13T16:03:17.401487732Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401560 env[1669]: time="2024-12-13T16:03:17.401495920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401560 env[1669]: time="2024-12-13T16:03:17.401504657Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401560 env[1669]: time="2024-12-13T16:03:17.401516617Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401560 env[1669]: time="2024-12-13T16:03:17.401528910Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401560 env[1669]: time="2024-12-13T16:03:17.401537453Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401560 env[1669]: time="2024-12-13T16:03:17.401544595Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401560 env[1669]: time="2024-12-13T16:03:17.401550950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 16:03:17.401740 env[1669]: time="2024-12-13T16:03:17.401597351Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 16:03:17.401740 env[1669]: time="2024-12-13T16:03:17.401641786Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 16:03:17.401827 env[1669]: time="2024-12-13T16:03:17.401818402Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 16:03:17.401859 env[1669]: time="2024-12-13T16:03:17.401834290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.401859 env[1669]: time="2024-12-13T16:03:17.401842105Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401865465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401873110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401880077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401885928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401893058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401899809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401907885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401914152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.401977 env[1669]: time="2024-12-13T16:03:17.401921017Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 16:03:17.402180 env[1669]: time="2024-12-13T16:03:17.401991420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.402180 env[1669]: time="2024-12-13T16:03:17.402000920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.402180 env[1669]: time="2024-12-13T16:03:17.402007468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 16:03:17.402180 env[1669]: time="2024-12-13T16:03:17.402013637Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 16:03:17.402180 env[1669]: time="2024-12-13T16:03:17.402021199Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 16:03:17.402180 env[1669]: time="2024-12-13T16:03:17.402026923Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 16:03:17.402180 env[1669]: time="2024-12-13T16:03:17.402039576Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 16:03:17.402180 env[1669]: time="2024-12-13T16:03:17.402061384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 16:03:17.402385 env[1669]: time="2024-12-13T16:03:17.402168019Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 16:03:17.402385 env[1669]: time="2024-12-13T16:03:17.402198770Z" level=info msg="Connect containerd service" Dec 13 16:03:17.402385 env[1669]: time="2024-12-13T16:03:17.402215324Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402780121Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402871753Z" level=info msg="Start subscribing containerd event" Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402895212Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402917013Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402940110Z" level=info msg="containerd successfully booted in 0.022969s" Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402899411Z" level=info msg="Start recovering state" Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402974457Z" level=info msg="Start event monitor" Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402985362Z" level=info msg="Start snapshots syncer" Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402991224Z" level=info msg="Start cni network conf syncer for default" Dec 13 16:03:17.404283 env[1669]: time="2024-12-13T16:03:17.402995057Z" level=info msg="Start streaming server" Dec 13 16:03:17.410480 systemd[1]: Started containerd.service. Dec 13 16:03:17.418954 systemd[1]: Started locksmithd.service. Dec 13 16:03:17.425493 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 16:03:17.425578 systemd[1]: Reached target system-config.target. Dec 13 16:03:17.433454 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 16:03:17.433524 systemd[1]: Reached target user-config.target. Dec 13 16:03:17.476080 locksmithd[1713]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 16:03:17.485081 sshd_keygen[1658]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 16:03:17.497221 systemd[1]: Finished sshd-keygen.service. Dec 13 16:03:17.505512 systemd[1]: Starting issuegen.service... Dec 13 16:03:17.513642 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 16:03:17.513794 systemd[1]: Finished issuegen.service. Dec 13 16:03:17.522421 systemd[1]: Starting systemd-user-sessions.service... Dec 13 16:03:17.531683 systemd[1]: Finished systemd-user-sessions.service. Dec 13 16:03:17.541313 systemd[1]: Started getty@tty1.service. Dec 13 16:03:17.549208 systemd[1]: Started serial-getty@ttyS1.service. Dec 13 16:03:17.557514 systemd[1]: Reached target getty.target. Dec 13 16:03:17.631475 tar[1666]: linux-amd64/LICENSE Dec 13 16:03:17.631560 tar[1666]: linux-amd64/README.md Dec 13 16:03:17.634128 systemd[1]: Finished prepare-helm.service. Dec 13 16:03:17.702399 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Dec 13 16:03:17.731422 extend-filesystems[1646]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 16:03:17.731422 extend-filesystems[1646]: old_desc_blocks = 1, new_desc_blocks = 56 Dec 13 16:03:17.731422 extend-filesystems[1646]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Dec 13 16:03:17.768539 extend-filesystems[1630]: Resized filesystem in /dev/sda9 Dec 13 16:03:17.768539 extend-filesystems[1630]: Found sdb Dec 13 16:03:17.731779 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 16:03:17.731891 systemd[1]: Finished extend-filesystems.service. Dec 13 16:03:18.264736 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. Dec 13 16:03:18.584502 systemd-networkd[1398]: bond0: Gained IPv6LL Dec 13 16:03:18.584756 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. Dec 13 16:03:18.585671 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 16:03:18.595803 systemd[1]: Reached target network-online.target. 
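[Note: the long "Start cri plugin with config {...}" dump above is containerd 1.6 echoing its effective CRI configuration; the notable fields are SystemdCgroup:false (consistent with the cgroupsv1 taint noted earlier) and SandboxImage registry.k8s.io/pause:3.6. In config.toml form the relevant fragment looks roughly like this sketch:

    # /etc/containerd/config.toml (sketch of the fields visible in the dump)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
]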
Dec 13 16:03:18.605480 systemd[1]: Starting kubelet.service... Dec 13 16:03:19.250239 systemd[1]: Started kubelet.service. Dec 13 16:03:19.250553 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Dec 13 16:03:19.867790 kubelet[1752]: E1213 16:03:19.867723 1752 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 16:03:19.868940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 16:03:19.869024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 16:03:22.569664 login[1736]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 16:03:22.577291 systemd-logind[1659]: New session 1 of user core. Dec 13 16:03:22.578004 systemd[1]: Created slice user-500.slice. Dec 13 16:03:22.578558 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 16:03:22.578723 login[1735]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 16:03:22.580552 systemd-logind[1659]: New session 2 of user core. Dec 13 16:03:22.584181 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 16:03:22.584980 systemd[1]: Starting user@500.service... Dec 13 16:03:22.586908 (systemd)[1777]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:03:22.655987 systemd[1777]: Queued start job for default target default.target. Dec 13 16:03:22.656091 systemd[1777]: Reached target paths.target. Dec 13 16:03:22.656102 systemd[1777]: Reached target sockets.target. Dec 13 16:03:22.656110 systemd[1777]: Reached target timers.target. Dec 13 16:03:22.656117 systemd[1777]: Reached target basic.target. Dec 13 16:03:22.656136 systemd[1777]: Reached target default.target. Dec 13 16:03:22.656150 systemd[1777]: Startup finished in 66ms. Dec 13 16:03:22.656194 systemd[1]: Started user@500.service. Dec 13 16:03:22.656880 systemd[1]: Started session-1.scope. Dec 13 16:03:22.657301 systemd[1]: Started session-2.scope. Dec 13 16:03:23.321476 coreos-metadata[1620]: Dec 13 16:03:23.321 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 16:03:23.322349 coreos-metadata[1623]: Dec 13 16:03:23.321 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 16:03:24.321686 coreos-metadata[1620]: Dec 13 16:03:24.321 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 16:03:24.322624 coreos-metadata[1623]: Dec 13 16:03:24.321 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 16:03:24.727409 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Dec 13 16:03:24.734398 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Dec 13 16:03:25.375360 systemd[1]: Created slice system-sshd.slice. Dec 13 16:03:25.376187 systemd[1]: Started sshd@0-145.40.90.237:22-139.178.89.65:41574.service. 
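[Note: the kubelet crash below, "failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory", is expected at this stage: that file is normally written by kubeadm during init/join, so kubelet fails and is restarted until provisioning runs. The file it is waiting for is a KubeletConfiguration document; a minimal sketch, with fields beyond the header illustrative:

    # /var/lib/kubelet/config.yaml, normally generated by kubeadm
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
]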
Dec 13 16:03:25.424373 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 41574 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:03:25.426028 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:03:25.431807 systemd-logind[1659]: New session 3 of user core. Dec 13 16:03:25.433911 systemd[1]: Started session-3.scope. Dec 13 16:03:25.487130 systemd[1]: Started sshd@1-145.40.90.237:22-139.178.89.65:41584.service. Dec 13 16:03:25.525302 sshd[1804]: Accepted publickey for core from 139.178.89.65 port 41584 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:03:25.525967 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:03:25.528195 systemd-logind[1659]: New session 4 of user core. Dec 13 16:03:25.528842 systemd[1]: Started session-4.scope. Dec 13 16:03:25.579615 sshd[1804]: pam_unix(sshd:session): session closed for user core Dec 13 16:03:25.581490 systemd[1]: Started sshd@2-145.40.90.237:22-139.178.89.65:41600.service. Dec 13 16:03:25.581869 systemd[1]: sshd@1-145.40.90.237:22-139.178.89.65:41584.service: Deactivated successfully. Dec 13 16:03:25.582348 systemd-logind[1659]: Session 4 logged out. Waiting for processes to exit. Dec 13 16:03:25.582447 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 16:03:25.582999 systemd-logind[1659]: Removed session 4. Dec 13 16:03:25.618152 sshd[1810]: Accepted publickey for core from 139.178.89.65 port 41600 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:03:25.619394 sshd[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:03:25.623376 systemd-logind[1659]: New session 5 of user core. Dec 13 16:03:25.624835 systemd[1]: Started session-5.scope. Dec 13 16:03:25.683747 sshd[1810]: pam_unix(sshd:session): session closed for user core Dec 13 16:03:25.684969 systemd[1]: sshd@2-145.40.90.237:22-139.178.89.65:41600.service: Deactivated successfully. Dec 13 16:03:25.685543 systemd-logind[1659]: Session 5 logged out. Waiting for processes to exit. Dec 13 16:03:25.685566 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 16:03:25.686070 systemd-logind[1659]: Removed session 5. Dec 13 16:03:26.259161 coreos-metadata[1620]: Dec 13 16:03:26.259 INFO Fetch successful Dec 13 16:03:26.292100 unknown[1620]: wrote ssh authorized keys file for user: core Dec 13 16:03:26.326087 update-ssh-keys[1819]: Updated "/home/core/.ssh/authorized_keys" Dec 13 16:03:26.327488 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 16:03:26.446985 coreos-metadata[1623]: Dec 13 16:03:26.446 INFO Fetch successful Dec 13 16:03:26.526026 systemd[1]: Finished coreos-metadata.service. Dec 13 16:03:26.527041 systemd[1]: Started packet-phone-home.service. Dec 13 16:03:26.527235 systemd[1]: Reached target multi-user.target. Dec 13 16:03:26.528000 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 16:03:26.531842 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 16:03:26.531953 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 16:03:26.532090 systemd[1]: Startup finished in 24.412s (kernel) + 16.138s (userspace) = 40.551s. 
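[Note: the "Startup finished in 24.412s (kernel) + 16.138s (userspace) = 40.551s" summary above is systemd's own boot timing. The usual commands for breaking such numbers down after boot:

    systemd-analyze                  # the same one-line summary
    systemd-analyze blame            # per-unit initialization times
    systemd-analyze critical-chain   # the longest dependency chain
]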
Dec 13 16:03:26.914582 systemd[1]: packet-phone-home.service: Deactivated successfully. Dec 13 16:03:29.966630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 16:03:29.967423 systemd[1]: Stopped kubelet.service. Dec 13 16:03:29.970651 systemd[1]: Starting kubelet.service... Dec 13 16:03:30.169566 systemd[1]: Started kubelet.service. Dec 13 16:03:30.202768 kubelet[1839]: E1213 16:03:30.202694 1839 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 16:03:30.204953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 16:03:30.205034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 16:03:35.695413 systemd[1]: Started sshd@3-145.40.90.237:22-139.178.89.65:54462.service. Dec 13 16:03:35.731809 sshd[1858]: Accepted publickey for core from 139.178.89.65 port 54462 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:03:35.732839 sshd[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:03:35.736579 systemd-logind[1659]: New session 6 of user core. Dec 13 16:03:35.737802 systemd[1]: Started session-6.scope. Dec 13 16:03:35.794188 sshd[1858]: pam_unix(sshd:session): session closed for user core Dec 13 16:03:35.795994 systemd[1]: Started sshd@4-145.40.90.237:22-139.178.89.65:54474.service. Dec 13 16:03:35.796391 systemd[1]: sshd@3-145.40.90.237:22-139.178.89.65:54462.service: Deactivated successfully. Dec 13 16:03:35.796861 systemd-logind[1659]: Session 6 logged out. Waiting for processes to exit. Dec 13 16:03:35.796933 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 16:03:35.797433 systemd-logind[1659]: Removed session 6. Dec 13 16:03:35.832276 sshd[1864]: Accepted publickey for core from 139.178.89.65 port 54474 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:03:35.833164 sshd[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:03:35.836342 systemd-logind[1659]: New session 7 of user core. Dec 13 16:03:35.837409 systemd[1]: Started session-7.scope. Dec 13 16:03:35.890930 sshd[1864]: pam_unix(sshd:session): session closed for user core Dec 13 16:03:35.892730 systemd[1]: Started sshd@5-145.40.90.237:22-139.178.89.65:54488.service. Dec 13 16:03:35.893114 systemd[1]: sshd@4-145.40.90.237:22-139.178.89.65:54474.service: Deactivated successfully. Dec 13 16:03:35.893628 systemd-logind[1659]: Session 7 logged out. Waiting for processes to exit. Dec 13 16:03:35.893692 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 16:03:35.894121 systemd-logind[1659]: Removed session 7. Dec 13 16:03:35.929230 sshd[1872]: Accepted publickey for core from 139.178.89.65 port 54488 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:03:35.930446 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:03:35.934347 systemd-logind[1659]: New session 8 of user core.
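[Note: "Scheduled restart job, restart counter is at 1" shows systemd re-running kubelet roughly every ten seconds after each config-file failure, consistent with the restart policy kubelet service units typically ship; a sketch of that policy, with contents assumed rather than read from this host's unit:

    # kubelet.service fragment (typical; here the unit comes from the 'kubernetes' sysext)
    [Service]
    Restart=always
    RestartSec=10
]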
Dec 13 16:03:35.935859 systemd[1]: Started session-8.scope. Dec 13 16:03:35.994434 sshd[1872]: pam_unix(sshd:session): session closed for user core Dec 13 16:03:35.996111 systemd[1]: Started sshd@6-145.40.90.237:22-139.178.89.65:54502.service. Dec 13 16:03:35.996527 systemd[1]: sshd@5-145.40.90.237:22-139.178.89.65:54488.service: Deactivated successfully. Dec 13 16:03:35.997013 systemd-logind[1659]: Session 8 logged out. Waiting for processes to exit. Dec 13 16:03:35.997085 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 16:03:35.997527 systemd-logind[1659]: Removed session 8. Dec 13 16:03:36.033232 sshd[1879]: Accepted publickey for core from 139.178.89.65 port 54502 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:03:36.034444 sshd[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:03:36.038818 systemd-logind[1659]: New session 9 of user core. Dec 13 16:03:36.040345 systemd[1]: Started session-9.scope. Dec 13 16:03:36.141330 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 16:03:36.142020 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 16:03:36.167292 systemd[1]: Starting docker.service... Dec 13 16:03:36.183649 env[1899]: time="2024-12-13T16:03:36.183596043Z" level=info msg="Starting up" Dec 13 16:03:36.184223 env[1899]: time="2024-12-13T16:03:36.184185753Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 16:03:36.184223 env[1899]: time="2024-12-13T16:03:36.184194274Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 16:03:36.184223 env[1899]: time="2024-12-13T16:03:36.184205798Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 16:03:36.184223 env[1899]: time="2024-12-13T16:03:36.184212160Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 16:03:36.185031 env[1899]: time="2024-12-13T16:03:36.184993739Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 16:03:36.185031 env[1899]: time="2024-12-13T16:03:36.185001505Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 16:03:36.185031 env[1899]: time="2024-12-13T16:03:36.185008418Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 16:03:36.185031 env[1899]: time="2024-12-13T16:03:36.185013156Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 16:03:36.335161 env[1899]: time="2024-12-13T16:03:36.334983978Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 16:03:36.335161 env[1899]: time="2024-12-13T16:03:36.335024782Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 16:03:36.335526 env[1899]: time="2024-12-13T16:03:36.335259397Z" level=info msg="Loading containers: start." Dec 13 16:03:36.522441 kernel: Initializing XFRM netlink socket Dec 13 16:03:36.591893 env[1899]: time="2024-12-13T16:03:36.591813535Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 16:03:36.592636 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. 
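[Note: dockerd's "Default bridge (docker0) is assigned with an IP address 172.17.0.0/16" line below is informational; as the message says, the bridge subnet can be pinned with the --bip daemon option, or equivalently in the daemon config file:

    # /etc/docker/daemon.json (illustrative)
    {
      "bip": "172.17.0.1/16"
    }
]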
Dec 13 16:03:36.713764 systemd-networkd[1398]: docker0: Link UP Dec 13 16:03:36.756479 env[1899]: time="2024-12-13T16:03:36.756375501Z" level=info msg="Loading containers: done." Dec 13 16:03:36.776923 env[1899]: time="2024-12-13T16:03:36.776813606Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 16:03:36.777335 env[1899]: time="2024-12-13T16:03:36.777194433Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 16:03:36.777512 env[1899]: time="2024-12-13T16:03:36.777443459Z" level=info msg="Daemon has completed initialization" Dec 13 16:03:36.785111 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2236678927-merged.mount: Deactivated successfully. Dec 13 16:03:36.802917 systemd[1]: Started docker.service. Dec 13 16:03:36.819939 env[1899]: time="2024-12-13T16:03:36.819800477Z" level=info msg="API listen on /run/docker.sock" Dec 13 16:03:36.991452 systemd-timesyncd[1592]: Contacted time server [2606:6680:8:1::d14e:69b8]:123 (2.flatcar.pool.ntp.org). Dec 13 16:03:36.991495 systemd-timesyncd[1592]: Initial clock synchronization to Fri 2024-12-13 16:03:36.673078 UTC. Dec 13 16:03:38.050749 env[1669]: time="2024-12-13T16:03:38.050697738Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 16:03:38.619538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460101745.mount: Deactivated successfully. Dec 13 16:03:40.215070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 16:03:40.215177 systemd[1]: Stopped kubelet.service. Dec 13 16:03:40.216065 systemd[1]: Starting kubelet.service... Dec 13 16:03:40.395447 systemd[1]: Started kubelet.service. Dec 13 16:03:40.417811 kubelet[2068]: E1213 16:03:40.417770 2068 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 16:03:40.418969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 16:03:40.419047 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
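[Note: the PullImage lines here and below are containerd's CRI service fetching the Kubernetes v1.29.12 control-plane images; the caller is not logged, but a kubeadm-style pre-pull is the likely driver in this flow. Assuming crictl is installed, the same pulls can be issued manually against this socket:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.29.12
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
]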
Dec 13 16:03:40.644902 env[1669]: time="2024-12-13T16:03:40.644816111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:40.645477 env[1669]: time="2024-12-13T16:03:40.645439769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:40.646767 env[1669]: time="2024-12-13T16:03:40.646734983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:40.647614 env[1669]: time="2024-12-13T16:03:40.647572083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:40.648189 env[1669]: time="2024-12-13T16:03:40.648148549Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 16:03:40.653788 env[1669]: time="2024-12-13T16:03:40.653761419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 16:03:43.211720 env[1669]: time="2024-12-13T16:03:43.211653357Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:43.212459 env[1669]: time="2024-12-13T16:03:43.212406957Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:43.213511 env[1669]: time="2024-12-13T16:03:43.213468178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:43.215068 env[1669]: time="2024-12-13T16:03:43.215014176Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:43.215360 env[1669]: time="2024-12-13T16:03:43.215322916Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 16:03:43.221550 env[1669]: time="2024-12-13T16:03:43.221530853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 16:03:44.555153 env[1669]: time="2024-12-13T16:03:44.555078783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:44.555858 env[1669]: time="2024-12-13T16:03:44.555814911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:44.556741 env[1669]: 
time="2024-12-13T16:03:44.556699876Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:44.557696 env[1669]: time="2024-12-13T16:03:44.557673742Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:44.558672 env[1669]: time="2024-12-13T16:03:44.558658027Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 16:03:44.564165 env[1669]: time="2024-12-13T16:03:44.564136361Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 16:03:45.487162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700469976.mount: Deactivated successfully. Dec 13 16:03:45.828697 env[1669]: time="2024-12-13T16:03:45.828615682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:45.829162 env[1669]: time="2024-12-13T16:03:45.829129041Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:45.829783 env[1669]: time="2024-12-13T16:03:45.829735589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:45.830446 env[1669]: time="2024-12-13T16:03:45.830398472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:45.830734 env[1669]: time="2024-12-13T16:03:45.830684404Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 16:03:45.836512 env[1669]: time="2024-12-13T16:03:45.836495365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 16:03:46.374695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668993968.mount: Deactivated successfully. 
Dec 13 16:03:47.146590 env[1669]: time="2024-12-13T16:03:47.146535476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:47.147161 env[1669]: time="2024-12-13T16:03:47.147126458Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:47.148202 env[1669]: time="2024-12-13T16:03:47.148159795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:47.149593 env[1669]: time="2024-12-13T16:03:47.149554067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:47.149955 env[1669]: time="2024-12-13T16:03:47.149915242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 16:03:47.155808 env[1669]: time="2024-12-13T16:03:47.155769034Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 16:03:47.709936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331445707.mount: Deactivated successfully. Dec 13 16:03:47.710808 env[1669]: time="2024-12-13T16:03:47.710775596Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:47.711457 env[1669]: time="2024-12-13T16:03:47.711402362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:47.712214 env[1669]: time="2024-12-13T16:03:47.712169493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:47.713317 env[1669]: time="2024-12-13T16:03:47.713275316Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:47.713565 env[1669]: time="2024-12-13T16:03:47.713526120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 16:03:47.718953 env[1669]: time="2024-12-13T16:03:47.718912506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 16:03:48.296842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078562528.mount: Deactivated successfully. Dec 13 16:03:50.465081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 16:03:50.465213 systemd[1]: Stopped kubelet.service. Dec 13 16:03:50.466086 systemd[1]: Starting kubelet.service... Dec 13 16:03:50.655969 systemd[1]: Started kubelet.service. 
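The pause:3.9 image pulled a few entries above is the kubelet's --pod-infra-container-image: its only job is to hold each pod's Linux namespaces open while the real containers come and go. A hypothetical Go analog of the pause binary (the real one is a few lines of C that block in pause() and reap on SIGCHLD):

    package main

    import (
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        // Reap any orphans re-parented to PID 1 inside the sandbox.
        go func() {
            for {
                var ws syscall.WaitStatus
                if _, err := syscall.Wait4(-1, &ws, 0, nil); err != nil {
                    return // no children yet; real pause.c waits for SIGCHLD instead
                }
            }
        }()

        // Then just sleep until the sandbox is torn down.
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
        <-sig
    }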
Dec 13 16:03:50.685745 kubelet[2150]: E1213 16:03:50.685719 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 16:03:50.686787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 16:03:50.686900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 16:03:50.796168 env[1669]: time="2024-12-13T16:03:50.796086185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:50.796685 env[1669]: time="2024-12-13T16:03:50.796647611Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:50.797739 env[1669]: time="2024-12-13T16:03:50.797693857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:50.798678 env[1669]: time="2024-12-13T16:03:50.798638449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:50.799530 env[1669]: time="2024-12-13T16:03:50.799484143Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 16:03:52.851385 systemd[1]: Stopped kubelet.service. Dec 13 16:03:52.852636 systemd[1]: Starting kubelet.service... Dec 13 16:03:52.864487 systemd[1]: Reloading. Dec 13 16:03:52.898697 /usr/lib/systemd/system-generators/torcx-generator[2302]: time="2024-12-13T16:03:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 16:03:52.898714 /usr/lib/systemd/system-generators/torcx-generator[2302]: time="2024-12-13T16:03:52Z" level=info msg="torcx already run" Dec 13 16:03:52.957157 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 16:03:52.957167 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 16:03:52.970545 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 16:03:53.033354 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 16:03:53.033396 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 16:03:53.033528 systemd[1]: Stopped kubelet.service. Dec 13 16:03:53.034399 systemd[1]: Starting kubelet.service... Dec 13 16:03:53.230887 systemd[1]: Started kubelet.service. 
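The Reloading/Stopped/Starting/Started sequence above is a daemon-reload followed by a unit restart, presumably issued by the provisioning flow once the kubelet drop-ins are in place; this time the kubelet stays up, which implies /var/lib/kubelet/config.yaml has now been written. Driving the same restart programmatically over D-Bus might look like this (unit name taken from the log; the go-systemd dependency is an assumption):

    package main

    import (
        "context"
        "log"

        "github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
        ctx := context.Background()
        conn, err := dbus.NewWithContext(ctx) // connect to systemd's D-Bus API
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        done := make(chan string, 1)
        // "replace" queues the restart the same way `systemctl restart` does.
        if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
            log.Fatal(err)
        }
        log.Println("restart job result:", <-done) // "done" on success
    }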
Dec 13 16:03:53.268968 kubelet[2378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 16:03:53.268968 kubelet[2378]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 16:03:53.268968 kubelet[2378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 16:03:53.269206 kubelet[2378]: I1213 16:03:53.268996 2378 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 16:03:53.502719 kubelet[2378]: I1213 16:03:53.502652 2378 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 16:03:53.502719 kubelet[2378]: I1213 16:03:53.502665 2378 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 16:03:53.502793 kubelet[2378]: I1213 16:03:53.502777 2378 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 16:03:53.516031 kubelet[2378]: E1213 16:03:53.515993 2378 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://145.40.90.237:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.516984 kubelet[2378]: I1213 16:03:53.516947 2378 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 16:03:53.556222 kubelet[2378]: I1213 16:03:53.556175 2378 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 16:03:53.558246 kubelet[2378]: I1213 16:03:53.558211 2378 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 16:03:53.558308 kubelet[2378]: I1213 16:03:53.558302 2378 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 16:03:53.558365 kubelet[2378]: I1213 16:03:53.558315 2378 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 16:03:53.558365 kubelet[2378]: I1213 16:03:53.558321 2378 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 16:03:53.558402 kubelet[2378]: I1213 16:03:53.558373 2378 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:03:53.558421 kubelet[2378]: I1213 16:03:53.558419 2378 kubelet.go:396] "Attempting to sync node with API server" Dec 13 16:03:53.558437 kubelet[2378]: I1213 16:03:53.558428 2378 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 16:03:53.558454 kubelet[2378]: I1213 16:03:53.558440 2378 kubelet.go:312] "Adding apiserver pod source" Dec 13 16:03:53.558454 kubelet[2378]: I1213 16:03:53.558447 2378 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 16:03:53.559293 kubelet[2378]: I1213 16:03:53.559236 2378 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 16:03:53.602187 kubelet[2378]: I1213 16:03:53.602098 2378 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 16:03:53.602396 kubelet[2378]: W1213 16:03:53.602214 2378 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
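The connection-refused errors against 145.40.90.237:6443 are not fatal at this stage: the API server the kubelet is trying to reach is itself one of the static pods it is about to start from the directory logged at kubelet.go:301. The kubelet's file source watches that path with inotify; a sketch of an equivalent watch, assuming the fsnotify package:

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        // Static pod manifests dropped here become pods with no API server
        // involved; that is how the control plane below bootstraps itself.
        if err := w.Add("/etc/kubernetes/manifests"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                log.Println("manifest change:", ev.Op, ev.Name)
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }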
Dec 13 16:03:53.603767 kubelet[2378]: I1213 16:03:53.603699 2378 server.go:1256] "Started kubelet" Dec 13 16:03:53.603991 kubelet[2378]: I1213 16:03:53.603829 2378 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 16:03:53.604149 kubelet[2378]: I1213 16:03:53.604013 2378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 16:03:53.604699 kubelet[2378]: I1213 16:03:53.604651 2378 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 16:03:53.615695 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 16:03:53.615964 kubelet[2378]: I1213 16:03:53.615883 2378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 16:03:53.616158 kubelet[2378]: I1213 16:03:53.615969 2378 server.go:461] "Adding debug handlers to kubelet server" Dec 13 16:03:53.616158 kubelet[2378]: I1213 16:03:53.616061 2378 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 16:03:53.616473 kubelet[2378]: I1213 16:03:53.616209 2378 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 16:03:53.616922 kubelet[2378]: I1213 16:03:53.616816 2378 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 16:03:53.617071 kubelet[2378]: W1213 16:03:53.616806 2378 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://145.40.90.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.617221 kubelet[2378]: E1213 16:03:53.617154 2378 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.90.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.617381 kubelet[2378]: W1213 16:03:53.617228 2378 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://145.40.90.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.617536 kubelet[2378]: E1213 16:03:53.617405 2378 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.680474 kubelet[2378]: I1213 16:03:53.680378 2378 factory.go:221] Registration of the systemd container factory successfully Dec 13 16:03:53.681731 kubelet[2378]: E1213 16:03:53.681218 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-245bdeb2fc?timeout=10s\": dial tcp 145.40.90.237:6443: connect: connection refused" interval="200ms" Dec 13 16:03:53.682099 kubelet[2378]: W1213 16:03:53.681219 2378 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://145.40.90.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-245bdeb2fc&limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.682486 kubelet[2378]: I1213 16:03:53.681843 2378 factory.go:219] Registration of the crio container 
factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 16:03:53.682721 kubelet[2378]: E1213 16:03:53.682508 2378 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://145.40.90.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-245bdeb2fc&limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.684907 kubelet[2378]: E1213 16:03:53.684851 2378 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 16:03:53.685165 kubelet[2378]: I1213 16:03:53.684926 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 16:03:53.686219 kubelet[2378]: I1213 16:03:53.686175 2378 factory.go:221] Registration of the containerd container factory successfully Dec 13 16:03:53.687572 kubelet[2378]: I1213 16:03:53.687486 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 16:03:53.687572 kubelet[2378]: I1213 16:03:53.687557 2378 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 16:03:53.687855 kubelet[2378]: I1213 16:03:53.687600 2378 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 16:03:53.687855 kubelet[2378]: E1213 16:03:53.687712 2378 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 16:03:53.688578 kubelet[2378]: W1213 16:03:53.688459 2378 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://145.40.90.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.688825 kubelet[2378]: E1213 16:03:53.688609 2378 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:53.689215 kubelet[2378]: E1213 16:03:53.689172 2378 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.237:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.237:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-245bdeb2fc.1810c81166e09cf3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-245bdeb2fc,UID:ci-3510.3.6-a-245bdeb2fc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-245bdeb2fc,},FirstTimestamp:2024-12-13 16:03:53.603636467 +0000 UTC m=+0.370385228,LastTimestamp:2024-12-13 16:03:53.603636467 +0000 UTC m=+0.370385228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-245bdeb2fc,}" Dec 13 16:03:53.788795 kubelet[2378]: E1213 16:03:53.788557 2378 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 16:03:53.866092 kubelet[2378]: I1213 16:03:53.866027 2378 kubelet_node_status.go:73] "Attempting to register 
node" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:53.866927 kubelet[2378]: E1213 16:03:53.866873 2378 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.237:6443/api/v1/nodes\": dial tcp 145.40.90.237:6443: connect: connection refused" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:53.867442 kubelet[2378]: I1213 16:03:53.867399 2378 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 16:03:53.867442 kubelet[2378]: I1213 16:03:53.867439 2378 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 16:03:53.867715 kubelet[2378]: I1213 16:03:53.867479 2378 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:03:53.869619 kubelet[2378]: I1213 16:03:53.869532 2378 policy_none.go:49] "None policy: Start" Dec 13 16:03:53.870900 kubelet[2378]: I1213 16:03:53.870851 2378 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 16:03:53.871121 kubelet[2378]: I1213 16:03:53.870918 2378 state_mem.go:35] "Initializing new in-memory state store" Dec 13 16:03:53.880950 kubelet[2378]: I1213 16:03:53.880900 2378 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 16:03:53.881588 kubelet[2378]: I1213 16:03:53.881544 2378 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 16:03:53.882712 kubelet[2378]: E1213 16:03:53.882666 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-245bdeb2fc?timeout=10s\": dial tcp 145.40.90.237:6443: connect: connection refused" interval="400ms" Dec 13 16:03:53.883566 kubelet[2378]: E1213 16:03:53.883484 2378 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-245bdeb2fc\" not found" Dec 13 16:03:53.989824 kubelet[2378]: I1213 16:03:53.989708 2378 topology_manager.go:215] "Topology Admit Handler" podUID="d78d60b883bab6c735fa1bccac6146ce" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:53.998940 kubelet[2378]: I1213 16:03:53.998854 2378 topology_manager.go:215] "Topology Admit Handler" podUID="4012944dd6f70083a353439b08e31405" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.002539 kubelet[2378]: I1213 16:03:54.002461 2378 topology_manager.go:215] "Topology Admit Handler" podUID="240df6ef2e4a5400e4d36a1c8b16c02a" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019054 kubelet[2378]: I1213 16:03:54.018966 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d78d60b883bab6c735fa1bccac6146ce-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" (UID: \"d78d60b883bab6c735fa1bccac6146ce\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019054 kubelet[2378]: I1213 16:03:54.019059 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019406 kubelet[2378]: I1213 
16:03:54.019168 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019406 kubelet[2378]: I1213 16:03:54.019279 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019406 kubelet[2378]: I1213 16:03:54.019340 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/240df6ef2e4a5400e4d36a1c8b16c02a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-245bdeb2fc\" (UID: \"240df6ef2e4a5400e4d36a1c8b16c02a\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019706 kubelet[2378]: I1213 16:03:54.019427 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d78d60b883bab6c735fa1bccac6146ce-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" (UID: \"d78d60b883bab6c735fa1bccac6146ce\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019706 kubelet[2378]: I1213 16:03:54.019484 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d78d60b883bab6c735fa1bccac6146ce-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" (UID: \"d78d60b883bab6c735fa1bccac6146ce\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019706 kubelet[2378]: I1213 16:03:54.019558 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.019706 kubelet[2378]: I1213 16:03:54.019623 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.071635 kubelet[2378]: I1213 16:03:54.071439 2378 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.072235 kubelet[2378]: E1213 16:03:54.072147 2378 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.237:6443/api/v1/nodes\": dial tcp 145.40.90.237:6443: connect: connection refused" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.284655 kubelet[2378]: E1213 16:03:54.284544 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://145.40.90.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-245bdeb2fc?timeout=10s\": dial tcp 145.40.90.237:6443: connect: connection refused" interval="800ms" Dec 13 16:03:54.310420 env[1669]: time="2024-12-13T16:03:54.310274072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-245bdeb2fc,Uid:d78d60b883bab6c735fa1bccac6146ce,Namespace:kube-system,Attempt:0,}" Dec 13 16:03:54.316497 env[1669]: time="2024-12-13T16:03:54.316339836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-245bdeb2fc,Uid:4012944dd6f70083a353439b08e31405,Namespace:kube-system,Attempt:0,}" Dec 13 16:03:54.319624 env[1669]: time="2024-12-13T16:03:54.319500701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-245bdeb2fc,Uid:240df6ef2e4a5400e4d36a1c8b16c02a,Namespace:kube-system,Attempt:0,}" Dec 13 16:03:54.427655 kubelet[2378]: W1213 16:03:54.427392 2378 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://145.40.90.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:54.427655 kubelet[2378]: E1213 16:03:54.427531 2378 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.90.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:54.476016 kubelet[2378]: I1213 16:03:54.475922 2378 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.476644 kubelet[2378]: E1213 16:03:54.476569 2378 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.237:6443/api/v1/nodes\": dial tcp 145.40.90.237:6443: connect: connection refused" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:54.585306 kubelet[2378]: W1213 16:03:54.585137 2378 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://145.40.90.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:54.585306 kubelet[2378]: E1213 16:03:54.585284 2378 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.237:6443: connect: connection refused Dec 13 16:03:54.807168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848316580.mount: Deactivated successfully. 
Dec 13 16:03:54.808593 env[1669]: time="2024-12-13T16:03:54.808543901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.809502 env[1669]: time="2024-12-13T16:03:54.809461105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.810185 env[1669]: time="2024-12-13T16:03:54.810145335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.810746 env[1669]: time="2024-12-13T16:03:54.810700994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.811084 env[1669]: time="2024-12-13T16:03:54.811045222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.811481 env[1669]: time="2024-12-13T16:03:54.811441559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.812996 env[1669]: time="2024-12-13T16:03:54.812958334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.813306 env[1669]: time="2024-12-13T16:03:54.813272855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.814974 env[1669]: time="2024-12-13T16:03:54.814925463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.815783 env[1669]: time="2024-12-13T16:03:54.815740346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.816159 env[1669]: time="2024-12-13T16:03:54.816119762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.817077 env[1669]: time="2024-12-13T16:03:54.817038610Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:03:54.822614 env[1669]: time="2024-12-13T16:03:54.822582148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:03:54.822614 env[1669]: time="2024-12-13T16:03:54.822602680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:03:54.822614 env[1669]: time="2024-12-13T16:03:54.822609538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:03:54.822746 env[1669]: time="2024-12-13T16:03:54.822678496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6964571d5e800600851f36c213e8d9c8aadaa1a38bff4b0323b84bafaaa114d2 pid=2428 runtime=io.containerd.runc.v2 Dec 13 16:03:54.823814 env[1669]: time="2024-12-13T16:03:54.823784365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:03:54.823814 env[1669]: time="2024-12-13T16:03:54.823808947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:03:54.823909 env[1669]: time="2024-12-13T16:03:54.823819337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:03:54.823945 env[1669]: time="2024-12-13T16:03:54.823905021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/996139eb6c2510a1b4d87da83391a6d6af662d769bcad7accd5d6a0664909309 pid=2445 runtime=io.containerd.runc.v2 Dec 13 16:03:54.824142 env[1669]: time="2024-12-13T16:03:54.824120616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:03:54.824176 env[1669]: time="2024-12-13T16:03:54.824139550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:03:54.824176 env[1669]: time="2024-12-13T16:03:54.824149373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:03:54.824228 env[1669]: time="2024-12-13T16:03:54.824213772Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2699f0f9092456b56a8debc40e54ba263c4baec80b67c4c91ee143a2a951820e pid=2456 runtime=io.containerd.runc.v2 Dec 13 16:03:54.851687 env[1669]: time="2024-12-13T16:03:54.851662303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-245bdeb2fc,Uid:d78d60b883bab6c735fa1bccac6146ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"2699f0f9092456b56a8debc40e54ba263c4baec80b67c4c91ee143a2a951820e\"" Dec 13 16:03:54.852329 env[1669]: time="2024-12-13T16:03:54.852313435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-245bdeb2fc,Uid:240df6ef2e4a5400e4d36a1c8b16c02a,Namespace:kube-system,Attempt:0,} returns sandbox id \"996139eb6c2510a1b4d87da83391a6d6af662d769bcad7accd5d6a0664909309\"" Dec 13 16:03:54.852549 env[1669]: time="2024-12-13T16:03:54.852534779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-245bdeb2fc,Uid:4012944dd6f70083a353439b08e31405,Namespace:kube-system,Attempt:0,} returns sandbox id \"6964571d5e800600851f36c213e8d9c8aadaa1a38bff4b0323b84bafaaa114d2\"" Dec 13 16:03:54.853592 env[1669]: time="2024-12-13T16:03:54.853578890Z" level=info msg="CreateContainer within sandbox \"996139eb6c2510a1b4d87da83391a6d6af662d769bcad7accd5d6a0664909309\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 16:03:54.853592 env[1669]: time="2024-12-13T16:03:54.853580310Z" level=info msg="CreateContainer within sandbox \"2699f0f9092456b56a8debc40e54ba263c4baec80b67c4c91ee143a2a951820e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 16:03:54.853654 env[1669]: time="2024-12-13T16:03:54.853591883Z" level=info msg="CreateContainer within sandbox \"6964571d5e800600851f36c213e8d9c8aadaa1a38bff4b0323b84bafaaa114d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 16:03:54.860272 env[1669]: time="2024-12-13T16:03:54.860228249Z" level=info msg="CreateContainer within sandbox \"2699f0f9092456b56a8debc40e54ba263c4baec80b67c4c91ee143a2a951820e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bedbe6aeb7bfd6319ea40048e5511fa44ba41265d11f4fe2369455c60f802ec7\"" Dec 13 16:03:54.860509 env[1669]: time="2024-12-13T16:03:54.860444739Z" level=info msg="StartContainer for \"bedbe6aeb7bfd6319ea40048e5511fa44ba41265d11f4fe2369455c60f802ec7\"" Dec 13 16:03:54.861346 env[1669]: time="2024-12-13T16:03:54.861308570Z" level=info msg="CreateContainer within sandbox \"996139eb6c2510a1b4d87da83391a6d6af662d769bcad7accd5d6a0664909309\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6e383c3e50e2e6aaaf43d718cb0fec60fc4208c8ba5680bc244b1b6dff21a15e\"" Dec 13 16:03:54.861468 env[1669]: time="2024-12-13T16:03:54.861455332Z" level=info msg="StartContainer for \"6e383c3e50e2e6aaaf43d718cb0fec60fc4208c8ba5680bc244b1b6dff21a15e\"" Dec 13 16:03:54.862207 env[1669]: time="2024-12-13T16:03:54.862176550Z" level=info msg="CreateContainer within sandbox \"6964571d5e800600851f36c213e8d9c8aadaa1a38bff4b0323b84bafaaa114d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23e407fc593af5467bdc3bc30fdf31eb3fdf67fd4bf1822dce28cda58b26ad33\"" Dec 13 16:03:54.862333 env[1669]: 
time="2024-12-13T16:03:54.862320386Z" level=info msg="StartContainer for \"23e407fc593af5467bdc3bc30fdf31eb3fdf67fd4bf1822dce28cda58b26ad33\"" Dec 13 16:03:54.893605 env[1669]: time="2024-12-13T16:03:54.893582294Z" level=info msg="StartContainer for \"bedbe6aeb7bfd6319ea40048e5511fa44ba41265d11f4fe2369455c60f802ec7\" returns successfully" Dec 13 16:03:54.893699 env[1669]: time="2024-12-13T16:03:54.893632476Z" level=info msg="StartContainer for \"6e383c3e50e2e6aaaf43d718cb0fec60fc4208c8ba5680bc244b1b6dff21a15e\" returns successfully" Dec 13 16:03:54.895037 env[1669]: time="2024-12-13T16:03:54.895017122Z" level=info msg="StartContainer for \"23e407fc593af5467bdc3bc30fdf31eb3fdf67fd4bf1822dce28cda58b26ad33\" returns successfully" Dec 13 16:03:55.278105 kubelet[2378]: I1213 16:03:55.278090 2378 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:55.457431 kubelet[2378]: I1213 16:03:55.457357 2378 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:55.509779 kubelet[2378]: E1213 16:03:55.509702 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Dec 13 16:03:55.559750 kubelet[2378]: I1213 16:03:55.559681 2378 apiserver.go:52] "Watching apiserver" Dec 13 16:03:55.617061 kubelet[2378]: I1213 16:03:55.616959 2378 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 16:03:55.706467 kubelet[2378]: E1213 16:03:55.706370 2378 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.6-a-245bdeb2fc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:55.706467 kubelet[2378]: E1213 16:03:55.706457 2378 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:55.706833 kubelet[2378]: E1213 16:03:55.706499 2378 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:56.712157 kubelet[2378]: W1213 16:03:56.712055 2378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:03:58.895747 systemd[1]: Reloading. Dec 13 16:03:58.924974 /usr/lib/systemd/system-generators/torcx-generator[2707]: time="2024-12-13T16:03:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 16:03:58.925000 /usr/lib/systemd/system-generators/torcx-generator[2707]: time="2024-12-13T16:03:58Z" level=info msg="torcx already run" Dec 13 16:03:58.983762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 16:03:58.983771 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 13 16:03:58.996502 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 16:03:59.056571 systemd[1]: Stopping kubelet.service... Dec 13 16:03:59.070633 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 16:03:59.070784 systemd[1]: Stopped kubelet.service. Dec 13 16:03:59.071780 systemd[1]: Starting kubelet.service... Dec 13 16:03:59.285996 systemd[1]: Started kubelet.service. Dec 13 16:03:59.310041 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 16:03:59.310041 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 16:03:59.310041 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 16:03:59.310321 kubelet[2781]: I1213 16:03:59.310061 2781 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 16:03:59.312878 kubelet[2781]: I1213 16:03:59.312837 2781 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 16:03:59.312878 kubelet[2781]: I1213 16:03:59.312849 2781 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 16:03:59.312990 kubelet[2781]: I1213 16:03:59.312954 2781 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 16:03:59.313902 kubelet[2781]: I1213 16:03:59.313850 2781 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 16:03:59.314987 kubelet[2781]: I1213 16:03:59.314936 2781 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 16:03:59.337186 kubelet[2781]: I1213 16:03:59.337136 2781 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 16:03:59.337707 kubelet[2781]: I1213 16:03:59.337672 2781 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 16:03:59.337892 kubelet[2781]: I1213 16:03:59.337855 2781 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 16:03:59.337892 kubelet[2781]: I1213 16:03:59.337880 2781 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 16:03:59.337892 kubelet[2781]: I1213 16:03:59.337893 2781 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 16:03:59.338033 kubelet[2781]: I1213 16:03:59.337920 2781 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:03:59.338033 kubelet[2781]: I1213 16:03:59.338000 2781 kubelet.go:396] "Attempting to sync node with API server" Dec 13 16:03:59.338033 kubelet[2781]: I1213 16:03:59.338015 2781 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 16:03:59.337905 sudo[2806]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 16:03:59.338269 kubelet[2781]: I1213 16:03:59.338039 2781 kubelet.go:312] "Adding apiserver pod source" Dec 13 16:03:59.338269 kubelet[2781]: I1213 16:03:59.338053 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 16:03:59.338150 sudo[2806]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 16:03:59.338478 kubelet[2781]: I1213 16:03:59.338463 2781 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 16:03:59.338661 kubelet[2781]: I1213 16:03:59.338637 2781 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 16:03:59.338996 kubelet[2781]: I1213 16:03:59.338982 2781 server.go:1256] "Started kubelet" Dec 13 16:03:59.339074 kubelet[2781]: I1213 16:03:59.339027 2781 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 16:03:59.339133 kubelet[2781]: I1213 16:03:59.339116 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Dec 13 16:03:59.339269 kubelet[2781]: I1213 16:03:59.339257 2781 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 16:03:59.339853 kubelet[2781]: I1213 16:03:59.339842 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 16:03:59.339924 kubelet[2781]: I1213 16:03:59.339887 2781 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 16:03:59.339971 kubelet[2781]: I1213 16:03:59.339945 2781 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 16:03:59.339971 kubelet[2781]: E1213 16:03:59.339945 2781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-245bdeb2fc\" not found" Dec 13 16:03:59.340056 kubelet[2781]: I1213 16:03:59.340029 2781 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 16:03:59.340128 kubelet[2781]: I1213 16:03:59.340113 2781 server.go:461] "Adding debug handlers to kubelet server" Dec 13 16:03:59.340337 kubelet[2781]: E1213 16:03:59.340325 2781 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 16:03:59.342186 kubelet[2781]: I1213 16:03:59.342175 2781 factory.go:221] Registration of the containerd container factory successfully Dec 13 16:03:59.342186 kubelet[2781]: I1213 16:03:59.342184 2781 factory.go:221] Registration of the systemd container factory successfully Dec 13 16:03:59.342300 kubelet[2781]: I1213 16:03:59.342240 2781 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 16:03:59.348303 kubelet[2781]: I1213 16:03:59.348281 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 16:03:59.348951 kubelet[2781]: I1213 16:03:59.348939 2781 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 16:03:59.349011 kubelet[2781]: I1213 16:03:59.348960 2781 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 16:03:59.349011 kubelet[2781]: I1213 16:03:59.348973 2781 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 16:03:59.349011 kubelet[2781]: E1213 16:03:59.349006 2781 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 16:03:59.369322 kubelet[2781]: I1213 16:03:59.369305 2781 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 16:03:59.369322 kubelet[2781]: I1213 16:03:59.369322 2781 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 16:03:59.369436 kubelet[2781]: I1213 16:03:59.369335 2781 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:03:59.369465 kubelet[2781]: I1213 16:03:59.369459 2781 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 16:03:59.369487 kubelet[2781]: I1213 16:03:59.369479 2781 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 16:03:59.369507 kubelet[2781]: I1213 16:03:59.369487 2781 policy_none.go:49] "None policy: Start" Dec 13 16:03:59.369777 kubelet[2781]: I1213 16:03:59.369766 2781 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 16:03:59.369818 kubelet[2781]: I1213 16:03:59.369780 2781 state_mem.go:35] "Initializing new in-memory state store" Dec 13 16:03:59.369925 kubelet[2781]: I1213 16:03:59.369918 2781 state_mem.go:75] "Updated machine memory state" Dec 13 16:03:59.370837 kubelet[2781]: I1213 16:03:59.370796 2781 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 16:03:59.370976 kubelet[2781]: I1213 16:03:59.370945 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 16:03:59.442109 kubelet[2781]: I1213 16:03:59.442052 2781 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.449045 kubelet[2781]: I1213 16:03:59.449034 2781 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.449102 kubelet[2781]: I1213 16:03:59.449053 2781 topology_manager.go:215] "Topology Admit Handler" podUID="d78d60b883bab6c735fa1bccac6146ce" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.449102 kubelet[2781]: I1213 16:03:59.449073 2781 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.449102 kubelet[2781]: I1213 16:03:59.449097 2781 topology_manager.go:215] "Topology Admit Handler" podUID="4012944dd6f70083a353439b08e31405" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.449199 kubelet[2781]: I1213 16:03:59.449191 2781 topology_manager.go:215] "Topology Admit Handler" podUID="240df6ef2e4a5400e4d36a1c8b16c02a" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.451828 kubelet[2781]: W1213 16:03:59.451788 2781 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:03:59.453008 kubelet[2781]: W1213 16:03:59.452983 2781 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:03:59.454063 kubelet[2781]: 
W1213 16:03:59.454055 2781 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:03:59.454107 kubelet[2781]: E1213 16:03:59.454098 2781 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540557 kubelet[2781]: I1213 16:03:59.540509 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540557 kubelet[2781]: I1213 16:03:59.540532 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d78d60b883bab6c735fa1bccac6146ce-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" (UID: \"d78d60b883bab6c735fa1bccac6146ce\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540557 kubelet[2781]: I1213 16:03:59.540544 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540677 kubelet[2781]: I1213 16:03:59.540560 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540677 kubelet[2781]: I1213 16:03:59.540592 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540677 kubelet[2781]: I1213 16:03:59.540613 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/240df6ef2e4a5400e4d36a1c8b16c02a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-245bdeb2fc\" (UID: \"240df6ef2e4a5400e4d36a1c8b16c02a\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540677 kubelet[2781]: I1213 16:03:59.540629 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d78d60b883bab6c735fa1bccac6146ce-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" (UID: \"d78d60b883bab6c735fa1bccac6146ce\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540677 kubelet[2781]: I1213 16:03:59.540649 2781 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d78d60b883bab6c735fa1bccac6146ce-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" (UID: \"d78d60b883bab6c735fa1bccac6146ce\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.540773 kubelet[2781]: I1213 16:03:59.540672 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4012944dd6f70083a353439b08e31405-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" (UID: \"4012944dd6f70083a353439b08e31405\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:03:59.677278 sudo[2806]: pam_unix(sudo:session): session closed for user root Dec 13 16:04:00.339085 kubelet[2781]: I1213 16:04:00.338953 2781 apiserver.go:52] "Watching apiserver" Dec 13 16:04:00.364454 kubelet[2781]: W1213 16:04:00.364305 2781 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:04:00.364586 kubelet[2781]: E1213 16:04:00.364558 2781 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-245bdeb2fc\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:04:00.364586 kubelet[2781]: W1213 16:04:00.364576 2781 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:04:00.364657 kubelet[2781]: E1213 16:04:00.364622 2781 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-245bdeb2fc\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" Dec 13 16:04:00.375088 kubelet[2781]: I1213 16:04:00.375039 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-245bdeb2fc" podStartSLOduration=1.3750129119999999 podStartE2EDuration="1.375012912s" podCreationTimestamp="2024-12-13 16:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:04:00.374264894 +0000 UTC m=+1.086208721" watchObservedRunningTime="2024-12-13 16:04:00.375012912 +0000 UTC m=+1.086956736" Dec 13 16:04:00.379954 kubelet[2781]: I1213 16:04:00.379917 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-245bdeb2fc" podStartSLOduration=1.379900679 podStartE2EDuration="1.379900679s" podCreationTimestamp="2024-12-13 16:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:04:00.379850222 +0000 UTC m=+1.091794050" watchObservedRunningTime="2024-12-13 16:04:00.379900679 +0000 UTC m=+1.091844509" Dec 13 16:04:00.385035 kubelet[2781]: I1213 16:04:00.385022 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-245bdeb2fc" podStartSLOduration=4.385001989 podStartE2EDuration="4.385001989s" podCreationTimestamp="2024-12-13 16:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:04:00.384834586 +0000 UTC m=+1.096778414" 
watchObservedRunningTime="2024-12-13 16:04:00.385001989 +0000 UTC m=+1.096945817" Dec 13 16:04:00.440614 kubelet[2781]: I1213 16:04:00.440555 2781 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 16:04:01.032234 sudo[1884]: pam_unix(sudo:session): session closed for user root Dec 13 16:04:01.033045 sshd[1879]: pam_unix(sshd:session): session closed for user core Dec 13 16:04:01.034491 systemd[1]: sshd@6-145.40.90.237:22-139.178.89.65:54502.service: Deactivated successfully. Dec 13 16:04:01.035143 systemd-logind[1659]: Session 9 logged out. Waiting for processes to exit. Dec 13 16:04:01.035155 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 16:04:01.035862 systemd-logind[1659]: Removed session 9. Dec 13 16:04:02.982995 update_engine[1661]: I1213 16:04:02.982880 1661 update_attempter.cc:509] Updating boot flags... Dec 13 16:04:07.693303 systemd[1]: Started sshd@7-145.40.90.237:22-218.92.0.157:36047.service. Dec 13 16:04:11.220221 sshd[2938]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:04:13.536682 kubelet[2781]: I1213 16:04:13.536651 2781 topology_manager.go:215] "Topology Admit Handler" podUID="f24e3999-0804-4369-81e1-a6b3ab99ff4f" podNamespace="kube-system" podName="cilium-operator-5cc964979-2nggw" Dec 13 16:04:13.547074 sshd[2938]: Failed password for root from 218.92.0.157 port 36047 ssh2 Dec 13 16:04:13.582877 kubelet[2781]: I1213 16:04:13.582841 2781 topology_manager.go:215] "Topology Admit Handler" podUID="5c00387e-0298-4fc9-ab21-b97ac716f951" podNamespace="kube-system" podName="kube-proxy-8cv7k" Dec 13 16:04:13.591923 kubelet[2781]: I1213 16:04:13.591895 2781 topology_manager.go:215] "Topology Admit Handler" podUID="31f965b8-1d39-4ca5-8a9f-b928327d0911" podNamespace="kube-system" podName="cilium-d2fr7" Dec 13 16:04:13.629835 kubelet[2781]: I1213 16:04:13.629775 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fd6j\" (UniqueName: \"kubernetes.io/projected/f24e3999-0804-4369-81e1-a6b3ab99ff4f-kube-api-access-2fd6j\") pod \"cilium-operator-5cc964979-2nggw\" (UID: \"f24e3999-0804-4369-81e1-a6b3ab99ff4f\") " pod="kube-system/cilium-operator-5cc964979-2nggw" Dec 13 16:04:13.629835 kubelet[2781]: I1213 16:04:13.629816 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-bpf-maps\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630012 kubelet[2781]: I1213 16:04:13.629845 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-xtables-lock\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630012 kubelet[2781]: I1213 16:04:13.629867 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-hostproc\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630012 kubelet[2781]: I1213 16:04:13.629887 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cni-path\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630012 kubelet[2781]: I1213 16:04:13.629913 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr9c7\" (UniqueName: \"kubernetes.io/projected/31f965b8-1d39-4ca5-8a9f-b928327d0911-kube-api-access-wr9c7\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630012 kubelet[2781]: I1213 16:04:13.629936 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31f965b8-1d39-4ca5-8a9f-b928327d0911-clustermesh-secrets\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630012 kubelet[2781]: I1213 16:04:13.629977 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-host-proc-sys-kernel\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630225 kubelet[2781]: I1213 16:04:13.630023 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-etc-cni-netd\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630225 kubelet[2781]: I1213 16:04:13.630051 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-host-proc-sys-net\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630225 kubelet[2781]: I1213 16:04:13.630074 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c00387e-0298-4fc9-ab21-b97ac716f951-kube-proxy\") pod \"kube-proxy-8cv7k\" (UID: \"5c00387e-0298-4fc9-ab21-b97ac716f951\") " pod="kube-system/kube-proxy-8cv7k" Dec 13 16:04:13.630225 kubelet[2781]: I1213 16:04:13.630095 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-lib-modules\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630225 kubelet[2781]: I1213 16:04:13.630114 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31f965b8-1d39-4ca5-8a9f-b928327d0911-hubble-tls\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630225 kubelet[2781]: I1213 16:04:13.630139 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c00387e-0298-4fc9-ab21-b97ac716f951-lib-modules\") pod \"kube-proxy-8cv7k\" (UID: 
\"5c00387e-0298-4fc9-ab21-b97ac716f951\") " pod="kube-system/kube-proxy-8cv7k" Dec 13 16:04:13.630442 kubelet[2781]: I1213 16:04:13.630161 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-run\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630442 kubelet[2781]: I1213 16:04:13.630181 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c00387e-0298-4fc9-ab21-b97ac716f951-xtables-lock\") pod \"kube-proxy-8cv7k\" (UID: \"5c00387e-0298-4fc9-ab21-b97ac716f951\") " pod="kube-system/kube-proxy-8cv7k" Dec 13 16:04:13.630442 kubelet[2781]: I1213 16:04:13.630202 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chzl2\" (UniqueName: \"kubernetes.io/projected/5c00387e-0298-4fc9-ab21-b97ac716f951-kube-api-access-chzl2\") pod \"kube-proxy-8cv7k\" (UID: \"5c00387e-0298-4fc9-ab21-b97ac716f951\") " pod="kube-system/kube-proxy-8cv7k" Dec 13 16:04:13.630442 kubelet[2781]: I1213 16:04:13.630225 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f24e3999-0804-4369-81e1-a6b3ab99ff4f-cilium-config-path\") pod \"cilium-operator-5cc964979-2nggw\" (UID: \"f24e3999-0804-4369-81e1-a6b3ab99ff4f\") " pod="kube-system/cilium-operator-5cc964979-2nggw" Dec 13 16:04:13.630442 kubelet[2781]: I1213 16:04:13.630247 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-cgroup\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.630607 kubelet[2781]: I1213 16:04:13.630268 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-config-path\") pod \"cilium-d2fr7\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " pod="kube-system/cilium-d2fr7" Dec 13 16:04:13.646953 kubelet[2781]: I1213 16:04:13.646904 2781 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 16:04:13.647751 env[1669]: time="2024-12-13T16:04:13.647673230Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 16:04:13.648705 kubelet[2781]: I1213 16:04:13.648060 2781 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 16:04:13.841587 env[1669]: time="2024-12-13T16:04:13.841384903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-2nggw,Uid:f24e3999-0804-4369-81e1-a6b3ab99ff4f,Namespace:kube-system,Attempt:0,}" Dec 13 16:04:13.869666 env[1669]: time="2024-12-13T16:04:13.869502852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:04:13.869666 env[1669]: time="2024-12-13T16:04:13.869608996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:04:13.870128 env[1669]: time="2024-12-13T16:04:13.869651674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:04:13.870291 env[1669]: time="2024-12-13T16:04:13.870108937Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1 pid=2953 runtime=io.containerd.runc.v2 Dec 13 16:04:13.887745 env[1669]: time="2024-12-13T16:04:13.887615085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8cv7k,Uid:5c00387e-0298-4fc9-ab21-b97ac716f951,Namespace:kube-system,Attempt:0,}" Dec 13 16:04:13.895420 env[1669]: time="2024-12-13T16:04:13.895313815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d2fr7,Uid:31f965b8-1d39-4ca5-8a9f-b928327d0911,Namespace:kube-system,Attempt:0,}" Dec 13 16:04:13.910326 env[1669]: time="2024-12-13T16:04:13.910164923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:04:13.910326 env[1669]: time="2024-12-13T16:04:13.910283479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:04:13.910765 env[1669]: time="2024-12-13T16:04:13.910337062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:04:13.910946 env[1669]: time="2024-12-13T16:04:13.910759351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/983b8ae2b967815034abce3bf047f63d659385859af0f6a5ebebd3bccb92a9f4 pid=2980 runtime=io.containerd.runc.v2 Dec 13 16:04:13.918303 env[1669]: time="2024-12-13T16:04:13.918126846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:04:13.918303 env[1669]: time="2024-12-13T16:04:13.918238133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:04:13.918735 env[1669]: time="2024-12-13T16:04:13.918291800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:04:13.918860 env[1669]: time="2024-12-13T16:04:13.918703210Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d pid=3000 runtime=io.containerd.runc.v2 Dec 13 16:04:13.965134 env[1669]: time="2024-12-13T16:04:13.964825934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8cv7k,Uid:5c00387e-0298-4fc9-ab21-b97ac716f951,Namespace:kube-system,Attempt:0,} returns sandbox id \"983b8ae2b967815034abce3bf047f63d659385859af0f6a5ebebd3bccb92a9f4\"" Dec 13 16:04:13.968606 env[1669]: time="2024-12-13T16:04:13.968577317Z" level=info msg="CreateContainer within sandbox \"983b8ae2b967815034abce3bf047f63d659385859af0f6a5ebebd3bccb92a9f4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 16:04:13.968702 env[1669]: time="2024-12-13T16:04:13.968592985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d2fr7,Uid:31f965b8-1d39-4ca5-8a9f-b928327d0911,Namespace:kube-system,Attempt:0,} returns sandbox id \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\"" Dec 13 16:04:13.969650 env[1669]: time="2024-12-13T16:04:13.969630374Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 16:04:13.973952 env[1669]: time="2024-12-13T16:04:13.973923377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-2nggw,Uid:f24e3999-0804-4369-81e1-a6b3ab99ff4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1\"" Dec 13 16:04:13.975130 env[1669]: time="2024-12-13T16:04:13.975082635Z" level=info msg="CreateContainer within sandbox \"983b8ae2b967815034abce3bf047f63d659385859af0f6a5ebebd3bccb92a9f4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a9405ca3f64e906e1ad82cf5e520907a93162436789e2a1d891534e9de28555\"" Dec 13 16:04:13.975326 env[1669]: time="2024-12-13T16:04:13.975313396Z" level=info msg="StartContainer for \"3a9405ca3f64e906e1ad82cf5e520907a93162436789e2a1d891534e9de28555\"" Dec 13 16:04:14.000592 env[1669]: time="2024-12-13T16:04:14.000536792Z" level=info msg="StartContainer for \"3a9405ca3f64e906e1ad82cf5e520907a93162436789e2a1d891534e9de28555\" returns successfully" Dec 13 16:04:14.427924 kubelet[2781]: I1213 16:04:14.427846 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8cv7k" podStartSLOduration=1.427755586 podStartE2EDuration="1.427755586s" podCreationTimestamp="2024-12-13 16:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:04:14.42750851 +0000 UTC m=+15.139452442" watchObservedRunningTime="2024-12-13 16:04:14.427755586 +0000 UTC m=+15.139699462" Dec 13 16:04:17.430506 sshd[2938]: Failed password for root from 218.92.0.157 port 36047 ssh2 Dec 13 16:04:18.541285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584761791.mount: Deactivated successfully. 
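The kubelet's pod_startup_latency_tracker records above carry both wall-clock timestamps and monotonic offsets (the m=+N suffix is seconds since kubelet start). A minimal sketch for pulling the pod name and end-to-end startup duration out of such records; the regex is an assumption fitted to the exact quoting in this log, not a general journal parser:

    import re

    # Matches kubelet startup-latency records like the kube-proxy-8cv7k one
    # above: ... "Observed pod startup duration" pod="kube-system/..."
    # ... podStartE2EDuration="1.427755586s" ...
    PATTERN = re.compile(
        r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
        r'.*?podStartE2EDuration="(?P<e2e>[0-9.]+)s"'
    )

    def pod_startup_durations(lines):
        """Yield (pod, seconds) for every startup-latency record found."""
        for line in lines:
            m = PATTERN.search(line)
            if m:
                yield m.group("pod"), float(m.group("e2e"))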
Dec 13 16:04:20.443589 env[1669]: time="2024-12-13T16:04:20.443539229Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:04:20.444244 env[1669]: time="2024-12-13T16:04:20.444231645Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:04:20.445091 env[1669]: time="2024-12-13T16:04:20.445079089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:04:20.445474 env[1669]: time="2024-12-13T16:04:20.445459911Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 16:04:20.445959 env[1669]: time="2024-12-13T16:04:20.445926732Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 16:04:20.446801 env[1669]: time="2024-12-13T16:04:20.446765344Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 16:04:20.451110 env[1669]: time="2024-12-13T16:04:20.451086873Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\"" Dec 13 16:04:20.451422 env[1669]: time="2024-12-13T16:04:20.451349378Z" level=info msg="StartContainer for \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\"" Dec 13 16:04:20.473092 env[1669]: time="2024-12-13T16:04:20.473068159Z" level=info msg="StartContainer for \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\" returns successfully" Dec 13 16:04:20.613087 sshd[2938]: Failed password for root from 218.92.0.157 port 36047 ssh2 Dec 13 16:04:21.451092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c-rootfs.mount: Deactivated successfully. 
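The cilium images in this section are pulled by tag pinned to a digest (repo:tag@sha256:...), and containerd's "returns image reference" records then report the resolved image ID. A small helper for splitting such pinned references apart; this naive split is an illustration, not containerd's actual reference parser, and it ignores registries that carry a port number:

    def split_pinned_ref(ref: str):
        """Split 'repo:tag@sha256:digest' into its three parts (naive)."""
        repo_tag, _, digest = ref.partition("@")
        repo, _, tag = repo_tag.partition(":")
        return repo, tag or None, digest or None

    # e.g. the operator image pulled above:
    print(split_pinned_ref(
        "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"))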
Dec 13 16:04:22.079258 env[1669]: time="2024-12-13T16:04:22.079106490Z" level=info msg="shim disconnected" id=156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c Dec 13 16:04:22.079258 env[1669]: time="2024-12-13T16:04:22.079220054Z" level=warning msg="cleaning up after shim disconnected" id=156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c namespace=k8s.io Dec 13 16:04:22.079258 env[1669]: time="2024-12-13T16:04:22.079249492Z" level=info msg="cleaning up dead shim" Dec 13 16:04:22.094383 env[1669]: time="2024-12-13T16:04:22.094291549Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:04:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3281 runtime=io.containerd.runc.v2\n" Dec 13 16:04:22.420029 env[1669]: time="2024-12-13T16:04:22.419754305Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 16:04:22.429538 env[1669]: time="2024-12-13T16:04:22.429493010Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\"" Dec 13 16:04:22.430066 env[1669]: time="2024-12-13T16:04:22.430009339Z" level=info msg="StartContainer for \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\"" Dec 13 16:04:22.431678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531960224.mount: Deactivated successfully. Dec 13 16:04:22.450884 env[1669]: time="2024-12-13T16:04:22.450855315Z" level=info msg="StartContainer for \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\" returns successfully" Dec 13 16:04:22.457174 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 16:04:22.457321 systemd[1]: Stopped systemd-sysctl.service. Dec 13 16:04:22.457432 systemd[1]: Stopping systemd-sysctl.service... Dec 13 16:04:22.458348 systemd[1]: Starting systemd-sysctl.service... Dec 13 16:04:22.459823 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 16:04:22.462928 systemd[1]: Finished systemd-sysctl.service. Dec 13 16:04:22.465528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950-rootfs.mount: Deactivated successfully. 
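The "shim disconnected" / "cleaning up dead shim" warnings here are the normal teardown of run-to-completion init containers (mount-cgroup, then apply-sysctl-overwrites): each one exits, its shim goes away, and systemd reports the matching rootfs.mount as deactivated. A sketch that pairs start and shim-exit events by container ID; both regexes are assumptions fitted to this log's escaped quoting:

    import re
    from collections import defaultdict

    START = re.compile(r'StartContainer for .?"([0-9a-f]{64})')
    GONE = re.compile(r'shim disconnected.? id=([0-9a-f]{64})')

    def shim_lifecycle(lines):
        """Map container ID -> ordered lifecycle events seen in the log."""
        events = defaultdict(list)
        for line in lines:
            if (m := START.search(line)):
                events[m.group(1)].append("started")
            if (m := GONE.search(line)):
                events[m.group(1)].append("shim-exited")
        return dict(events)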
Dec 13 16:04:22.467662 env[1669]: time="2024-12-13T16:04:22.467638121Z" level=info msg="shim disconnected" id=3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950 Dec 13 16:04:22.467727 env[1669]: time="2024-12-13T16:04:22.467665804Z" level=warning msg="cleaning up after shim disconnected" id=3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950 namespace=k8s.io Dec 13 16:04:22.467727 env[1669]: time="2024-12-13T16:04:22.467673132Z" level=info msg="cleaning up dead shim" Dec 13 16:04:22.471172 env[1669]: time="2024-12-13T16:04:22.471156784Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:04:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3345 runtime=io.containerd.runc.v2\n" Dec 13 16:04:22.827461 sshd[2938]: Received disconnect from 218.92.0.157 port 36047:11: [preauth] Dec 13 16:04:22.827461 sshd[2938]: Disconnected from authenticating user root 218.92.0.157 port 36047 [preauth] Dec 13 16:04:22.827591 sshd[2938]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:04:22.828105 systemd[1]: sshd@7-145.40.90.237:22-218.92.0.157:36047.service: Deactivated successfully. Dec 13 16:04:23.264454 env[1669]: time="2024-12-13T16:04:23.264431184Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:04:23.264958 env[1669]: time="2024-12-13T16:04:23.264945993Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:04:23.265584 env[1669]: time="2024-12-13T16:04:23.265569200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:04:23.266125 env[1669]: time="2024-12-13T16:04:23.266109824Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 16:04:23.267096 env[1669]: time="2024-12-13T16:04:23.267068517Z" level=info msg="CreateContainer within sandbox \"d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 16:04:23.270854 env[1669]: time="2024-12-13T16:04:23.270809680Z" level=info msg="CreateContainer within sandbox \"d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\"" Dec 13 16:04:23.271128 env[1669]: time="2024-12-13T16:04:23.271091625Z" level=info msg="StartContainer for \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\"" Dec 13 16:04:23.291645 env[1669]: time="2024-12-13T16:04:23.291624127Z" level=info msg="StartContainer for \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\" returns successfully" Dec 13 16:04:23.426905 env[1669]: time="2024-12-13T16:04:23.426797937Z" level=info msg="CreateContainer within sandbox 
\"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 16:04:23.447779 env[1669]: time="2024-12-13T16:04:23.447647200Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\"" Dec 13 16:04:23.448758 env[1669]: time="2024-12-13T16:04:23.448643381Z" level=info msg="StartContainer for \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\"" Dec 13 16:04:23.456922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161065862.mount: Deactivated successfully. Dec 13 16:04:23.491424 env[1669]: time="2024-12-13T16:04:23.491395737Z" level=info msg="StartContainer for \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\" returns successfully" Dec 13 16:04:23.504724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326-rootfs.mount: Deactivated successfully. Dec 13 16:04:23.652132 env[1669]: time="2024-12-13T16:04:23.652039938Z" level=info msg="shim disconnected" id=36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326 Dec 13 16:04:23.652132 env[1669]: time="2024-12-13T16:04:23.652067675Z" level=warning msg="cleaning up after shim disconnected" id=36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326 namespace=k8s.io Dec 13 16:04:23.652132 env[1669]: time="2024-12-13T16:04:23.652073408Z" level=info msg="cleaning up dead shim" Dec 13 16:04:23.655848 env[1669]: time="2024-12-13T16:04:23.655801606Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:04:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3455 runtime=io.containerd.runc.v2\n" Dec 13 16:04:24.438046 env[1669]: time="2024-12-13T16:04:24.437932965Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 16:04:24.454076 env[1669]: time="2024-12-13T16:04:24.453946792Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\"" Dec 13 16:04:24.455107 env[1669]: time="2024-12-13T16:04:24.454994565Z" level=info msg="StartContainer for \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\"" Dec 13 16:04:24.477608 kubelet[2781]: I1213 16:04:24.477572 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-2nggw" podStartSLOduration=2.185763165 podStartE2EDuration="11.477523855s" podCreationTimestamp="2024-12-13 16:04:13 +0000 UTC" firstStartedPulling="2024-12-13 16:04:13.974501959 +0000 UTC m=+14.686445793" lastFinishedPulling="2024-12-13 16:04:23.266262657 +0000 UTC m=+23.978206483" observedRunningTime="2024-12-13 16:04:23.465302348 +0000 UTC m=+24.177246199" watchObservedRunningTime="2024-12-13 16:04:24.477523855 +0000 UTC m=+25.189467693" Dec 13 16:04:24.499974 env[1669]: time="2024-12-13T16:04:24.499946956Z" level=info msg="StartContainer for \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\" returns successfully" Dec 13 16:04:24.509076 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df-rootfs.mount: Deactivated successfully. Dec 13 16:04:24.509632 env[1669]: time="2024-12-13T16:04:24.509602391Z" level=info msg="shim disconnected" id=7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df Dec 13 16:04:24.509703 env[1669]: time="2024-12-13T16:04:24.509637903Z" level=warning msg="cleaning up after shim disconnected" id=7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df namespace=k8s.io Dec 13 16:04:24.509703 env[1669]: time="2024-12-13T16:04:24.509647405Z" level=info msg="cleaning up dead shim" Dec 13 16:04:24.513924 env[1669]: time="2024-12-13T16:04:24.513901583Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:04:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3509 runtime=io.containerd.runc.v2\n" Dec 13 16:04:25.450056 env[1669]: time="2024-12-13T16:04:25.450017846Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 16:04:25.456757 env[1669]: time="2024-12-13T16:04:25.456734005Z" level=info msg="CreateContainer within sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\"" Dec 13 16:04:25.457089 env[1669]: time="2024-12-13T16:04:25.457075495Z" level=info msg="StartContainer for \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\"" Dec 13 16:04:25.480615 env[1669]: time="2024-12-13T16:04:25.480561306Z" level=info msg="StartContainer for \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\" returns successfully" Dec 13 16:04:25.508069 kubelet[2781]: I1213 16:04:25.508056 2781 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 16:04:25.522056 kubelet[2781]: I1213 16:04:25.522038 2781 topology_manager.go:215] "Topology Admit Handler" podUID="0d04c636-903a-4418-a375-169c9224e8cc" podNamespace="kube-system" podName="coredns-76f75df574-s6jtb" Dec 13 16:04:25.523090 kubelet[2781]: I1213 16:04:25.523076 2781 topology_manager.go:215] "Topology Admit Handler" podUID="6cdc1c5a-510a-4f56-9d49-45d25220636c" podNamespace="kube-system" podName="coredns-76f75df574-shvf4" Dec 13 16:04:25.535361 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
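This Spectre V2 warning (repeated just below) fires because the cilium agent loads eBPF programs while the kernel still permits unprivileged BPF; on eIBRS CPUs the kernel flags that as a BHB data-leak risk. A quick, read-only check of the relevant sysctl, run on the node itself (standard Linux procfs path):

    from pathlib import Path

    # 0 = unprivileged BPF allowed (what triggers the warning above),
    # 1 or 2 = restricted / permanently restricted.
    knob = Path("/proc/sys/kernel/unprivileged_bpf_disabled")
    print("unprivileged_bpf_disabled =",
          knob.read_text().strip() if knob.exists() else "absent")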
Dec 13 16:04:25.616734 kubelet[2781]: I1213 16:04:25.616714 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdzn\" (UniqueName: \"kubernetes.io/projected/0d04c636-903a-4418-a375-169c9224e8cc-kube-api-access-dpdzn\") pod \"coredns-76f75df574-s6jtb\" (UID: \"0d04c636-903a-4418-a375-169c9224e8cc\") " pod="kube-system/coredns-76f75df574-s6jtb" Dec 13 16:04:25.616734 kubelet[2781]: I1213 16:04:25.616738 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d04c636-903a-4418-a375-169c9224e8cc-config-volume\") pod \"coredns-76f75df574-s6jtb\" (UID: \"0d04c636-903a-4418-a375-169c9224e8cc\") " pod="kube-system/coredns-76f75df574-s6jtb" Dec 13 16:04:25.616873 kubelet[2781]: I1213 16:04:25.616754 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cdc1c5a-510a-4f56-9d49-45d25220636c-config-volume\") pod \"coredns-76f75df574-shvf4\" (UID: \"6cdc1c5a-510a-4f56-9d49-45d25220636c\") " pod="kube-system/coredns-76f75df574-shvf4" Dec 13 16:04:25.616873 kubelet[2781]: I1213 16:04:25.616772 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98ns9\" (UniqueName: \"kubernetes.io/projected/6cdc1c5a-510a-4f56-9d49-45d25220636c-kube-api-access-98ns9\") pod \"coredns-76f75df574-shvf4\" (UID: \"6cdc1c5a-510a-4f56-9d49-45d25220636c\") " pod="kube-system/coredns-76f75df574-shvf4" Dec 13 16:04:25.683427 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 16:04:25.825095 env[1669]: time="2024-12-13T16:04:25.825006734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s6jtb,Uid:0d04c636-903a-4418-a375-169c9224e8cc,Namespace:kube-system,Attempt:0,}" Dec 13 16:04:25.825601 env[1669]: time="2024-12-13T16:04:25.825532856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-shvf4,Uid:6cdc1c5a-510a-4f56-9d49-45d25220636c,Namespace:kube-system,Attempt:0,}" Dec 13 16:04:26.470873 kubelet[2781]: I1213 16:04:26.470779 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d2fr7" podStartSLOduration=6.994205788 podStartE2EDuration="13.470677104s" podCreationTimestamp="2024-12-13 16:04:13 +0000 UTC" firstStartedPulling="2024-12-13 16:04:13.969302002 +0000 UTC m=+14.681245844" lastFinishedPulling="2024-12-13 16:04:20.445773324 +0000 UTC m=+21.157717160" observedRunningTime="2024-12-13 16:04:26.470015717 +0000 UTC m=+27.181959636" watchObservedRunningTime="2024-12-13 16:04:26.470677104 +0000 UTC m=+27.182620985" Dec 13 16:04:27.284185 systemd-networkd[1398]: cilium_host: Link UP Dec 13 16:04:27.284298 systemd-networkd[1398]: cilium_net: Link UP Dec 13 16:04:27.291389 systemd-networkd[1398]: cilium_net: Gained carrier Dec 13 16:04:27.298608 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 16:04:27.298647 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 16:04:27.298648 systemd-networkd[1398]: cilium_host: Gained carrier Dec 13 16:04:27.344157 systemd-networkd[1398]: cilium_vxlan: Link UP Dec 13 16:04:27.344161 systemd-networkd[1398]: cilium_vxlan: Gained carrier Dec 13 16:04:27.477517 kernel: NET: Registered PF_ALG protocol family Dec 13 16:04:27.800463 
systemd-networkd[1398]: cilium_net: Gained IPv6LL Dec 13 16:04:27.960430 systemd-networkd[1398]: cilium_host: Gained IPv6LL Dec 13 16:04:27.974512 systemd-networkd[1398]: lxc_health: Link UP Dec 13 16:04:27.994370 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 16:04:27.994469 systemd-networkd[1398]: lxc_health: Gained carrier Dec 13 16:04:28.372159 systemd-networkd[1398]: lxc0dcb85199327: Link UP Dec 13 16:04:28.397411 kernel: eth0: renamed from tmp05d56 Dec 13 16:04:28.426925 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 16:04:28.427011 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0dcb85199327: link becomes ready Dec 13 16:04:28.447363 kernel: eth0: renamed from tmpbd91d Dec 13 16:04:28.457710 systemd-networkd[1398]: lxc0dcb85199327: Gained carrier Dec 13 16:04:28.465386 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf5e6308004da: link becomes ready Dec 13 16:04:28.465770 systemd-networkd[1398]: tmpbd91d: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:04:28.465841 systemd-networkd[1398]: tmpbd91d: Cannot enable IPv6, ignoring: No such file or directory Dec 13 16:04:28.465863 systemd-networkd[1398]: tmpbd91d: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Dec 13 16:04:28.465871 systemd-networkd[1398]: tmpbd91d: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Dec 13 16:04:28.465877 systemd-networkd[1398]: tmpbd91d: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Dec 13 16:04:28.465885 systemd-networkd[1398]: tmpbd91d: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Dec 13 16:04:28.466034 systemd-networkd[1398]: lxcf5e6308004da: Link UP Dec 13 16:04:28.466297 systemd-networkd[1398]: lxcf5e6308004da: Gained carrier Dec 13 16:04:28.856454 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Dec 13 16:04:29.816468 systemd-networkd[1398]: lxc0dcb85199327: Gained IPv6LL Dec 13 16:04:29.944499 systemd-networkd[1398]: lxc_health: Gained IPv6LL Dec 13 16:04:30.200445 systemd-networkd[1398]: lxcf5e6308004da: Gained IPv6LL Dec 13 16:04:30.739235 env[1669]: time="2024-12-13T16:04:30.739198607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:04:30.739235 env[1669]: time="2024-12-13T16:04:30.739226036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:04:30.739235 env[1669]: time="2024-12-13T16:04:30.739234404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:04:30.739518 env[1669]: time="2024-12-13T16:04:30.739302135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd91d1ba3be7f853973f5ba855bec662811a8bc4ab9e36888d2da31632a1dfc7 pid=4198 runtime=io.containerd.runc.v2 Dec 13 16:04:30.739518 env[1669]: time="2024-12-13T16:04:30.739329663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:04:30.739518 env[1669]: time="2024-12-13T16:04:30.739350548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:04:30.739518 env[1669]: time="2024-12-13T16:04:30.739366687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:04:30.739518 env[1669]: time="2024-12-13T16:04:30.739426929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05d5631aad075f63309a99a9036069f8d6808e8880643ce0530f370de9f557d2 pid=4199 runtime=io.containerd.runc.v2 Dec 13 16:04:30.767363 env[1669]: time="2024-12-13T16:04:30.767334713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s6jtb,Uid:0d04c636-903a-4418-a375-169c9224e8cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"05d5631aad075f63309a99a9036069f8d6808e8880643ce0530f370de9f557d2\"" Dec 13 16:04:30.767470 env[1669]: time="2024-12-13T16:04:30.767389910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-shvf4,Uid:6cdc1c5a-510a-4f56-9d49-45d25220636c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd91d1ba3be7f853973f5ba855bec662811a8bc4ab9e36888d2da31632a1dfc7\"" Dec 13 16:04:30.768510 env[1669]: time="2024-12-13T16:04:30.768493294Z" level=info msg="CreateContainer within sandbox \"05d5631aad075f63309a99a9036069f8d6808e8880643ce0530f370de9f557d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 16:04:30.768510 env[1669]: time="2024-12-13T16:04:30.768494771Z" level=info msg="CreateContainer within sandbox \"bd91d1ba3be7f853973f5ba855bec662811a8bc4ab9e36888d2da31632a1dfc7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 16:04:30.773507 env[1669]: time="2024-12-13T16:04:30.773487944Z" level=info msg="CreateContainer within sandbox \"05d5631aad075f63309a99a9036069f8d6808e8880643ce0530f370de9f557d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3c1bb1277f08492fb9e5611cf00c1df4909fc5e21698bf986591f2a3366910c\"" Dec 13 16:04:30.773737 env[1669]: time="2024-12-13T16:04:30.773721413Z" level=info msg="StartContainer for \"f3c1bb1277f08492fb9e5611cf00c1df4909fc5e21698bf986591f2a3366910c\"" Dec 13 16:04:30.774380 env[1669]: time="2024-12-13T16:04:30.774365230Z" level=info msg="CreateContainer within sandbox \"bd91d1ba3be7f853973f5ba855bec662811a8bc4ab9e36888d2da31632a1dfc7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c09405d4a45f997924ea02b78387519534fa1617c46b3b59789722400d3ecbe0\"" Dec 13 16:04:30.774521 env[1669]: time="2024-12-13T16:04:30.774509050Z" level=info msg="StartContainer for \"c09405d4a45f997924ea02b78387519534fa1617c46b3b59789722400d3ecbe0\"" Dec 13 16:04:30.811003 env[1669]: time="2024-12-13T16:04:30.810973054Z" level=info msg="StartContainer for \"c09405d4a45f997924ea02b78387519534fa1617c46b3b59789722400d3ecbe0\" returns successfully" Dec 13 16:04:30.811098 env[1669]: time="2024-12-13T16:04:30.811026126Z" level=info msg="StartContainer for \"f3c1bb1277f08492fb9e5611cf00c1df4909fc5e21698bf986591f2a3366910c\" returns successfully" Dec 13 16:04:31.477982 kubelet[2781]: I1213 16:04:31.477964 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s6jtb" podStartSLOduration=18.477934609 podStartE2EDuration="18.477934609s" podCreationTimestamp="2024-12-13 16:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:04:31.477568411 +0000 UTC 
m=+32.189512237" watchObservedRunningTime="2024-12-13 16:04:31.477934609 +0000 UTC m=+32.189878432" Dec 13 16:04:31.489805 kubelet[2781]: I1213 16:04:31.489756 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-shvf4" podStartSLOduration=18.489727251 podStartE2EDuration="18.489727251s" podCreationTimestamp="2024-12-13 16:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:04:31.489429348 +0000 UTC m=+32.201373184" watchObservedRunningTime="2024-12-13 16:04:31.489727251 +0000 UTC m=+32.201671081" Dec 13 16:05:40.879046 systemd[1]: Started sshd@8-145.40.90.237:22-218.92.0.157:33969.service. Dec 13 16:05:42.352569 sshd[4370]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:05:43.835587 sshd[4370]: Failed password for root from 218.92.0.157 port 33969 ssh2 Dec 13 16:05:44.135771 sshd[4370]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Dec 13 16:05:46.226566 sshd[4370]: Failed password for root from 218.92.0.157 port 33969 ssh2 Dec 13 16:05:49.730675 systemd[1]: Started sshd@9-145.40.90.237:22-218.92.0.229:36978.service. Dec 13 16:05:52.564349 sshd[4370]: Failed password for root from 218.92.0.157 port 33969 ssh2 Dec 13 16:05:53.892463 sshd[4370]: Received disconnect from 218.92.0.157 port 33969:11: [preauth] Dec 13 16:05:53.892463 sshd[4370]: Disconnected from authenticating user root 218.92.0.157 port 33969 [preauth] Dec 13 16:05:53.892942 sshd[4370]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:05:53.894653 systemd[1]: sshd@8-145.40.90.237:22-218.92.0.157:33969.service: Deactivated successfully. Dec 13 16:07:09.035856 systemd[1]: Started sshd@10-145.40.90.237:22-218.92.0.157:48169.service. Dec 13 16:07:12.963241 sshd[4389]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:07:14.135563 sshd[4389]: Failed password for root from 218.92.0.157 port 48169 ssh2 Dec 13 16:07:16.386801 sshd[4389]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Dec 13 16:07:18.106527 sshd[4389]: Failed password for root from 218.92.0.157 port 48169 ssh2 Dec 13 16:07:19.751234 sshd[4389]: Received disconnect from 218.92.0.157 port 48169:11: [preauth] Dec 13 16:07:19.751234 sshd[4389]: Disconnected from authenticating user root 218.92.0.157 port 48169 [preauth] Dec 13 16:07:19.751864 sshd[4389]: PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:07:19.752907 systemd[1]: sshd@10-145.40.90.237:22-218.92.0.157:48169.service: Deactivated successfully. Dec 13 16:07:49.737607 systemd[1]: sshd@9-145.40.90.237:22-218.92.0.229:36978.service: Deactivated successfully. Dec 13 16:08:37.496386 systemd[1]: Started sshd@11-145.40.90.237:22-218.92.0.157:63417.service. 
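By this point 218.92.0.157 has cycled through several connect/fail/disconnect rounds against sshd, with pam_faillock temporarily locking the root account in between. A sketch that tallies the "Failed password" records per source and user; the regex assumes OpenSSH's stock failure message:

    import re
    from collections import Counter

    FAIL = re.compile(
        r"Failed password for (?:invalid user )?(\S+) from (\S+) port \d+")

    def failures_by_source(lines):
        """Count failed password attempts keyed by (source IP, user)."""
        hits = Counter()
        for line in lines:
            m = FAIL.search(line)
            if m:
                user, ip = m.groups()
                hits[(ip, user)] += 1
        return hits.most_common()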
Dec 13 16:08:40.079074 sshd[4407]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:08:41.999514 sshd[4407]: Failed password for root from 218.92.0.157 port 63417 ssh2 Dec 13 16:08:45.798122 sshd[4407]: Failed password for root from 218.92.0.157 port 63417 ssh2 Dec 13 16:08:46.129603 sshd[4407]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Dec 13 16:08:48.541555 sshd[4407]: Failed password for root from 218.92.0.157 port 63417 ssh2 Dec 13 16:08:57.447709 sshd[4407]: Connection reset by authenticating user root 218.92.0.157 port 63417 [preauth] Dec 13 16:08:57.448225 sshd[4407]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:08:57.450194 systemd[1]: sshd@11-145.40.90.237:22-218.92.0.157:63417.service: Deactivated successfully. Dec 13 16:09:35.057235 update_engine[1661]: I1213 16:09:35.057157 1661 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 16:09:35.057235 update_engine[1661]: I1213 16:09:35.057236 1661 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 16:09:35.064743 update_engine[1661]: I1213 16:09:35.064667 1661 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 16:09:35.065676 update_engine[1661]: I1213 16:09:35.065603 1661 omaha_request_params.cc:62] Current group set to lts Dec 13 16:09:35.065983 update_engine[1661]: I1213 16:09:35.065907 1661 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 16:09:35.065983 update_engine[1661]: I1213 16:09:35.065927 1661 update_attempter.cc:643] Scheduling an action processor start. 
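The update_engine run that starts here posts its Omaha request to the literal hostname "disabled", so curl's later "Could not resolve host: disabled" is expected: that is what the updater looks like when the update server has been switched off (on Flatcar, conventionally via SERVER=disabled in /etc/flatcar/update.conf). A sketch that surfaces the effective setting; the simplified parsing and error handling are assumptions:

    from pathlib import Path

    def update_server(conf="/etc/flatcar/update.conf"):
        """Return the configured SERVER= value, or None if unset/missing."""
        try:
            text = Path(conf).read_text()
        except FileNotFoundError:
            return None
        for line in text.splitlines():
            if line.startswith("SERVER="):
                return line.split("=", 1)[1].strip()
        return None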
Dec 13 16:09:35.065983 update_engine[1661]: I1213 16:09:35.065961 1661 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 16:09:35.066383 update_engine[1661]: I1213 16:09:35.066044 1661 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 16:09:35.066383 update_engine[1661]: I1213 16:09:35.066190 1661 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 16:09:35.066383 update_engine[1661]: I1213 16:09:35.066206 1661 omaha_request_action.cc:271] Request: Dec 13 16:09:35.066383 update_engine[1661]: [Omaha request XML body not preserved in this capture] Dec 13 16:09:35.066383 update_engine[1661]: I1213 16:09:35.066217 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:09:35.067489 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 16:09:35.069571 update_engine[1661]: I1213 16:09:35.069489 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:09:35.069774 update_engine[1661]: E1213 16:09:35.069715 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:09:35.069895 update_engine[1661]: I1213 16:09:35.069869 1661 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 16:09:44.991051 update_engine[1661]: I1213 16:09:44.990935 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:09:44.992097 update_engine[1661]: I1213 16:09:44.991468 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:09:44.992097 update_engine[1661]: E1213 16:09:44.991695 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:09:44.992097 update_engine[1661]: I1213 16:09:44.991873 1661 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 16:09:50.874491 systemd[1]: Started sshd@12-145.40.90.237:22-2.57.122.33:59570.service. Dec 13 16:09:51.053502 sshd[4419]: kex_exchange_identification: Connection closed by remote host Dec 13 16:09:51.053502 sshd[4419]: Connection closed by 2.57.122.33 port 59570 Dec 13 16:09:51.054924 systemd[1]: sshd@12-145.40.90.237:22-2.57.122.33:59570.service: Deactivated successfully.
Dec 13 16:09:54.991137 update_engine[1661]: I1213 16:09:54.991019 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:09:54.992041 update_engine[1661]: I1213 16:09:54.991552 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:09:54.992041 update_engine[1661]: E1213 16:09:54.991755 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:09:54.992041 update_engine[1661]: I1213 16:09:54.991927 1661 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 16:10:04.991192 update_engine[1661]: I1213 16:10:04.991072 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:10:04.992111 update_engine[1661]: I1213 16:10:04.991634 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:10:04.992111 update_engine[1661]: E1213 16:10:04.991844 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:10:04.992111 update_engine[1661]: I1213 16:10:04.991989 1661 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 16:10:04.992111 update_engine[1661]: I1213 16:10:04.992003 1661 omaha_request_action.cc:621] Omaha request response: Dec 13 16:10:04.992541 update_engine[1661]: E1213 16:10:04.992146 1661 omaha_request_action.cc:640] Omaha request network transfer failed. Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992175 1661 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992185 1661 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992194 1661 update_attempter.cc:306] Processing Done. Dec 13 16:10:04.992541 update_engine[1661]: E1213 16:10:04.992218 1661 update_attempter.cc:619] Update failed. Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992228 1661 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992237 1661 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992246 1661 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
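The fetcher above gives up after its retries are exhausted ("Transfer resulted in an error (0), 0 bytes downloaded" below); the retry records land roughly ten seconds apart. A sketch that measures the gaps between "No HTTP response, retry N" records, assuming this journal's "Dec 13 16:09:35.069895" timestamp prefix and supplying the year by hand (syslog-style stamps omit it):

    import re
    from datetime import datetime

    RETRY = re.compile(r"^(\w+ \d+ [\d:.]+) .*No HTTP response, retry \d+")

    def retry_gaps(lines, year=2024):
        """Return seconds elapsed between consecutive retry records."""
        stamps = [
            datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
            for line in lines if (m := RETRY.match(line))
        ]
        return [round((b - a).total_seconds(), 3)
                for a, b in zip(stamps, stamps[1:])]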
Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992419 1661 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992468 1661 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992480 1661 omaha_request_action.cc:271] Request: Dec 13 16:10:04.992541 update_engine[1661]: [Omaha request XML body not preserved in this capture] Dec 13 16:10:04.992541 update_engine[1661]: I1213 16:10:04.992489 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:10:04.994128 update_engine[1661]: I1213 16:10:04.992792 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:10:04.994128 update_engine[1661]: E1213 16:10:04.992956 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:10:04.994128 update_engine[1661]: I1213 16:10:04.993089 1661 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 16:10:04.994128 update_engine[1661]: I1213 16:10:04.993104 1661 omaha_request_action.cc:621] Omaha request response: Dec 13 16:10:04.994128 update_engine[1661]: I1213 16:10:04.993114 1661 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 16:10:04.994128 update_engine[1661]: I1213 16:10:04.993122 1661 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 16:10:04.994128 update_engine[1661]: I1213 16:10:04.993129 1661 update_attempter.cc:306] Processing Done. Dec 13 16:10:04.994128 update_engine[1661]: I1213 16:10:04.993139 1661 update_attempter.cc:310] Error event sent. Dec 13 16:10:04.994128 update_engine[1661]: I1213 16:10:04.993163 1661 update_check_scheduler.cc:74] Next update check in 42m10s Dec 13 16:10:04.994969 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 16:10:04.994969 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 16:10:08.523971 systemd[1]: Started sshd@13-145.40.90.237:22-218.92.0.157:57601.service. Dec 13 16:10:14.135501 sshd[4428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 16:10:16.627803 sshd[4428]: Failed password for root from 218.92.0.157 port 57601 ssh2 Dec 13 16:10:17.378886 sshd[4428]: Received disconnect from 218.92.0.157 port 57601:11: [preauth] Dec 13 16:10:17.378886 sshd[4428]: Disconnected from authenticating user root 218.92.0.157 port 57601 [preauth] Dec 13 16:10:17.381335 systemd[1]: sshd@13-145.40.90.237:22-218.92.0.157:57601.service: Deactivated successfully. Dec 13 16:10:18.778556 systemd[1]: Started sshd@14-145.40.90.237:22-139.178.89.65:34042.service. Dec 13 16:10:18.840924 sshd[4434]: Accepted publickey for core from 139.178.89.65 port 34042 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:18.841704 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:18.844865 systemd-logind[1659]: New session 10 of user core. Dec 13 16:10:18.845385 systemd[1]: Started session-10.scope.
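The "Accepted publickey ... SHA256:izw9ns..." record above logs the standard OpenSSH key fingerprint: the unpadded base64 of the SHA-256 digest of the raw key blob. Recomputing it from an authorized_keys-style line needs only the stdlib; the "<type> <base64-blob> [comment]" input format is the usual one:

    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        """OpenSSH-style SHA256 fingerprint of a public key line."""
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")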
Dec 13 16:10:18.936967 sshd[4434]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:18.938595 systemd[1]: sshd@14-145.40.90.237:22-139.178.89.65:34042.service: Deactivated successfully. Dec 13 16:10:18.939177 systemd-logind[1659]: Session 10 logged out. Waiting for processes to exit. Dec 13 16:10:18.939226 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 16:10:18.939795 systemd-logind[1659]: Removed session 10. Dec 13 16:10:23.940188 systemd[1]: Started sshd@15-145.40.90.237:22-139.178.89.65:34048.service. Dec 13 16:10:23.978170 sshd[4463]: Accepted publickey for core from 139.178.89.65 port 34048 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:23.979137 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:23.982741 systemd-logind[1659]: New session 11 of user core. Dec 13 16:10:23.983468 systemd[1]: Started session-11.scope. Dec 13 16:10:24.073253 sshd[4463]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:24.074683 systemd[1]: sshd@15-145.40.90.237:22-139.178.89.65:34048.service: Deactivated successfully. Dec 13 16:10:24.075270 systemd-logind[1659]: Session 11 logged out. Waiting for processes to exit. Dec 13 16:10:24.075283 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 16:10:24.075793 systemd-logind[1659]: Removed session 11. Dec 13 16:10:29.079582 systemd[1]: Started sshd@16-145.40.90.237:22-139.178.89.65:35950.service. Dec 13 16:10:29.116260 sshd[4491]: Accepted publickey for core from 139.178.89.65 port 35950 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:29.117408 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:29.121328 systemd-logind[1659]: New session 12 of user core. Dec 13 16:10:29.122286 systemd[1]: Started session-12.scope. Dec 13 16:10:29.211900 sshd[4491]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:29.213293 systemd[1]: sshd@16-145.40.90.237:22-139.178.89.65:35950.service: Deactivated successfully. Dec 13 16:10:29.213975 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 16:10:29.213979 systemd-logind[1659]: Session 12 logged out. Waiting for processes to exit. Dec 13 16:10:29.214505 systemd-logind[1659]: Removed session 12. Dec 13 16:10:34.219508 systemd[1]: Started sshd@17-145.40.90.237:22-139.178.89.65:35964.service. Dec 13 16:10:34.259585 sshd[4518]: Accepted publickey for core from 139.178.89.65 port 35964 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:34.260603 sshd[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:34.263985 systemd-logind[1659]: New session 13 of user core. Dec 13 16:10:34.264792 systemd[1]: Started session-13.scope. Dec 13 16:10:34.374292 sshd[4518]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:34.375705 systemd[1]: Started sshd@18-145.40.90.237:22-139.178.89.65:35968.service. Dec 13 16:10:34.375988 systemd[1]: sshd@17-145.40.90.237:22-139.178.89.65:35964.service: Deactivated successfully. Dec 13 16:10:34.376601 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 16:10:34.376634 systemd-logind[1659]: Session 13 logged out. Waiting for processes to exit. Dec 13 16:10:34.377132 systemd-logind[1659]: Removed session 13. 
Dec 13 16:10:34.411878 sshd[4543]: Accepted publickey for core from 139.178.89.65 port 35968 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:34.412705 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:34.415751 systemd-logind[1659]: New session 14 of user core. Dec 13 16:10:34.416298 systemd[1]: Started session-14.scope. Dec 13 16:10:34.568502 sshd[4543]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:34.570071 systemd[1]: Started sshd@19-145.40.90.237:22-139.178.89.65:35984.service. Dec 13 16:10:34.570421 systemd[1]: sshd@18-145.40.90.237:22-139.178.89.65:35968.service: Deactivated successfully. Dec 13 16:10:34.571012 systemd-logind[1659]: Session 14 logged out. Waiting for processes to exit. Dec 13 16:10:34.571051 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 16:10:34.571531 systemd-logind[1659]: Removed session 14. Dec 13 16:10:34.607980 sshd[4567]: Accepted publickey for core from 139.178.89.65 port 35984 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:34.611861 sshd[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:34.622858 systemd-logind[1659]: New session 15 of user core. Dec 13 16:10:34.625315 systemd[1]: Started session-15.scope. Dec 13 16:10:34.773403 sshd[4567]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:34.775008 systemd[1]: sshd@19-145.40.90.237:22-139.178.89.65:35984.service: Deactivated successfully. Dec 13 16:10:34.775700 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 16:10:34.775713 systemd-logind[1659]: Session 15 logged out. Waiting for processes to exit. Dec 13 16:10:34.776283 systemd-logind[1659]: Removed session 15. Dec 13 16:10:39.779519 systemd[1]: Started sshd@20-145.40.90.237:22-139.178.89.65:53306.service. Dec 13 16:10:39.816053 sshd[4597]: Accepted publickey for core from 139.178.89.65 port 53306 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:39.817244 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:39.821367 systemd-logind[1659]: New session 16 of user core. Dec 13 16:10:39.822650 systemd[1]: Started session-16.scope. Dec 13 16:10:39.914163 sshd[4597]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:39.915594 systemd[1]: sshd@20-145.40.90.237:22-139.178.89.65:53306.service: Deactivated successfully. Dec 13 16:10:39.916179 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 16:10:39.916205 systemd-logind[1659]: Session 16 logged out. Waiting for processes to exit. Dec 13 16:10:39.916752 systemd-logind[1659]: Removed session 16. Dec 13 16:10:44.920835 systemd[1]: Started sshd@21-145.40.90.237:22-139.178.89.65:53314.service. Dec 13 16:10:44.957572 sshd[4626]: Accepted publickey for core from 139.178.89.65 port 53314 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:44.958714 sshd[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:44.962775 systemd-logind[1659]: New session 17 of user core. Dec 13 16:10:44.963631 systemd[1]: Started session-17.scope. Dec 13 16:10:45.055794 sshd[4626]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:45.057293 systemd[1]: Started sshd@22-145.40.90.237:22-139.178.89.65:53324.service. Dec 13 16:10:45.057607 systemd[1]: sshd@21-145.40.90.237:22-139.178.89.65:53314.service: Deactivated successfully. 
Dec 13 16:10:45.058162 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 16:10:45.058170 systemd-logind[1659]: Session 17 logged out. Waiting for processes to exit. Dec 13 16:10:45.058699 systemd-logind[1659]: Removed session 17. Dec 13 16:10:45.093319 sshd[4649]: Accepted publickey for core from 139.178.89.65 port 53324 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:45.094170 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:45.097158 systemd-logind[1659]: New session 18 of user core. Dec 13 16:10:45.097695 systemd[1]: Started session-18.scope. Dec 13 16:10:45.338082 sshd[4649]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:45.344692 systemd[1]: Started sshd@23-145.40.90.237:22-139.178.89.65:53334.service. Dec 13 16:10:45.346132 systemd[1]: sshd@22-145.40.90.237:22-139.178.89.65:53324.service: Deactivated successfully. Dec 13 16:10:45.346764 systemd-logind[1659]: Session 18 logged out. Waiting for processes to exit. Dec 13 16:10:45.346803 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 16:10:45.347328 systemd-logind[1659]: Removed session 18. Dec 13 16:10:45.382276 sshd[4676]: Accepted publickey for core from 139.178.89.65 port 53334 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:45.383390 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:45.387221 systemd-logind[1659]: New session 19 of user core. Dec 13 16:10:45.388011 systemd[1]: Started session-19.scope. Dec 13 16:10:46.466860 sshd[4676]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:46.470277 systemd[1]: Started sshd@24-145.40.90.237:22-139.178.89.65:53338.service. Dec 13 16:10:46.470957 systemd[1]: sshd@23-145.40.90.237:22-139.178.89.65:53334.service: Deactivated successfully. Dec 13 16:10:46.472377 systemd-logind[1659]: Session 19 logged out. Waiting for processes to exit. Dec 13 16:10:46.472486 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 16:10:46.473526 systemd-logind[1659]: Removed session 19. Dec 13 16:10:46.519483 sshd[4708]: Accepted publickey for core from 139.178.89.65 port 53338 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:46.522942 sshd[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:46.533851 systemd-logind[1659]: New session 20 of user core. Dec 13 16:10:46.536255 systemd[1]: Started session-20.scope. Dec 13 16:10:46.752440 sshd[4708]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:46.754026 systemd[1]: Started sshd@25-145.40.90.237:22-139.178.89.65:53340.service. Dec 13 16:10:46.754355 systemd[1]: sshd@24-145.40.90.237:22-139.178.89.65:53338.service: Deactivated successfully. Dec 13 16:10:46.754871 systemd-logind[1659]: Session 20 logged out. Waiting for processes to exit. Dec 13 16:10:46.754907 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 16:10:46.755261 systemd-logind[1659]: Removed session 20. Dec 13 16:10:46.790696 sshd[4734]: Accepted publickey for core from 139.178.89.65 port 53340 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:46.791869 sshd[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:46.795785 systemd-logind[1659]: New session 21 of user core. Dec 13 16:10:46.796722 systemd[1]: Started session-21.scope. 
Dec 13 16:10:46.940709 sshd[4734]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:46.942145 systemd[1]: sshd@25-145.40.90.237:22-139.178.89.65:53340.service: Deactivated successfully. Dec 13 16:10:46.942789 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 16:10:46.942829 systemd-logind[1659]: Session 21 logged out. Waiting for processes to exit. Dec 13 16:10:46.943315 systemd-logind[1659]: Removed session 21. Dec 13 16:10:51.949160 systemd[1]: Started sshd@26-145.40.90.237:22-139.178.89.65:57428.service. Dec 13 16:10:51.989991 sshd[4767]: Accepted publickey for core from 139.178.89.65 port 57428 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:51.993573 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:52.004735 systemd-logind[1659]: New session 22 of user core. Dec 13 16:10:52.007225 systemd[1]: Started session-22.scope. Dec 13 16:10:52.112358 sshd[4767]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:52.113931 systemd[1]: sshd@26-145.40.90.237:22-139.178.89.65:57428.service: Deactivated successfully. Dec 13 16:10:52.114624 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 16:10:52.114665 systemd-logind[1659]: Session 22 logged out. Waiting for processes to exit. Dec 13 16:10:52.115235 systemd-logind[1659]: Removed session 22. Dec 13 16:10:57.119166 systemd[1]: Started sshd@27-145.40.90.237:22-139.178.89.65:57444.service. Dec 13 16:10:57.155654 sshd[4791]: Accepted publickey for core from 139.178.89.65 port 57444 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:10:57.156997 sshd[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:10:57.160998 systemd-logind[1659]: New session 23 of user core. Dec 13 16:10:57.161810 systemd[1]: Started session-23.scope. Dec 13 16:10:57.253453 sshd[4791]: pam_unix(sshd:session): session closed for user core Dec 13 16:10:57.254907 systemd[1]: sshd@27-145.40.90.237:22-139.178.89.65:57444.service: Deactivated successfully. Dec 13 16:10:57.255592 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 16:10:57.255631 systemd-logind[1659]: Session 23 logged out. Waiting for processes to exit. Dec 13 16:10:57.256200 systemd-logind[1659]: Removed session 23. Dec 13 16:11:02.259966 systemd[1]: Started sshd@28-145.40.90.237:22-139.178.89.65:58798.service. Dec 13 16:11:02.297115 sshd[4820]: Accepted publickey for core from 139.178.89.65 port 58798 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:11:02.300387 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:11:02.311200 systemd-logind[1659]: New session 24 of user core. Dec 13 16:11:02.313603 systemd[1]: Started session-24.scope. Dec 13 16:11:02.403788 sshd[4820]: pam_unix(sshd:session): session closed for user core Dec 13 16:11:02.405281 systemd[1]: sshd@28-145.40.90.237:22-139.178.89.65:58798.service: Deactivated successfully. Dec 13 16:11:02.405974 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 16:11:02.406012 systemd-logind[1659]: Session 24 logged out. Waiting for processes to exit. Dec 13 16:11:02.406572 systemd-logind[1659]: Removed session 24. Dec 13 16:11:07.409983 systemd[1]: Started sshd@29-145.40.90.237:22-139.178.89.65:58806.service. 
Dec 13 16:11:07.446508 sshd[4844]: Accepted publickey for core from 139.178.89.65 port 58806 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:11:07.447533 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:11:07.451333 systemd-logind[1659]: New session 25 of user core. Dec 13 16:11:07.452110 systemd[1]: Started session-25.scope. Dec 13 16:11:07.542515 sshd[4844]: pam_unix(sshd:session): session closed for user core Dec 13 16:11:07.544153 systemd[1]: Started sshd@30-145.40.90.237:22-139.178.89.65:58818.service. Dec 13 16:11:07.544510 systemd[1]: sshd@29-145.40.90.237:22-139.178.89.65:58806.service: Deactivated successfully. Dec 13 16:11:07.545075 systemd-logind[1659]: Session 25 logged out. Waiting for processes to exit. Dec 13 16:11:07.545080 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 16:11:07.545565 systemd-logind[1659]: Removed session 25. Dec 13 16:11:07.580582 sshd[4868]: Accepted publickey for core from 139.178.89.65 port 58818 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:11:07.581407 sshd[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:11:07.583987 systemd-logind[1659]: New session 26 of user core. Dec 13 16:11:07.584522 systemd[1]: Started session-26.scope. Dec 13 16:11:08.892242 env[1669]: time="2024-12-13T16:11:08.892205558Z" level=info msg="StopContainer for \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\" with timeout 30 (s)" Dec 13 16:11:08.892538 env[1669]: time="2024-12-13T16:11:08.892458863Z" level=info msg="Stop container \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\" with signal terminated" Dec 13 16:11:08.902071 env[1669]: time="2024-12-13T16:11:08.902018621Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 16:11:08.904705 env[1669]: time="2024-12-13T16:11:08.904689014Z" level=info msg="StopContainer for \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\" with timeout 2 (s)" Dec 13 16:11:08.904790 env[1669]: time="2024-12-13T16:11:08.904779007Z" level=info msg="Stop container \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\" with signal terminated" Dec 13 16:11:08.905222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c-rootfs.mount: Deactivated successfully. 
Dec 13 16:11:08.907621 systemd-networkd[1398]: lxc_health: Link DOWN Dec 13 16:11:08.907624 systemd-networkd[1398]: lxc_health: Lost carrier Dec 13 16:11:08.912282 env[1669]: time="2024-12-13T16:11:08.912260802Z" level=info msg="shim disconnected" id=589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c Dec 13 16:11:08.912347 env[1669]: time="2024-12-13T16:11:08.912284433Z" level=warning msg="cleaning up after shim disconnected" id=589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c namespace=k8s.io Dec 13 16:11:08.912347 env[1669]: time="2024-12-13T16:11:08.912290443Z" level=info msg="cleaning up dead shim" Dec 13 16:11:08.915876 env[1669]: time="2024-12-13T16:11:08.915860072Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4936 runtime=io.containerd.runc.v2\n" Dec 13 16:11:08.916489 env[1669]: time="2024-12-13T16:11:08.916448238Z" level=info msg="StopContainer for \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\" returns successfully" Dec 13 16:11:08.916843 env[1669]: time="2024-12-13T16:11:08.916799201Z" level=info msg="StopPodSandbox for \"d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1\"" Dec 13 16:11:08.916843 env[1669]: time="2024-12-13T16:11:08.916834938Z" level=info msg="Container to stop \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:11:08.918313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1-shm.mount: Deactivated successfully. Dec 13 16:11:08.928262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1-rootfs.mount: Deactivated successfully. Dec 13 16:11:08.928573 env[1669]: time="2024-12-13T16:11:08.928541744Z" level=info msg="shim disconnected" id=d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1 Dec 13 16:11:08.928633 env[1669]: time="2024-12-13T16:11:08.928576535Z" level=warning msg="cleaning up after shim disconnected" id=d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1 namespace=k8s.io Dec 13 16:11:08.928633 env[1669]: time="2024-12-13T16:11:08.928583406Z" level=info msg="cleaning up dead shim" Dec 13 16:11:08.932266 env[1669]: time="2024-12-13T16:11:08.932219869Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4969 runtime=io.containerd.runc.v2\n" Dec 13 16:11:08.932451 env[1669]: time="2024-12-13T16:11:08.932382438Z" level=info msg="TearDown network for sandbox \"d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1\" successfully" Dec 13 16:11:08.932451 env[1669]: time="2024-12-13T16:11:08.932395446Z" level=info msg="StopPodSandbox for \"d55fb63c332238060585a4adb6cc4adf48954c8ab2ff73e78ddeaaee43fa62e1\" returns successfully" Dec 13 16:11:08.970014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00-rootfs.mount: Deactivated successfully. 
Dec 13 16:11:08.970234 env[1669]: time="2024-12-13T16:11:08.970162141Z" level=info msg="shim disconnected" id=f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00 Dec 13 16:11:08.970234 env[1669]: time="2024-12-13T16:11:08.970211978Z" level=warning msg="cleaning up after shim disconnected" id=f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00 namespace=k8s.io Dec 13 16:11:08.970234 env[1669]: time="2024-12-13T16:11:08.970226271Z" level=info msg="cleaning up dead shim" Dec 13 16:11:08.975554 env[1669]: time="2024-12-13T16:11:08.975526260Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4997 runtime=io.containerd.runc.v2\n" Dec 13 16:11:08.976359 env[1669]: time="2024-12-13T16:11:08.976332119Z" level=info msg="StopContainer for \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\" returns successfully" Dec 13 16:11:08.976746 env[1669]: time="2024-12-13T16:11:08.976688459Z" level=info msg="StopPodSandbox for \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\"" Dec 13 16:11:08.976746 env[1669]: time="2024-12-13T16:11:08.976738929Z" level=info msg="Container to stop \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:11:08.976849 env[1669]: time="2024-12-13T16:11:08.976753873Z" level=info msg="Container to stop \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:11:08.976849 env[1669]: time="2024-12-13T16:11:08.976763849Z" level=info msg="Container to stop \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:11:08.976849 env[1669]: time="2024-12-13T16:11:08.976773159Z" level=info msg="Container to stop \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:11:08.976849 env[1669]: time="2024-12-13T16:11:08.976782839Z" level=info msg="Container to stop \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:11:08.991852 env[1669]: time="2024-12-13T16:11:08.991782931Z" level=info msg="shim disconnected" id=0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d Dec 13 16:11:08.991852 env[1669]: time="2024-12-13T16:11:08.991834861Z" level=warning msg="cleaning up after shim disconnected" id=0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d namespace=k8s.io Dec 13 16:11:08.991852 env[1669]: time="2024-12-13T16:11:08.991850365Z" level=info msg="cleaning up dead shim" Dec 13 16:11:08.997252 env[1669]: time="2024-12-13T16:11:08.997225384Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5030 runtime=io.containerd.runc.v2\n" Dec 13 16:11:08.997476 env[1669]: time="2024-12-13T16:11:08.997453678Z" level=info msg="TearDown network for sandbox \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" successfully" Dec 13 16:11:08.997476 env[1669]: time="2024-12-13T16:11:08.997473101Z" level=info msg="StopPodSandbox for \"0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d\" returns successfully" Dec 13 16:11:09.024816 kubelet[2781]: I1213 16:11:09.024741 2781 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-xtables-lock\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.024816 kubelet[2781]: I1213 16:11:09.024798 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-lib-modules\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.025657 kubelet[2781]: I1213 16:11:09.024839 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-cgroup\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.025657 kubelet[2781]: I1213 16:11:09.024836 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.025657 kubelet[2781]: I1213 16:11:09.024879 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-etc-cni-netd\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.025657 kubelet[2781]: I1213 16:11:09.024916 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-host-proc-sys-net\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.025657 kubelet[2781]: I1213 16:11:09.024908 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.026241 kubelet[2781]: I1213 16:11:09.024941 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.026241 kubelet[2781]: I1213 16:11:09.024962 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31f965b8-1d39-4ca5-8a9f-b928327d0911-hubble-tls\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.026241 kubelet[2781]: I1213 16:11:09.024952 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.026241 kubelet[2781]: I1213 16:11:09.024991 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.026241 kubelet[2781]: I1213 16:11:09.025007 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31f965b8-1d39-4ca5-8a9f-b928327d0911-clustermesh-secrets\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.026618 kubelet[2781]: I1213 16:11:09.025096 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr9c7\" (UniqueName: \"kubernetes.io/projected/31f965b8-1d39-4ca5-8a9f-b928327d0911-kube-api-access-wr9c7\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.026618 kubelet[2781]: I1213 16:11:09.025140 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-host-proc-sys-kernel\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.026618 kubelet[2781]: I1213 16:11:09.025194 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fd6j\" (UniqueName: \"kubernetes.io/projected/f24e3999-0804-4369-81e1-a6b3ab99ff4f-kube-api-access-2fd6j\") pod \"f24e3999-0804-4369-81e1-a6b3ab99ff4f\" (UID: \"f24e3999-0804-4369-81e1-a6b3ab99ff4f\") " Dec 13 16:11:09.026618 kubelet[2781]: I1213 16:11:09.025245 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-hostproc\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.026618 kubelet[2781]: I1213 16:11:09.025262 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.026618 kubelet[2781]: I1213 16:11:09.025306 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-run\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.027000 kubelet[2781]: I1213 16:11:09.025328 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-hostproc" (OuterVolumeSpecName: "hostproc") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.027000 kubelet[2781]: I1213 16:11:09.025400 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f24e3999-0804-4369-81e1-a6b3ab99ff4f-cilium-config-path\") pod \"f24e3999-0804-4369-81e1-a6b3ab99ff4f\" (UID: \"f24e3999-0804-4369-81e1-a6b3ab99ff4f\") " Dec 13 16:11:09.027000 kubelet[2781]: I1213 16:11:09.025427 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.027000 kubelet[2781]: I1213 16:11:09.025484 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.027000 kubelet[2781]: I1213 16:11:09.025457 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-bpf-maps\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.027333 kubelet[2781]: I1213 16:11:09.025588 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cni-path\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.027333 kubelet[2781]: I1213 16:11:09.025682 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-config-path\") pod \"31f965b8-1d39-4ca5-8a9f-b928327d0911\" (UID: \"31f965b8-1d39-4ca5-8a9f-b928327d0911\") " Dec 13 16:11:09.027333 kubelet[2781]: I1213 16:11:09.025709 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cni-path" (OuterVolumeSpecName: "cni-path") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:11:09.027333 kubelet[2781]: I1213 16:11:09.025788 2781 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-bpf-maps\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.027333 kubelet[2781]: I1213 16:11:09.025836 2781 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-xtables-lock\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.027333 kubelet[2781]: I1213 16:11:09.025874 2781 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-lib-modules\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.027333 kubelet[2781]: I1213 16:11:09.025920 2781 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-cgroup\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.028005 kubelet[2781]: I1213 16:11:09.025954 2781 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-etc-cni-netd\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.028005 kubelet[2781]: I1213 16:11:09.025992 2781 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-host-proc-sys-net\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.028005 kubelet[2781]: I1213 16:11:09.026033 2781 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.028005 kubelet[2781]: I1213 16:11:09.026068 2781 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-hostproc\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.028005 kubelet[2781]: I1213 16:11:09.026107 2781 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-run\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.029852 kubelet[2781]: I1213 16:11:09.029718 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f24e3999-0804-4369-81e1-a6b3ab99ff4f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f24e3999-0804-4369-81e1-a6b3ab99ff4f" (UID: "f24e3999-0804-4369-81e1-a6b3ab99ff4f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 16:11:09.030284 kubelet[2781]: I1213 16:11:09.030193 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f965b8-1d39-4ca5-8a9f-b928327d0911-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:11:09.030499 kubelet[2781]: I1213 16:11:09.030301 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f965b8-1d39-4ca5-8a9f-b928327d0911-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 16:11:09.030499 kubelet[2781]: I1213 16:11:09.030433 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f24e3999-0804-4369-81e1-a6b3ab99ff4f-kube-api-access-2fd6j" (OuterVolumeSpecName: "kube-api-access-2fd6j") pod "f24e3999-0804-4369-81e1-a6b3ab99ff4f" (UID: "f24e3999-0804-4369-81e1-a6b3ab99ff4f"). InnerVolumeSpecName "kube-api-access-2fd6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:11:09.032137 kubelet[2781]: I1213 16:11:09.032047 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f965b8-1d39-4ca5-8a9f-b928327d0911-kube-api-access-wr9c7" (OuterVolumeSpecName: "kube-api-access-wr9c7") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "kube-api-access-wr9c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:11:09.032609 kubelet[2781]: I1213 16:11:09.032520 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31f965b8-1d39-4ca5-8a9f-b928327d0911" (UID: "31f965b8-1d39-4ca5-8a9f-b928327d0911"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 16:11:09.126868 kubelet[2781]: I1213 16:11:09.126754 2781 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f24e3999-0804-4369-81e1-a6b3ab99ff4f-cilium-config-path\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.126868 kubelet[2781]: I1213 16:11:09.126860 2781 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2fd6j\" (UniqueName: \"kubernetes.io/projected/f24e3999-0804-4369-81e1-a6b3ab99ff4f-kube-api-access-2fd6j\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.127311 kubelet[2781]: I1213 16:11:09.126901 2781 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31f965b8-1d39-4ca5-8a9f-b928327d0911-cni-path\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.127311 kubelet[2781]: I1213 16:11:09.126938 2781 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f965b8-1d39-4ca5-8a9f-b928327d0911-cilium-config-path\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.127311 kubelet[2781]: I1213 16:11:09.126971 2781 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31f965b8-1d39-4ca5-8a9f-b928327d0911-hubble-tls\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.127311 kubelet[2781]: I1213 16:11:09.127005 2781 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31f965b8-1d39-4ca5-8a9f-b928327d0911-clustermesh-secrets\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.127311 kubelet[2781]: I1213 16:11:09.127043 2781 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wr9c7\" (UniqueName: \"kubernetes.io/projected/31f965b8-1d39-4ca5-8a9f-b928327d0911-kube-api-access-wr9c7\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\"" Dec 13 16:11:09.496324 kubelet[2781]: E1213 16:11:09.496252 2781 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 16:11:09.643913 kubelet[2781]: I1213 16:11:09.643823 2781 scope.go:117] "RemoveContainer" containerID="f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00" Dec 13 16:11:09.646570 env[1669]: time="2024-12-13T16:11:09.646489964Z" level=info msg="RemoveContainer for \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\"" Dec 13 16:11:09.652570 env[1669]: time="2024-12-13T16:11:09.652476234Z" level=info msg="RemoveContainer for \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\" returns successfully" Dec 13 16:11:09.653004 kubelet[2781]: I1213 16:11:09.652954 2781 scope.go:117] "RemoveContainer" containerID="7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df" Dec 13 16:11:09.655540 env[1669]: time="2024-12-13T16:11:09.655469520Z" level=info msg="RemoveContainer for \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\"" Dec 13 16:11:09.659859 env[1669]: time="2024-12-13T16:11:09.659766493Z" level=info msg="RemoveContainer for \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\" returns successfully" Dec 13 16:11:09.660224 kubelet[2781]: I1213 16:11:09.660158 2781 scope.go:117] "RemoveContainer" 
containerID="36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326" Dec 13 16:11:09.662870 env[1669]: time="2024-12-13T16:11:09.662784824Z" level=info msg="RemoveContainer for \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\"" Dec 13 16:11:09.667898 env[1669]: time="2024-12-13T16:11:09.667818950Z" level=info msg="RemoveContainer for \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\" returns successfully" Dec 13 16:11:09.668270 kubelet[2781]: I1213 16:11:09.668205 2781 scope.go:117] "RemoveContainer" containerID="3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950" Dec 13 16:11:09.670913 env[1669]: time="2024-12-13T16:11:09.670803491Z" level=info msg="RemoveContainer for \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\"" Dec 13 16:11:09.686471 env[1669]: time="2024-12-13T16:11:09.686400898Z" level=info msg="RemoveContainer for \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\" returns successfully" Dec 13 16:11:09.686862 kubelet[2781]: I1213 16:11:09.686816 2781 scope.go:117] "RemoveContainer" containerID="156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c" Dec 13 16:11:09.689389 env[1669]: time="2024-12-13T16:11:09.689259297Z" level=info msg="RemoveContainer for \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\"" Dec 13 16:11:09.693608 env[1669]: time="2024-12-13T16:11:09.693497494Z" level=info msg="RemoveContainer for \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\" returns successfully" Dec 13 16:11:09.693988 kubelet[2781]: I1213 16:11:09.693881 2781 scope.go:117] "RemoveContainer" containerID="f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00" Dec 13 16:11:09.694608 env[1669]: time="2024-12-13T16:11:09.694347333Z" level=error msg="ContainerStatus for \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\": not found" Dec 13 16:11:09.694991 kubelet[2781]: E1213 16:11:09.694903 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\": not found" containerID="f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00" Dec 13 16:11:09.695223 kubelet[2781]: I1213 16:11:09.695118 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00"} err="failed to get container status \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\": rpc error: code = NotFound desc = an error occurred when try to find container \"f235e220af5cdb8bfb45e79e81cb6c0682fe36eecdd0d60aef5c955002021e00\": not found" Dec 13 16:11:09.695223 kubelet[2781]: I1213 16:11:09.695157 2781 scope.go:117] "RemoveContainer" containerID="7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df" Dec 13 16:11:09.695788 env[1669]: time="2024-12-13T16:11:09.695602824Z" level=error msg="ContainerStatus for \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\": not found" Dec 13 16:11:09.696088 kubelet[2781]: E1213 
16:11:09.696022 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\": not found" containerID="7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df" Dec 13 16:11:09.696256 kubelet[2781]: I1213 16:11:09.696108 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df"} err="failed to get container status \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ac0bcb9ab8ced94a50b72446237ad1b588a5d467829cf74584f0968fad503df\": not found" Dec 13 16:11:09.696256 kubelet[2781]: I1213 16:11:09.696142 2781 scope.go:117] "RemoveContainer" containerID="36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326" Dec 13 16:11:09.696724 env[1669]: time="2024-12-13T16:11:09.696554002Z" level=error msg="ContainerStatus for \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\": not found" Dec 13 16:11:09.697028 kubelet[2781]: E1213 16:11:09.696966 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\": not found" containerID="36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326" Dec 13 16:11:09.697173 kubelet[2781]: I1213 16:11:09.697045 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326"} err="failed to get container status \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\": rpc error: code = NotFound desc = an error occurred when try to find container \"36fb0ff88c1dad23ac461af9032e73d62f7953e80a39f05a48635c35cec93326\": not found" Dec 13 16:11:09.697173 kubelet[2781]: I1213 16:11:09.697077 2781 scope.go:117] "RemoveContainer" containerID="3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950" Dec 13 16:11:09.697717 env[1669]: time="2024-12-13T16:11:09.697531512Z" level=error msg="ContainerStatus for \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\": not found" Dec 13 16:11:09.697933 kubelet[2781]: E1213 16:11:09.697901 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\": not found" containerID="3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950" Dec 13 16:11:09.698073 kubelet[2781]: I1213 16:11:09.697974 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950"} err="failed to get container status \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"3df5199628344118b49df70f77d8c0ede1136d743f7f50ea6b7eeaf5f1b11950\": not found" Dec 13 16:11:09.698073 kubelet[2781]: I1213 16:11:09.698006 2781 scope.go:117] "RemoveContainer" containerID="156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c" Dec 13 16:11:09.698669 env[1669]: time="2024-12-13T16:11:09.698503312Z" level=error msg="ContainerStatus for \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\": not found" Dec 13 16:11:09.699000 kubelet[2781]: E1213 16:11:09.698935 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\": not found" containerID="156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c" Dec 13 16:11:09.699161 kubelet[2781]: I1213 16:11:09.699020 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c"} err="failed to get container status \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"156ce479679d635df0aa6781d4761c1cae589a648494751fa5f2376c09fc3a4c\": not found" Dec 13 16:11:09.699161 kubelet[2781]: I1213 16:11:09.699053 2781 scope.go:117] "RemoveContainer" containerID="589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c" Dec 13 16:11:09.701726 env[1669]: time="2024-12-13T16:11:09.701599918Z" level=info msg="RemoveContainer for \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\"" Dec 13 16:11:09.706301 env[1669]: time="2024-12-13T16:11:09.706197092Z" level=info msg="RemoveContainer for \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\" returns successfully" Dec 13 16:11:09.706588 kubelet[2781]: I1213 16:11:09.706531 2781 scope.go:117] "RemoveContainer" containerID="589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c" Dec 13 16:11:09.707146 env[1669]: time="2024-12-13T16:11:09.706977451Z" level=error msg="ContainerStatus for \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\": not found" Dec 13 16:11:09.707412 kubelet[2781]: E1213 16:11:09.707344 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\": not found" containerID="589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c" Dec 13 16:11:09.707566 kubelet[2781]: I1213 16:11:09.707457 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c"} err="failed to get container status \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\": rpc error: code = NotFound desc = an error occurred when try to find container \"589d9957b1fbe9b173ef009c0147eddd0e157a65525fa80d2477c178e282786c\": not found" Dec 13 16:11:09.900685 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d-rootfs.mount: Deactivated successfully. Dec 13 16:11:09.901055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0469a29528ea6142bf636e14da8eae73ec2addbca56539702ae27f7fef0ee04d-shm.mount: Deactivated successfully. Dec 13 16:11:09.901342 systemd[1]: var-lib-kubelet-pods-31f965b8\x2d1d39\x2d4ca5\x2d8a9f\x2db928327d0911-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwr9c7.mount: Deactivated successfully. Dec 13 16:11:09.901668 systemd[1]: var-lib-kubelet-pods-f24e3999\x2d0804\x2d4369\x2d81e1\x2da6b3ab99ff4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2fd6j.mount: Deactivated successfully. Dec 13 16:11:09.901955 systemd[1]: var-lib-kubelet-pods-31f965b8\x2d1d39\x2d4ca5\x2d8a9f\x2db928327d0911-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 16:11:09.902230 systemd[1]: var-lib-kubelet-pods-31f965b8\x2d1d39\x2d4ca5\x2d8a9f\x2db928327d0911-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 16:11:10.857925 sshd[4868]: pam_unix(sshd:session): session closed for user core Dec 13 16:11:10.859879 systemd[1]: Started sshd@31-145.40.90.237:22-139.178.89.65:43700.service. Dec 13 16:11:10.860270 systemd[1]: sshd@30-145.40.90.237:22-139.178.89.65:58818.service: Deactivated successfully. Dec 13 16:11:10.861133 systemd-logind[1659]: Session 26 logged out. Waiting for processes to exit. Dec 13 16:11:10.861186 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 16:11:10.861940 systemd-logind[1659]: Removed session 26. Dec 13 16:11:10.897966 sshd[5047]: Accepted publickey for core from 139.178.89.65 port 43700 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:11:10.899010 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:11:10.902367 systemd-logind[1659]: New session 27 of user core. Dec 13 16:11:10.903141 systemd[1]: Started session-27.scope. Dec 13 16:11:11.276579 sshd[5047]: pam_unix(sshd:session): session closed for user core Dec 13 16:11:11.286149 systemd[1]: Started sshd@32-145.40.90.237:22-139.178.89.65:43702.service. Dec 13 16:11:11.288841 systemd[1]: sshd@31-145.40.90.237:22-139.178.89.65:43700.service: Deactivated successfully. Dec 13 16:11:11.292020 systemd-logind[1659]: Session 27 logged out. Waiting for processes to exit. Dec 13 16:11:11.292280 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 16:11:11.294740 systemd-logind[1659]: Removed session 27. 
Dec 13 16:11:11.295063 kubelet[2781]: I1213 16:11:11.294908 2781 topology_manager.go:215] "Topology Admit Handler" podUID="524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" podNamespace="kube-system" podName="cilium-vlb46"
Dec 13 16:11:11.295063 kubelet[2781]: E1213 16:11:11.295037 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31f965b8-1d39-4ca5-8a9f-b928327d0911" containerName="mount-cgroup"
Dec 13 16:11:11.295690 kubelet[2781]: E1213 16:11:11.295077 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31f965b8-1d39-4ca5-8a9f-b928327d0911" containerName="apply-sysctl-overwrites"
Dec 13 16:11:11.295690 kubelet[2781]: E1213 16:11:11.295095 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f24e3999-0804-4369-81e1-a6b3ab99ff4f" containerName="cilium-operator"
Dec 13 16:11:11.295690 kubelet[2781]: E1213 16:11:11.295109 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31f965b8-1d39-4ca5-8a9f-b928327d0911" containerName="mount-bpf-fs"
Dec 13 16:11:11.295690 kubelet[2781]: E1213 16:11:11.295122 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31f965b8-1d39-4ca5-8a9f-b928327d0911" containerName="clean-cilium-state"
Dec 13 16:11:11.295690 kubelet[2781]: E1213 16:11:11.295141 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31f965b8-1d39-4ca5-8a9f-b928327d0911" containerName="cilium-agent"
Dec 13 16:11:11.295690 kubelet[2781]: I1213 16:11:11.295186 2781 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24e3999-0804-4369-81e1-a6b3ab99ff4f" containerName="cilium-operator"
Dec 13 16:11:11.295690 kubelet[2781]: I1213 16:11:11.295203 2781 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f965b8-1d39-4ca5-8a9f-b928327d0911" containerName="cilium-agent"
Dec 13 16:11:11.343321 kubelet[2781]: I1213 16:11:11.343263 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-hubble-tls\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343321 kubelet[2781]: I1213 16:11:11.343305 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-hostproc\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343321 kubelet[2781]: I1213 16:11:11.343324 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-cgroup\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343459 sshd[5071]: Accepted publickey for core from 139.178.89.65 port 43702 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ
Dec 13 16:11:11.343617 kubelet[2781]: I1213 16:11:11.343341 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-clustermesh-secrets\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343617 kubelet[2781]: I1213 16:11:11.343389 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-host-proc-sys-kernel\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343617 kubelet[2781]: I1213 16:11:11.343419 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86djh\" (UniqueName: \"kubernetes.io/projected/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-kube-api-access-86djh\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343617 kubelet[2781]: I1213 16:11:11.343464 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-bpf-maps\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343617 kubelet[2781]: I1213 16:11:11.343501 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-lib-modules\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343617 kubelet[2781]: I1213 16:11:11.343528 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-xtables-lock\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343745 kubelet[2781]: I1213 16:11:11.343550 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-ipsec-secrets\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343745 kubelet[2781]: I1213 16:11:11.343562 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-etc-cni-netd\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343745 kubelet[2781]: I1213 16:11:11.343577 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cni-path\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343745 kubelet[2781]: I1213 16:11:11.343605 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-run\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343745 kubelet[2781]: I1213 16:11:11.343621 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-host-proc-sys-net\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.343745 kubelet[2781]: I1213 16:11:11.343648 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-config-path\") pod \"cilium-vlb46\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") " pod="kube-system/cilium-vlb46"
Dec 13 16:11:11.344220 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:11:11.346878 systemd-logind[1659]: New session 28 of user core.
Dec 13 16:11:11.347366 systemd[1]: Started session-28.scope.
Dec 13 16:11:11.350767 kubelet[2781]: I1213 16:11:11.350724 2781 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="31f965b8-1d39-4ca5-8a9f-b928327d0911" path="/var/lib/kubelet/pods/31f965b8-1d39-4ca5-8a9f-b928327d0911/volumes"
Dec 13 16:11:11.351224 kubelet[2781]: I1213 16:11:11.351186 2781 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f24e3999-0804-4369-81e1-a6b3ab99ff4f" path="/var/lib/kubelet/pods/f24e3999-0804-4369-81e1-a6b3ab99ff4f/volumes"
Dec 13 16:11:11.480530 sshd[5071]: pam_unix(sshd:session): session closed for user core
Dec 13 16:11:11.482406 systemd[1]: Started sshd@33-145.40.90.237:22-139.178.89.65:43704.service.
Dec 13 16:11:11.482764 systemd[1]: sshd@32-145.40.90.237:22-139.178.89.65:43702.service: Deactivated successfully.
Dec 13 16:11:11.483264 systemd-logind[1659]: Session 28 logged out. Waiting for processes to exit.
Dec 13 16:11:11.483310 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 16:11:11.483815 systemd-logind[1659]: Removed session 28.
Dec 13 16:11:11.489475 env[1669]: time="2024-12-13T16:11:11.489437871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vlb46,Uid:524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84,Namespace:kube-system,Attempt:0,}"
Dec 13 16:11:11.494751 env[1669]: time="2024-12-13T16:11:11.494691750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 16:11:11.494751 env[1669]: time="2024-12-13T16:11:11.494713488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 16:11:11.494751 env[1669]: time="2024-12-13T16:11:11.494720566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 16:11:11.494864 env[1669]: time="2024-12-13T16:11:11.494790445Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc pid=5111 runtime=io.containerd.runc.v2
Dec 13 16:11:11.511894 env[1669]: time="2024-12-13T16:11:11.511871358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vlb46,Uid:524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84,Namespace:kube-system,Attempt:0,} returns sandbox id \"ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc\""
Dec 13 16:11:11.513016 env[1669]: time="2024-12-13T16:11:11.513003122Z" level=info msg="CreateContainer within sandbox \"ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 16:11:11.517219 env[1669]: time="2024-12-13T16:11:11.517176934Z" level=info msg="CreateContainer within sandbox \"ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c\""
Dec 13 16:11:11.517431 env[1669]: time="2024-12-13T16:11:11.517398747Z" level=info msg="StartContainer for \"18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c\""
Dec 13 16:11:11.518185 sshd[5101]: Accepted publickey for core from 139.178.89.65 port 43704 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ
Dec 13 16:11:11.519013 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:11:11.521236 systemd-logind[1659]: New session 29 of user core.
Dec 13 16:11:11.521794 systemd[1]: Started session-29.scope.
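The RunPodSandbox / CreateContainer / StartContainer entries above are containerd's CRI plugin servicing the kubelet. A hedged sketch of the same three calls made directly against the CRI socket with the k8s.io/cri-api Go client; the pod metadata is taken from the log, but the trimmed-down configs and the placeholder image are illustrative assumptions, not what kubelet actually sends:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Talk to containerd's CRI plugin over its usual socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-vlb46",
			Namespace: "kube-system",
			Uid:       "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84",
			Attempt:   0,
		},
	}

	// RunPodSandbox -> the "returns sandbox id" entry above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer + StartContainer -> the mount-cgroup entries above.
	// The image reference is a placeholder; the log does not name it.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:tag"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", created.ContainerId)
}
```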
Dec 13 16:11:11.537277 env[1669]: time="2024-12-13T16:11:11.537224327Z" level=info msg="StartContainer for \"18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c\" returns successfully"
Dec 13 16:11:11.554044 env[1669]: time="2024-12-13T16:11:11.553988232Z" level=info msg="shim disconnected" id=18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c
Dec 13 16:11:11.554044 env[1669]: time="2024-12-13T16:11:11.554021938Z" level=warning msg="cleaning up after shim disconnected" id=18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c namespace=k8s.io
Dec 13 16:11:11.554044 env[1669]: time="2024-12-13T16:11:11.554030129Z" level=info msg="cleaning up dead shim"
Dec 13 16:11:11.557615 env[1669]: time="2024-12-13T16:11:11.557567957Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5193 runtime=io.containerd.runc.v2\n"
Dec 13 16:11:11.660573 env[1669]: time="2024-12-13T16:11:11.660438501Z" level=info msg="StopPodSandbox for \"ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc\""
Dec 13 16:11:11.660942 env[1669]: time="2024-12-13T16:11:11.660591456Z" level=info msg="Container to stop \"18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 16:11:11.708200 env[1669]: time="2024-12-13T16:11:11.708075076Z" level=info msg="shim disconnected" id=ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc
Dec 13 16:11:11.708657 env[1669]: time="2024-12-13T16:11:11.708204995Z" level=warning msg="cleaning up after shim disconnected" id=ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc namespace=k8s.io
Dec 13 16:11:11.708657 env[1669]: time="2024-12-13T16:11:11.708249980Z" level=info msg="cleaning up dead shim"
Dec 13 16:11:11.725675 env[1669]: time="2024-12-13T16:11:11.725577725Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5245 runtime=io.containerd.runc.v2\n"
Dec 13 16:11:11.726167 env[1669]: time="2024-12-13T16:11:11.726080401Z" level=info msg="TearDown network for sandbox \"ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc\" successfully"
Dec 13 16:11:11.726167 env[1669]: time="2024-12-13T16:11:11.726125878Z" level=info msg="StopPodSandbox for \"ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc\" returns successfully"
Dec 13 16:11:11.848435 kubelet[2781]: I1213 16:11:11.848173 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-ipsec-secrets\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.848435 kubelet[2781]: I1213 16:11:11.848280 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-xtables-lock\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.848435 kubelet[2781]: I1213 16:11:11.848346 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-hostproc\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.848435 kubelet[2781]: I1213 16:11:11.848419 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-cgroup\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.849238 kubelet[2781]: I1213 16:11:11.848475 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cni-path\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.849238 kubelet[2781]: I1213 16:11:11.848461 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-hostproc" (OuterVolumeSpecName: "hostproc") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.849238 kubelet[2781]: I1213 16:11:11.848533 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-run\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.849238 kubelet[2781]: I1213 16:11:11.848548 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.849238 kubelet[2781]: I1213 16:11:11.848567 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.849929 kubelet[2781]: I1213 16:11:11.848577 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.849929 kubelet[2781]: I1213 16:11:11.848608 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cni-path" (OuterVolumeSpecName: "cni-path") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.849929 kubelet[2781]: I1213 16:11:11.848601 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-config-path\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.849929 kubelet[2781]: I1213 16:11:11.848769 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-hubble-tls\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.849929 kubelet[2781]: I1213 16:11:11.848833 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-bpf-maps\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.849929 kubelet[2781]: I1213 16:11:11.848903 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-clustermesh-secrets\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.850700 kubelet[2781]: I1213 16:11:11.848961 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-etc-cni-netd\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.850700 kubelet[2781]: I1213 16:11:11.848951 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.850700 kubelet[2781]: I1213 16:11:11.849017 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-lib-modules\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.850700 kubelet[2781]: I1213 16:11:11.849098 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-host-proc-sys-net\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.850700 kubelet[2781]: I1213 16:11:11.849089 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.851217 kubelet[2781]: I1213 16:11:11.849138 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.851217 kubelet[2781]: I1213 16:11:11.849174 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-host-proc-sys-kernel\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.851217 kubelet[2781]: I1213 16:11:11.849223 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.851217 kubelet[2781]: I1213 16:11:11.849222 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:11:11.851217 kubelet[2781]: I1213 16:11:11.849327 2781 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86djh\" (UniqueName: \"kubernetes.io/projected/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-kube-api-access-86djh\") pod \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\" (UID: \"524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84\") "
Dec 13 16:11:11.851757 kubelet[2781]: I1213 16:11:11.849517 2781 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-lib-modules\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.851757 kubelet[2781]: I1213 16:11:11.849590 2781 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-host-proc-sys-net\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.851757 kubelet[2781]: I1213 16:11:11.849651 2781 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.851757 kubelet[2781]: I1213 16:11:11.849711 2781 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-xtables-lock\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.851757 kubelet[2781]: I1213 16:11:11.849776 2781 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-hostproc\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.851757 kubelet[2781]: I1213 16:11:11.849833 2781 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-cgroup\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.851757 kubelet[2781]: I1213 16:11:11.849889 2781 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cni-path\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.851757 kubelet[2781]: I1213 16:11:11.849946 2781 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-run\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.852596 kubelet[2781]: I1213 16:11:11.850003 2781 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-bpf-maps\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.852596 kubelet[2781]: I1213 16:11:11.850058 2781 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-etc-cni-netd\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.855122 kubelet[2781]: I1213 16:11:11.855027 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 16:11:11.855407 kubelet[2781]: I1213 16:11:11.855287 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 16:11:11.855707 kubelet[2781]: I1213 16:11:11.855591 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 16:11:11.855707 kubelet[2781]: I1213 16:11:11.855654 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 16:11:11.856177 kubelet[2781]: I1213 16:11:11.856083 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-kube-api-access-86djh" (OuterVolumeSpecName: "kube-api-access-86djh") pod "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" (UID: "524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84"). InnerVolumeSpecName "kube-api-access-86djh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 16:11:11.950852 kubelet[2781]: I1213 16:11:11.950743 2781 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-config-path\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.950852 kubelet[2781]: I1213 16:11:11.950823 2781 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-hubble-tls\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.950852 kubelet[2781]: I1213 16:11:11.950866 2781 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-clustermesh-secrets\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.951431 kubelet[2781]: I1213 16:11:11.950904 2781 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-86djh\" (UniqueName: \"kubernetes.io/projected/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-kube-api-access-86djh\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:11.951431 kubelet[2781]: I1213 16:11:11.950938 2781 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-245bdeb2fc\" DevicePath \"\""
Dec 13 16:11:12.452828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc-rootfs.mount: Deactivated successfully.
Dec 13 16:11:12.452906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ece46e3470c8bbe21a9f17e2654d6b18afb538af1c1cc961b476e18f97c1e9bc-shm.mount: Deactivated successfully.
Dec 13 16:11:12.452959 systemd[1]: var-lib-kubelet-pods-524c6d8d\x2d6c7e\x2d4c9f\x2dac8d\x2d610d5ffb5b84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86djh.mount: Deactivated successfully.
Dec 13 16:11:12.453010 systemd[1]: var-lib-kubelet-pods-524c6d8d\x2d6c7e\x2d4c9f\x2dac8d\x2d610d5ffb5b84-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 16:11:12.453057 systemd[1]: var-lib-kubelet-pods-524c6d8d\x2d6c7e\x2d4c9f\x2dac8d\x2d610d5ffb5b84-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 16:11:12.453106 systemd[1]: var-lib-kubelet-pods-524c6d8d\x2d6c7e\x2d4c9f\x2dac8d\x2d610d5ffb5b84-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 16:11:12.666036 kubelet[2781]: I1213 16:11:12.665938 2781 scope.go:117] "RemoveContainer" containerID="18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c"
Dec 13 16:11:12.668729 env[1669]: time="2024-12-13T16:11:12.668657191Z" level=info msg="RemoveContainer for \"18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c\""
Dec 13 16:11:12.685801 env[1669]: time="2024-12-13T16:11:12.685690195Z" level=info msg="RemoveContainer for \"18fb5850f5a337259d3a0491427337281cf634ec91e4bf66a0fb7f00511fa56c\" returns successfully"
Dec 13 16:11:12.721958 kubelet[2781]: I1213 16:11:12.721909 2781 topology_manager.go:215] "Topology Admit Handler" podUID="9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a" podNamespace="kube-system" podName="cilium-mkklf"
Dec 13 16:11:12.722199 kubelet[2781]: E1213 16:11:12.722008 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" containerName="mount-cgroup"
Dec 13 16:11:12.722199 kubelet[2781]: I1213 16:11:12.722078 2781 memory_manager.go:354] "RemoveStaleState removing state" podUID="524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" containerName="mount-cgroup"
Dec 13 16:11:12.756460 kubelet[2781]: I1213 16:11:12.756416 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-cilium-run\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.756705 kubelet[2781]: I1213 16:11:12.756495 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-cilium-config-path\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.756705 kubelet[2781]: I1213 16:11:12.756680 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-etc-cni-netd\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.756867 kubelet[2781]: I1213 16:11:12.756822 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-clustermesh-secrets\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.756959 kubelet[2781]: I1213 16:11:12.756894 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-cilium-ipsec-secrets\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.756959 kubelet[2781]: I1213 16:11:12.756945 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-host-proc-sys-net\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757105 kubelet[2781]: I1213 16:11:12.757006 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flxj\" (UniqueName: \"kubernetes.io/projected/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-kube-api-access-8flxj\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757105 kubelet[2781]: I1213 16:11:12.757093 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-cni-path\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757255 kubelet[2781]: I1213 16:11:12.757140 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-cilium-cgroup\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757339 kubelet[2781]: I1213 16:11:12.757252 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-hostproc\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757339 kubelet[2781]: I1213 16:11:12.757327 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-lib-modules\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757513 kubelet[2781]: I1213 16:11:12.757412 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-xtables-lock\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757513 kubelet[2781]: I1213 16:11:12.757466 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-host-proc-sys-kernel\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757648 kubelet[2781]: I1213 16:11:12.757514 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-bpf-maps\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:12.757648 kubelet[2781]: I1213 16:11:12.757582 2781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a-hubble-tls\") pod \"cilium-mkklf\" (UID: \"9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a\") " pod="kube-system/cilium-mkklf"
Dec 13 16:11:13.029755 env[1669]: time="2024-12-13T16:11:13.029497758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mkklf,Uid:9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a,Namespace:kube-system,Attempt:0,}"
Dec 13 16:11:13.055240 env[1669]: time="2024-12-13T16:11:13.055075950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 16:11:13.055240 env[1669]: time="2024-12-13T16:11:13.055186129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 16:11:13.055633 env[1669]: time="2024-12-13T16:11:13.055241964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 16:11:13.055916 env[1669]: time="2024-12-13T16:11:13.055720112Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488 pid=5272 runtime=io.containerd.runc.v2
Dec 13 16:11:13.120328 env[1669]: time="2024-12-13T16:11:13.120253512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mkklf,Uid:9f38f9b5-1e43-4ab7-842c-1d0a73c2dd7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\""
Dec 13 16:11:13.124394 env[1669]: time="2024-12-13T16:11:13.124289672Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 16:11:13.133537 env[1669]: time="2024-12-13T16:11:13.133442826Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5ce3d52a4fa24fa11491cba887e90b818339f27170c7d8a75c583f950033b772\""
Dec 13 16:11:13.134096 env[1669]: time="2024-12-13T16:11:13.134037087Z" level=info msg="StartContainer for \"5ce3d52a4fa24fa11491cba887e90b818339f27170c7d8a75c583f950033b772\""
Dec 13 16:11:13.200811 env[1669]: time="2024-12-13T16:11:13.200736557Z" level=info msg="StartContainer for \"5ce3d52a4fa24fa11491cba887e90b818339f27170c7d8a75c583f950033b772\" returns successfully"
Dec 13 16:11:13.245245 env[1669]: time="2024-12-13T16:11:13.245136727Z" level=info msg="shim disconnected" id=5ce3d52a4fa24fa11491cba887e90b818339f27170c7d8a75c583f950033b772
Dec 13 16:11:13.245245 env[1669]: time="2024-12-13T16:11:13.245213698Z" level=warning msg="cleaning up after shim disconnected" id=5ce3d52a4fa24fa11491cba887e90b818339f27170c7d8a75c583f950033b772 namespace=k8s.io
Dec 13 16:11:13.245245 env[1669]: time="2024-12-13T16:11:13.245234669Z" level=info msg="cleaning up dead shim"
Dec 13 16:11:13.257095 env[1669]: time="2024-12-13T16:11:13.257000515Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5355 runtime=io.containerd.runc.v2\n"
Dec 13 16:11:13.350114 kubelet[2781]: E1213 16:11:13.349895 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-s6jtb" podUID="0d04c636-903a-4418-a375-169c9224e8cc"
Dec 13 16:11:13.355566 kubelet[2781]: I1213 16:11:13.355509 2781 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84" path="/var/lib/kubelet/pods/524c6d8d-6c7e-4c9f-ac8d-610d5ffb5b84/volumes"
Dec 13 16:11:13.677872 env[1669]: time="2024-12-13T16:11:13.677733804Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 16:11:13.687476 env[1669]: time="2024-12-13T16:11:13.687386559Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"714b8bb80964eb5595f6cf01192c4411bc645b9b8612d9d7bfebe004c92481be\""
Dec 13 16:11:13.688090 env[1669]: time="2024-12-13T16:11:13.688033675Z" level=info msg="StartContainer for \"714b8bb80964eb5595f6cf01192c4411bc645b9b8612d9d7bfebe004c92481be\""
Dec 13 16:11:13.691236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884142606.mount: Deactivated successfully.
Dec 13 16:11:13.710134 env[1669]: time="2024-12-13T16:11:13.710082236Z" level=info msg="StartContainer for \"714b8bb80964eb5595f6cf01192c4411bc645b9b8612d9d7bfebe004c92481be\" returns successfully"
Dec 13 16:11:13.722595 env[1669]: time="2024-12-13T16:11:13.722549262Z" level=info msg="shim disconnected" id=714b8bb80964eb5595f6cf01192c4411bc645b9b8612d9d7bfebe004c92481be
Dec 13 16:11:13.722595 env[1669]: time="2024-12-13T16:11:13.722596657Z" level=warning msg="cleaning up after shim disconnected" id=714b8bb80964eb5595f6cf01192c4411bc645b9b8612d9d7bfebe004c92481be namespace=k8s.io
Dec 13 16:11:13.722738 env[1669]: time="2024-12-13T16:11:13.722604248Z" level=info msg="cleaning up dead shim"
Dec 13 16:11:13.726596 env[1669]: time="2024-12-13T16:11:13.726542481Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5416 runtime=io.containerd.runc.v2\n"
Dec 13 16:11:14.452921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-714b8bb80964eb5595f6cf01192c4411bc645b9b8612d9d7bfebe004c92481be-rootfs.mount: Deactivated successfully.
Dec 13 16:11:14.497937 kubelet[2781]: E1213 16:11:14.497877 2781 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 16:11:14.681485 env[1669]: time="2024-12-13T16:11:14.681421847Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 16:11:14.686388 env[1669]: time="2024-12-13T16:11:14.686357809Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f6cb2b6063b0808241447689d3d7ee333a32fee1c1e202c5cf2875155739880\""
Dec 13 16:11:14.686656 env[1669]: time="2024-12-13T16:11:14.686613538Z" level=info msg="StartContainer for \"2f6cb2b6063b0808241447689d3d7ee333a32fee1c1e202c5cf2875155739880\""
Dec 13 16:11:14.711710 env[1669]: time="2024-12-13T16:11:14.711624378Z" level=info msg="StartContainer for \"2f6cb2b6063b0808241447689d3d7ee333a32fee1c1e202c5cf2875155739880\" returns successfully"
Dec 13 16:11:14.723581 env[1669]: time="2024-12-13T16:11:14.723552480Z" level=info msg="shim disconnected" id=2f6cb2b6063b0808241447689d3d7ee333a32fee1c1e202c5cf2875155739880
Dec 13 16:11:14.723581 env[1669]: time="2024-12-13T16:11:14.723580813Z" level=warning msg="cleaning up after shim disconnected" id=2f6cb2b6063b0808241447689d3d7ee333a32fee1c1e202c5cf2875155739880 namespace=k8s.io
Dec 13 16:11:14.723713 env[1669]: time="2024-12-13T16:11:14.723588084Z" level=info msg="cleaning up dead shim"
Dec 13 16:11:14.727806 env[1669]: time="2024-12-13T16:11:14.727758265Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5474 runtime=io.containerd.runc.v2\n"
Dec 13 16:11:15.350487 kubelet[2781]: E1213 16:11:15.350338 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-s6jtb" podUID="0d04c636-903a-4418-a375-169c9224e8cc"
Dec 13 16:11:15.453186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f6cb2b6063b0808241447689d3d7ee333a32fee1c1e202c5cf2875155739880-rootfs.mount: Deactivated successfully.
Dec 13 16:11:15.686009 env[1669]: time="2024-12-13T16:11:15.685933367Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 16:11:15.691729 env[1669]: time="2024-12-13T16:11:15.691663086Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76aebda1d62809f515af004515a1b61c8ed10b8db2b2219db6bc6cc317e49f86\""
Dec 13 16:11:15.692111 env[1669]: time="2024-12-13T16:11:15.692094425Z" level=info msg="StartContainer for \"76aebda1d62809f515af004515a1b61c8ed10b8db2b2219db6bc6cc317e49f86\""
Dec 13 16:11:15.694747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36184164.mount: Deactivated successfully.
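The mount-bpf-fs container that runs above is essentially `mount -t bpf bpffs /sys/fs/bpf`, so pinned BPF maps survive agent restarts. A hedged Go equivalent of that one step (EBUSY tolerated as "already mounted", making it idempotent):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of `mount -t bpf bpffs /sys/fs/bpf`. EBUSY means something
	// is already mounted on the target, which is fine for an init step that
	// may run again after the pod is recreated.
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if err != nil && err != unix.EBUSY {
		log.Fatalf("mounting bpffs: %v", err)
	}
}
```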
Dec 13 16:11:15.713918 env[1669]: time="2024-12-13T16:11:15.713894119Z" level=info msg="StartContainer for \"76aebda1d62809f515af004515a1b61c8ed10b8db2b2219db6bc6cc317e49f86\" returns successfully"
Dec 13 16:11:15.722897 env[1669]: time="2024-12-13T16:11:15.722868502Z" level=info msg="shim disconnected" id=76aebda1d62809f515af004515a1b61c8ed10b8db2b2219db6bc6cc317e49f86
Dec 13 16:11:15.722897 env[1669]: time="2024-12-13T16:11:15.722896100Z" level=warning msg="cleaning up after shim disconnected" id=76aebda1d62809f515af004515a1b61c8ed10b8db2b2219db6bc6cc317e49f86 namespace=k8s.io
Dec 13 16:11:15.723064 env[1669]: time="2024-12-13T16:11:15.722903968Z" level=info msg="cleaning up dead shim"
Dec 13 16:11:15.726442 env[1669]: time="2024-12-13T16:11:15.726424393Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:11:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5528 runtime=io.containerd.runc.v2\n"
Dec 13 16:11:16.349972 kubelet[2781]: E1213 16:11:16.349859 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-shvf4" podUID="6cdc1c5a-510a-4f56-9d49-45d25220636c"
Dec 13 16:11:16.453240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76aebda1d62809f515af004515a1b61c8ed10b8db2b2219db6bc6cc317e49f86-rootfs.mount: Deactivated successfully.
Dec 13 16:11:16.698184 env[1669]: time="2024-12-13T16:11:16.697939389Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 16:11:16.712703 env[1669]: time="2024-12-13T16:11:16.712581006Z" level=info msg="CreateContainer within sandbox \"4aaf280bd1010273446fe731206da85d27d7df55f23599bc4eeee55387731488\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dab2128217434133e71e8f2f03dbdc6db87d4f322b54e8a1d37398ddfb684dc4\""
Dec 13 16:11:16.713635 env[1669]: time="2024-12-13T16:11:16.713522175Z" level=info msg="StartContainer for \"dab2128217434133e71e8f2f03dbdc6db87d4f322b54e8a1d37398ddfb684dc4\""
Dec 13 16:11:16.722207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250528547.mount: Deactivated successfully.
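clean-cilium-state, which exits just above before cilium-agent is created, wipes leftover agent state so the new agent starts from a clean slate. A rough sketch under assumed paths; the real container decides what to remove based on its configuration, so treat the directories below as illustrative only:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Assumed state locations, for illustration only; the actual set and the
	// decision whether to touch pinned BPF maps is configuration-dependent.
	for _, dir := range []string{
		"/var/run/cilium/state",
	} {
		if err := os.RemoveAll(dir); err != nil {
			log.Printf("cleaning %s: %v", dir, err)
		}
	}
}
```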
Dec 13 16:11:16.742140 env[1669]: time="2024-12-13T16:11:16.742087932Z" level=info msg="StartContainer for \"dab2128217434133e71e8f2f03dbdc6db87d4f322b54e8a1d37398ddfb684dc4\" returns successfully"
Dec 13 16:11:16.896418 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 16:11:17.350339 kubelet[2781]: E1213 16:11:17.350258 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-s6jtb" podUID="0d04c636-903a-4418-a375-169c9224e8cc"
Dec 13 16:11:17.705343 kubelet[2781]: I1213 16:11:17.705284 2781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mkklf" podStartSLOduration=5.7052425289999995 podStartE2EDuration="5.705242529s" podCreationTimestamp="2024-12-13 16:11:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:11:17.705197947 +0000 UTC m=+438.417141775" watchObservedRunningTime="2024-12-13 16:11:17.705242529 +0000 UTC m=+438.417186352"
Dec 13 16:11:18.349367 kubelet[2781]: E1213 16:11:18.349320 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-shvf4" podUID="6cdc1c5a-510a-4f56-9d49-45d25220636c"
Dec 13 16:11:19.350503 kubelet[2781]: E1213 16:11:19.350449 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-s6jtb" podUID="0d04c636-903a-4418-a375-169c9224e8cc"
Dec 13 16:11:19.431674 kubelet[2781]: I1213 16:11:19.431611 2781 setters.go:568] "Node became not ready" node="ci-3510.3.6-a-245bdeb2fc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T16:11:19Z","lastTransitionTime":"2024-12-13T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 16:11:20.016698 systemd-networkd[1398]: lxc_health: Link UP
Dec 13 16:11:20.041243 systemd-networkd[1398]: lxc_health: Gained carrier
Dec 13 16:11:20.041457 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 16:11:21.849466 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Dec 13 16:11:26.018785 sshd[5101]: pam_unix(sshd:session): session closed for user core
Dec 13 16:11:26.019966 systemd[1]: sshd@33-145.40.90.237:22-139.178.89.65:43704.service: Deactivated successfully.
Dec 13 16:11:26.020521 systemd-logind[1659]: Session 29 logged out. Waiting for processes to exit.
Dec 13 16:11:26.020528 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 16:11:26.020960 systemd-logind[1659]: Removed session 29.
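The podStartSLOduration=5.705242529s figure in the latency-tracker entry above is just observedRunningTime minus podCreationTimestamp; with no image pulls, firstStartedPulling/lastFinishedPulling stay at the Go zero time. The arithmetic, using the timestamps from the entry itself:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches the timestamps printed by the kubelet entry above;
	// ".999999999" makes the fractional seconds optional when parsing.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2024-12-13 16:11:12 +0000 UTC")
	running, _ := time.Parse(layout, "2024-12-13 16:11:17.705242529 +0000 UTC")
	fmt.Println(running.Sub(created)) // 5.705242529s
}
```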