Feb 13 20:53:27.990008 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Feb 13 20:53:27.990023 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:53:27.990030 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:53:27.990035 kernel: BIOS-provided physical RAM map:
Feb 13 20:53:27.990039 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 20:53:27.990043 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 20:53:27.990048 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 20:53:27.990052 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 20:53:27.990056 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 20:53:27.990060 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2afff] usable
Feb 13 20:53:27.990064 kernel: BIOS-e820: [mem 0x0000000081b2b000-0x0000000081b2bfff] ACPI NVS
Feb 13 20:53:27.990069 kernel: BIOS-e820: [mem 0x0000000081b2c000-0x0000000081b2cfff] reserved
Feb 13 20:53:27.990074 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x000000008afccfff] usable
Feb 13 20:53:27.990078 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 20:53:27.990083 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 20:53:27.990088 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 20:53:27.990093 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 20:53:27.990098 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 20:53:27.990103 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 20:53:27.990107 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 20:53:27.990112 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 20:53:27.990117 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 20:53:27.990121 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 20:53:27.990126 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 20:53:27.990131 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 20:53:27.990135 kernel: NX (Execute Disable) protection: active
Feb 13 20:53:27.990140 kernel: APIC: Static calls initialized
Feb 13 20:53:27.990145 kernel: SMBIOS 3.2.1 present.
Feb 13 20:53:27.990150 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 20:53:27.990155 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 20:53:27.990160 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 20:53:27.990165 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:53:27.990170 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:53:27.990175 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 20:53:27.990180 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Feb 13 20:53:27.990184 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:53:27.990189 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 20:53:27.990195 kernel: Using GB pages for direct mapping
Feb 13 20:53:27.990200 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:53:27.990205 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 20:53:27.990212 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 20:53:27.990217 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 20:53:27.990222 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 20:53:27.990227 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 20:53:27.990233 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 20:53:27.990238 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 20:53:27.990243 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 20:53:27.990248 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 20:53:27.990253 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 20:53:27.990258 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 20:53:27.990263 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 20:53:27.990269 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 20:53:27.990275 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:53:27.990280 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 20:53:27.990285 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 20:53:27.990290 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:53:27.990295 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:53:27.990300 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 20:53:27.990305 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 20:53:27.990310 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:53:27.990316 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:53:27.990321 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 20:53:27.990326 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 20:53:27.990331 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 20:53:27.990336 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 20:53:27.990341 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 20:53:27.990346 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 20:53:27.990351 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 20:53:27.990357 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 20:53:27.990362 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 20:53:27.990367 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 20:53:27.990372 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 20:53:27.990377 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 20:53:27.990382 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 20:53:27.990387 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 20:53:27.990392 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 20:53:27.990397 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 20:53:27.990403 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 20:53:27.990408 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 20:53:27.990413 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 20:53:27.990418 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 20:53:27.990426 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 20:53:27.990431 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 20:53:27.990436 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 20:53:27.990464 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 20:53:27.990469 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 20:53:27.990490 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 20:53:27.990496 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 20:53:27.990501 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 20:53:27.990506 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 20:53:27.990511 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 20:53:27.990516 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 20:53:27.990521 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 20:53:27.990525 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 20:53:27.990530 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 20:53:27.990535 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 20:53:27.990541 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 20:53:27.990546 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 20:53:27.990551 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 20:53:27.990556 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 20:53:27.990561 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 20:53:27.990566 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 20:53:27.990571 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 20:53:27.990576 kernel: No NUMA configuration found
Feb 13 20:53:27.990581 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 20:53:27.990588 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 20:53:27.990593 kernel: Zone ranges:
Feb 13 20:53:27.990598 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:53:27.990603 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:53:27.990608 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 13 20:53:27.990613 kernel: Movable zone start for each node
Feb 13 20:53:27.990618 kernel: Early memory node ranges
Feb 13 20:53:27.990623 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 20:53:27.990628 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 20:53:27.990634 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2afff]
Feb 13 20:53:27.990639 kernel: node 0: [mem 0x0000000081b2d000-0x000000008afccfff]
Feb 13 20:53:27.990644 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 20:53:27.990649 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 20:53:27.990658 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 20:53:27.990664 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 20:53:27.990669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:53:27.990675 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 20:53:27.990682 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 20:53:27.990687 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 20:53:27.990692 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 20:53:27.990698 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 20:53:27.990703 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 20:53:27.990709 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 20:53:27.990714 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 20:53:27.990719 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 20:53:27.990725 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 20:53:27.990731 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 20:53:27.990737 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 20:53:27.990742 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 20:53:27.990747 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 20:53:27.990753 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 20:53:27.990758 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 20:53:27.990763 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 20:53:27.990768 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 20:53:27.990774 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 20:53:27.990780 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 20:53:27.990786 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 20:53:27.990791 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 20:53:27.990796 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 20:53:27.990802 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 20:53:27.990807 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 20:53:27.990812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:53:27.990818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:53:27.990823 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:53:27.990828 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:53:27.990835 kernel: TSC deadline timer available
Feb 13 20:53:27.990840 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 20:53:27.990846 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 20:53:27.990851 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 20:53:27.990857 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:53:27.990862 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 20:53:27.990868 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 20:53:27.990873 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 20:53:27.990878 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 20:53:27.990885 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:53:27.990891 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:53:27.990896 kernel: random: crng init done
Feb 13 20:53:27.990902 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 20:53:27.990907 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 20:53:27.990912 kernel: Fallback order for Node 0: 0
Feb 13 20:53:27.990918 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 20:53:27.990923 kernel: Policy zone: Normal
Feb 13 20:53:27.990930 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:53:27.990935 kernel: software IO TLB: area num 16.
Feb 13 20:53:27.990941 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 732408K reserved, 0K cma-reserved)
Feb 13 20:53:27.990946 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 20:53:27.990952 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:53:27.990957 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:53:27.990962 kernel: Dynamic Preempt: voluntary
Feb 13 20:53:27.990968 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:53:27.990974 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:53:27.990980 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 20:53:27.990986 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:53:27.990991 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:53:27.990996 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:53:27.991002 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:53:27.991007 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 20:53:27.991013 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 20:53:27.991018 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:53:27.991024 kernel: Console: colour dummy device 80x25
Feb 13 20:53:27.991030 kernel: printk: console [tty0] enabled
Feb 13 20:53:27.991035 kernel: printk: console [ttyS1] enabled
Feb 13 20:53:27.991041 kernel: ACPI: Core revision 20230628
Feb 13 20:53:27.991046 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 20:53:27.991052 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:53:27.991057 kernel: DMAR: Host address width 39
Feb 13 20:53:27.991063 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 20:53:27.991068 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 20:53:27.991073 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 20:53:27.991080 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 20:53:27.991085 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 20:53:27.991090 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 20:53:27.991096 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 20:53:27.991101 kernel: x2apic enabled
Feb 13 20:53:27.991107 kernel: APIC: Switched APIC routing to: cluster x2apic
Feb 13 20:53:27.991112 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 20:53:27.991118 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 20:53:27.991123 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 20:53:27.991129 kernel: process: using mwait in idle threads
Feb 13 20:53:27.991135 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 20:53:27.991140 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 20:53:27.991145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:53:27.991150 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 20:53:27.991156 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 20:53:27.991161 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Feb 13 20:53:27.991166 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:53:27.991172 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 20:53:27.991177 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 20:53:27.991182 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:53:27.991189 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:53:27.991194 kernel: TAA: Mitigation: TSX disabled
Feb 13 20:53:27.991200 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 20:53:27.991205 kernel: SRBDS: Mitigation: Microcode
Feb 13 20:53:27.991210 kernel: GDS: Mitigation: Microcode
Feb 13 20:53:27.991215 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:53:27.991221 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:53:27.991226 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:53:27.991231 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 20:53:27.991237 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 20:53:27.991242 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:53:27.991248 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 20:53:27.991253 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 20:53:27.991259 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 20:53:27.991264 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:53:27.991270 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:53:27.991275 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:53:27.991280 kernel: landlock: Up and running.
Feb 13 20:53:27.991286 kernel: SELinux: Initializing.
Feb 13 20:53:27.991291 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:53:27.991296 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:53:27.991302 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 20:53:27.991308 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:53:27.991314 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:53:27.991319 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:53:27.991325 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 20:53:27.991330 kernel: ... version: 4
Feb 13 20:53:27.991336 kernel: ... bit width: 48
Feb 13 20:53:27.991341 kernel: ... generic registers: 4
Feb 13 20:53:27.991346 kernel: ... value mask: 0000ffffffffffff
Feb 13 20:53:27.991352 kernel: ... max period: 00007fffffffffff
Feb 13 20:53:27.991358 kernel: ... fixed-purpose events: 3
Feb 13 20:53:27.991364 kernel: ... event mask: 000000070000000f
Feb 13 20:53:27.991369 kernel: signal: max sigframe size: 2032
Feb 13 20:53:27.991374 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 20:53:27.991380 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:53:27.991385 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:53:27.991391 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 20:53:27.991396 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:53:27.991401 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:53:27.991408 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Feb 13 20:53:27.991414 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:53:27.991419 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 20:53:27.991426 kernel: smpboot: Max logical packages: 1
Feb 13 20:53:27.991432 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 20:53:27.991437 kernel: devtmpfs: initialized
Feb 13 20:53:27.991462 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:53:27.991468 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2b000-0x81b2bfff] (4096 bytes)
Feb 13 20:53:27.991473 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 20:53:27.991494 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:53:27.991499 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 20:53:27.991505 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:53:27.991510 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:53:27.991515 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:53:27.991521 kernel: audit: type=2000 audit(1739480002.039:1): state=initialized audit_enabled=0 res=1
Feb 13 20:53:27.991526 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:53:27.991531 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:53:27.991536 kernel: cpuidle: using governor menu
Feb 13 20:53:27.991543 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:53:27.991548 kernel: dca service started, version 1.12.1
Feb 13 20:53:27.991554 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 20:53:27.991559 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:53:27.991564 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 20:53:27.991570 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:53:27.991575 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:53:27.991581 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:53:27.991586 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:53:27.991592 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:53:27.991598 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:53:27.991603 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:53:27.991609 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:53:27.991614 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:53:27.991619 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 20:53:27.991624 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:53:27.991630 kernel: ACPI: SSDT 0xFFFF978901606400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 20:53:27.991635 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:53:27.991642 kernel: ACPI: SSDT 0xFFFF9789015FC800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 20:53:27.991647 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:53:27.991653 kernel: ACPI: SSDT 0xFFFF9789015E5C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 20:53:27.991658 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:53:27.991663 kernel: ACPI: SSDT 0xFFFF9789015FA000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 20:53:27.991669 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:53:27.991674 kernel: ACPI: SSDT 0xFFFF97890160F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 20:53:27.991679 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:53:27.991684 kernel: ACPI: SSDT 0xFFFF978901600400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 20:53:27.991691 kernel: ACPI: _OSC evaluated successfully for all CPUs
Feb 13 20:53:27.991696 kernel: ACPI: Interpreter enabled
Feb 13 20:53:27.991702 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:53:27.991707 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:53:27.991712 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 20:53:27.991718 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 20:53:27.991723 kernel: HEST: Table parsing has been initialized.
Feb 13 20:53:27.991729 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 20:53:27.991734 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:53:27.991740 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:53:27.991746 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 20:53:27.991751 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Feb 13 20:53:27.991757 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Feb 13 20:53:27.991762 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Feb 13 20:53:27.991768 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Feb 13 20:53:27.991773 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Feb 13 20:53:27.991778 kernel: ACPI: \_TZ_.FN00: New power resource
Feb 13 20:53:27.991784 kernel: ACPI: \_TZ_.FN01: New power resource
Feb 13 20:53:27.991789 kernel: ACPI: \_TZ_.FN02: New power resource
Feb 13 20:53:27.991796 kernel: ACPI: \_TZ_.FN03: New power resource
Feb 13 20:53:27.991801 kernel: ACPI: \_TZ_.FN04: New power resource
Feb 13 20:53:27.991807 kernel: ACPI: \PIN_: New power resource
Feb 13 20:53:27.991812 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 20:53:27.991886 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:53:27.991938 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 20:53:27.991985 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 20:53:27.991994 kernel: PCI host bridge to bus 0000:00
Feb 13 20:53:27.992044 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:53:27.992087 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:53:27.992128 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:53:27.992170 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 20:53:27.992210 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 20:53:27.992252 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 20:53:27.992310 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 20:53:27.992367 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 20:53:27.992415 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 20:53:27.992509 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 20:53:27.992556 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 20:53:27.992607 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 20:53:27.992657 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 20:53:27.992708 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 20:53:27.992756 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 20:53:27.992802 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 20:53:27.992853 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 20:53:27.992900 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 20:53:27.992949 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 20:53:27.992999 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 20:53:27.993046 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 20:53:27.993099 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 20:53:27.993147 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 20:53:27.993197 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 20:53:27.993247 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 20:53:27.993295 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 20:53:27.993351 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 20:53:27.993400 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 20:53:27.993472 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 20:53:27.993539 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 20:53:27.993585 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 20:53:27.993634 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 20:53:27.993684 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 20:53:27.993732 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 20:53:27.993778 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 20:53:27.993824 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 20:53:27.993870 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 20:53:27.993917 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 20:53:27.993965 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 20:53:27.994012 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 20:53:27.994063 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 20:53:27.994114 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 20:53:27.994169 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 20:53:27.994217 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 20:53:27.994268 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 20:53:27.994315 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 20:53:27.994367 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 20:53:27.994414 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 20:53:27.994506 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 20:53:27.994553 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 20:53:27.994605 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 20:53:27.994651 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 20:53:27.994703 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 20:53:27.994753 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 20:53:27.994803 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 20:53:27.994849 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 20:53:27.994903 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 20:53:27.994949 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 20:53:27.995004 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 20:53:27.995053 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 20:53:27.995104 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 20:53:27.995152 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 20:53:27.995200 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 20:53:27.995248 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 20:53:27.995301 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 20:53:27.995351 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 20:53:27.995399 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 20:53:27.995476 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 20:53:27.995539 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 20:53:27.995587 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 20:53:27.995635 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 20:53:27.995681 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 20:53:27.995730 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 20:53:27.995777 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 13 20:53:27.995831 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Feb 13 20:53:27.995882 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Feb 13 20:53:27.995931 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Feb 13 20:53:27.995980 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 13 20:53:27.996027 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Feb 13 20:53:27.996076 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Feb 13 20:53:27.996124 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 13 20:53:27.996172 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 20:53:27.996220 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 13 20:53:27.996274 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Feb 13 20:53:27.996322 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 13 20:53:27.996370 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Feb 13 20:53:27.996418 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 13 20:53:27.996506 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Feb 13 20:53:27.996555 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 13 20:53:27.996605 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 13 20:53:27.996653 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 20:53:27.996699 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 13 20:53:27.996747 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 13 20:53:27.996799 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Feb 13 20:53:27.996848 kernel: pci 0000:06:00.0: enabling Extended Tags
Feb 13 20:53:27.996895 kernel: pci 0000:06:00.0: supports D1 D2
Feb 13 20:53:27.996944 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 20:53:27.996995 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 13 20:53:27.997041 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 13 20:53:27.997088 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 13 20:53:27.997142 kernel: pci_bus 0000:07: extended config space not accessible
Feb 13 20:53:27.997196 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Feb 13 20:53:27.997247 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Feb 13 20:53:27.997297 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Feb 13 20:53:27.997349 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Feb 13 20:53:27.997398 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:53:27.997472 kernel: pci 0000:07:00.0: supports D1 D2
Feb 13 20:53:27.997538 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 20:53:27.997589 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 13 20:53:27.997637 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 13 20:53:27.997686 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 13 20:53:27.997694 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Feb 13 20:53:27.997702 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Feb 13 20:53:27.997708 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Feb 13 20:53:27.997714 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Feb 13 20:53:27.997719 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Feb 13 20:53:27.997725 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Feb 13 20:53:27.997731 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Feb 13 20:53:27.997736 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Feb 13 20:53:27.997742 kernel: iommu: Default domain type: Translated
Feb 13 20:53:27.997748 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:53:27.997755 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:53:27.997760 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:53:27.997766 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Feb 13 20:53:27.997772 kernel: e820: reserve RAM buffer [mem 0x81b2b000-0x83ffffff]
Feb 13 20:53:27.997777 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Feb 13 20:53:27.997783 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Feb 13 20:53:27.997788 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Feb 13 20:53:27.997794 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Feb 13 20:53:27.997844 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Feb 13 20:53:27.997893 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Feb 13 20:53:27.997943 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:53:27.997952 kernel: vgaarb: loaded
Feb 13 20:53:27.997958 kernel: clocksource: Switched to clocksource tsc-early
Feb 13 20:53:27.997964 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:53:27.997969 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:53:27.997975 kernel: pnp: PnP ACPI init
Feb 13 20:53:27.998023 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Feb 13 20:53:27.998071 kernel: pnp 00:02: [dma 0 disabled]
Feb 13 20:53:27.998117 kernel: pnp 00:03: [dma 0 disabled]
Feb 13 20:53:27.998167 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Feb 13 20:53:27.998209 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Feb 13 20:53:27.998255 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Feb 13 20:53:27.998301 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Feb 13 20:53:27.998346 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Feb 13 20:53:27.998389 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Feb 13 20:53:27.998435 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Feb 13 20:53:27.998529 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Feb 13 20:53:27.998572 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Feb 13 20:53:27.998615 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Feb 13 20:53:27.998657 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Feb 13 20:53:27.998706 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Feb 13 20:53:27.998748 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Feb 13 20:53:27.998791 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Feb 13 20:53:27.998833 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Feb 13 20:53:27.998876 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Feb 13 20:53:27.998918 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Feb 13 20:53:27.998961 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Feb 13 20:53:27.999008 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Feb 13 20:53:27.999016 kernel: pnp: PnP ACPI: found 10 devices
Feb 13 20:53:27.999022 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:53:27.999028 kernel: NET: Registered PF_INET protocol family
Feb 13 20:53:27.999034 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:53:27.999040 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 13 20:53:27.999046 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:53:27.999052 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:53:27.999059 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 20:53:27.999065 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Feb 13 20:53:27.999071 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:53:27.999076 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:53:27.999082 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:53:27.999088 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:53:27.999135 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Feb 13 20:53:27.999183 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Feb 13 20:53:27.999232 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Feb 13 20:53:27.999283 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 13 20:53:27.999331 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 13 20:53:27.999380 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 13 20:53:27.999430 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 13 20:53:27.999524 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 20:53:27.999570 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 20:53:27.999617 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 20:53:27.999666 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 13 20:53:27.999713 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 13 20:53:27.999759 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 20:53:27.999807 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 13 20:53:27.999854 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 13 20:53:27.999903 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 20:53:27.999950 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 13 20:53:27.999996 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 13 20:53:28.000044 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 13 20:53:28.000092 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 13 20:53:28.000140 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 13 20:53:28.000186 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 13 20:53:28.000233 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 13 20:53:28.000279 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 13 20:53:28.000326 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 13 20:53:28.000367 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:53:28.000410 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:53:28.000476 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:53:28.000538 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Feb 13 20:53:28.000579 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Feb 13 20:53:28.000626 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Feb 13 20:53:28.000672 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 20:53:28.000723 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Feb 13 20:53:28.000768 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Feb 13 20:53:28.000815 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 20:53:28.000859 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Feb 13 20:53:28.000906 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Feb 13 20:53:28.000952 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Feb 13 20:53:28.000997 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Feb 13 20:53:28.001042 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Feb 13 20:53:28.001051 kernel: PCI: CLS 64 bytes, default 64
Feb 13 20:53:28.001057 kernel: DMAR: No ATSR found
Feb 13 20:53:28.001062 kernel: DMAR: No SATC found
Feb 13 20:53:28.001068 kernel: DMAR: dmar0: Using Queued invalidation
Feb 13 20:53:28.001114 kernel: pci 0000:00:00.0: Adding to iommu group 0
Feb 13 20:53:28.001164 kernel: pci 0000:00:01.0: Adding to iommu group 1
Feb 13 20:53:28.001210 kernel: pci 0000:00:08.0: Adding to iommu group 2
Feb 13 20:53:28.001258 kernel: pci 0000:00:12.0: Adding to iommu group 3
Feb 13 20:53:28.001304 kernel: pci 0000:00:14.0: Adding to iommu group 4
Feb 13 20:53:28.001351 kernel: pci 0000:00:14.2: Adding to iommu group 4
Feb 13 20:53:28.001397 kernel: pci 0000:00:15.0: Adding to iommu group 5
Feb 13 20:53:28.001468 kernel: pci 0000:00:15.1: Adding to iommu group 5
Feb 13 20:53:28.001536 kernel: pci 0000:00:16.0: Adding to iommu group 6
Feb 13 20:53:28.001582 kernel: pci 0000:00:16.1: Adding to iommu group 6
Feb 13 20:53:28.001632 kernel: pci 0000:00:16.4: Adding to iommu group 6
Feb 13 20:53:28.001678 kernel: pci 0000:00:17.0: Adding to iommu group 7
Feb 13 20:53:28.001725 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Feb 13 20:53:28.001771 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Feb 13 20:53:28.001819 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Feb 13 20:53:28.001867 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Feb 13 20:53:28.001914 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Feb 13 20:53:28.001959 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Feb 13 20:53:28.002009 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Feb 13 20:53:28.002055 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Feb 13 20:53:28.002102 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Feb 13 20:53:28.002151 kernel: pci 0000:01:00.0: Adding to iommu group 1
Feb 13 20:53:28.002199 kernel: pci 0000:01:00.1: Adding to iommu group 1
Feb 13 20:53:28.002247 kernel: pci 0000:03:00.0: Adding to iommu group 15
Feb 13 20:53:28.002294 kernel: pci 0000:04:00.0: Adding to iommu group 16
Feb 13 20:53:28.002343 kernel: pci 0000:06:00.0: Adding to iommu group 17
Feb 13 20:53:28.002394 kernel: pci 0000:07:00.0: Adding to iommu group 17
Feb 13 20:53:28.002402 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Feb 13 20:53:28.002408 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:53:28.002414 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Feb 13 20:53:28.002420 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Feb 13 20:53:28.002428 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Feb 13 20:53:28.002434 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Feb 13 20:53:28.002459 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Feb 13 20:53:28.002531 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Feb 13 20:53:28.002541 kernel: Initialise system trusted keyrings
Feb 13 20:53:28.002547 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Feb 13 20:53:28.002553 kernel: Key type asymmetric registered
Feb 13 20:53:28.002558 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:53:28.002564 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:53:28.002569 kernel: io scheduler mq-deadline registered
Feb 13 20:53:28.002575 kernel: io scheduler kyber registered
Feb 13 20:53:28.002581 kernel: io scheduler bfq registered
Feb 13 20:53:28.002628 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Feb 13 20:53:28.002675 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Feb 13 20:53:28.002722 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Feb 13 20:53:28.002769 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Feb 13 20:53:28.002816 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Feb 13 20:53:28.002862 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Feb 13 20:53:28.002915 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Feb 13 20:53:28.002925 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Feb 13 20:53:28.002931 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Feb 13 20:53:28.002937 kernel: pstore: Using crash dump compression: deflate
Feb 13 20:53:28.002942 kernel: pstore: Registered erst as persistent store backend
Feb 13 20:53:28.002948 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:53:28.002954 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:53:28.002960 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:53:28.002965 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 20:53:28.002971 kernel: hpet_acpi_add: no address or irqs in _CRS
Feb 13 20:53:28.003022 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Feb 13 20:53:28.003031 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 20:53:28.003073 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Feb 13 20:53:28.003117 kernel: rtc_cmos rtc_cmos: registered as rtc0
Feb 13 20:53:28.003160 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T20:53:26 UTC (1739480006)
Feb 13 20:53:28.003203 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Feb 13 20:53:28.003211 kernel: intel_pstate: Intel P-state driver initializing
Feb 13 20:53:28.003217 kernel: intel_pstate: Disabling energy efficiency optimization
Feb 13 20:53:28.003224 kernel: intel_pstate: HWP enabled
Feb 13 20:53:28.003230 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Feb 13 20:53:28.003236 kernel: vesafb: scrolling: redraw
Feb 13 20:53:28.003241 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Feb 13 20:53:28.003247 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000497cca88, using 768k, total 768k
Feb 13 20:53:28.003253 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:53:28.003259 kernel: fb0: VESA VGA frame buffer device
Feb 13 20:53:28.003264 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:53:28.003270 kernel: Segment Routing with IPv6
Feb 13 20:53:28.003277 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:53:28.003283 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:53:28.003288 kernel: Key type dns_resolver registered
Feb 13 20:53:28.003294 kernel: microcode: Microcode Update Driver: v2.2.
Feb 13 20:53:28.003299 kernel: IPI shorthand broadcast: enabled
Feb 13 20:53:28.003305 kernel: sched_clock: Marking stable (2477000565, 1385633660)->(4406340965, -543706740)
Feb 13 20:53:28.003311 kernel: registered taskstats version 1
Feb 13 20:53:28.003317 kernel: Loading compiled-in X.509 certificates
Feb 13 20:53:28.003323 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:53:28.003329 kernel: Key type .fscrypt registered
Feb 13 20:53:28.003335 kernel: Key type fscrypt-provisioning registered
Feb 13 20:53:28.003341 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:53:28.003346 kernel: ima: No architecture policies found
Feb 13 20:53:28.003352 kernel: clk: Disabling unused clocks
Feb 13 20:53:28.003358 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:53:28.003363 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:53:28.003369 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:53:28.003375 kernel: Run /init as init process
Feb 13 20:53:28.003381 kernel: with arguments:
Feb 13 20:53:28.003387 kernel: /init
Feb 13 20:53:28.003393 kernel: with environment:
Feb 13 20:53:28.003398 kernel: HOME=/
Feb 13 20:53:28.003404 kernel: TERM=linux
Feb 13 20:53:28.003410 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:53:28.003416 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:53:28.003427 systemd[1]: Detected architecture x86-64.
Feb 13 20:53:28.003433 systemd[1]: Running in initrd.
Feb 13 20:53:28.003439 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:53:28.003467 systemd[1]: Hostname set to .
Feb 13 20:53:28.003473 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:53:28.003501 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:53:28.003506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:53:28.003512 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:53:28.003520 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:53:28.003526 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:53:28.003532 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:53:28.003538 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:53:28.003545 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:53:28.003551 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:53:28.003557 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Feb 13 20:53:28.003564 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Feb 13 20:53:28.003570 kernel: clocksource: Switched to clocksource tsc
Feb 13 20:53:28.003576 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:53:28.003582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:53:28.003588 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:53:28.003594 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:53:28.003600 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:53:28.003606 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:53:28.003612 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:53:28.003619 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:53:28.003625 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:53:28.003631 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:53:28.003637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:53:28.003643 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:53:28.003649 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:53:28.003655 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:53:28.003661 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:53:28.003668 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:53:28.003674 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:53:28.003680 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:53:28.003686 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:53:28.003702 systemd-journald[268]: Collecting audit messages is disabled.
Feb 13 20:53:28.003718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:53:28.003725 systemd-journald[268]: Journal started
Feb 13 20:53:28.003738 systemd-journald[268]: Runtime Journal (/run/log/journal/4fe726295fda48eaa3734b7e56b83207) is 8.0M, max 639.9M, 631.9M free.
Feb 13 20:53:28.018315 systemd-modules-load[270]: Inserted module 'overlay' Feb 13 20:53:28.039486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:53:28.092479 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:53:28.097427 kernel: Bridge firewalling registered Feb 13 20:53:28.097457 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:53:28.116313 systemd-modules-load[270]: Inserted module 'br_netfilter' Feb 13 20:53:28.147842 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:53:28.147944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:53:28.148052 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:53:28.148134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:53:28.164676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:53:28.227854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:53:28.232417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:53:28.256903 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:53:28.290118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:53:28.311198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:53:28.333213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:53:28.372793 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:53:28.375728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:53:28.394943 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:53:28.396238 systemd-resolved[294]: Positive Trust Anchors: Feb 13 20:53:28.396245 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:53:28.396269 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:53:28.397931 systemd-resolved[294]: Defaulting to hostname 'linux'. Feb 13 20:53:28.398516 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:53:28.398571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:53:28.399863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:53:28.411776 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:53:28.434667 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Feb 13 20:53:28.455622 dracut-cmdline[308]: dracut-dracut-053 Feb 13 20:53:28.455622 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:53:28.633456 kernel: SCSI subsystem initialized Feb 13 20:53:28.655475 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:53:28.678454 kernel: iscsi: registered transport (tcp) Feb 13 20:53:28.710115 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:53:28.710132 kernel: QLogic iSCSI HBA Driver Feb 13 20:53:28.743928 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:53:28.764716 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:53:28.820900 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:53:28.820918 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:53:28.840737 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:53:28.898490 kernel: raid6: avx2x4 gen() 53416 MB/s Feb 13 20:53:28.930496 kernel: raid6: avx2x2 gen() 53808 MB/s Feb 13 20:53:28.967058 kernel: raid6: avx2x1 gen() 45225 MB/s Feb 13 20:53:28.967074 kernel: raid6: using algorithm avx2x2 gen() 53808 MB/s Feb 13 20:53:29.015086 kernel: raid6: .... xor() 31603 MB/s, rmw enabled Feb 13 20:53:29.015106 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:53:29.056476 kernel: xor: automatically using best checksumming function avx Feb 13 20:53:29.172481 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:53:29.178733 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:53:29.205751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:53:29.212393 systemd-udevd[496]: Using default interface naming scheme 'v255'. Feb 13 20:53:29.214857 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:53:29.255581 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:53:29.317716 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Feb 13 20:53:29.337759 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:53:29.363668 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:53:29.444418 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:53:29.480160 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 20:53:29.480194 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 20:53:29.491428 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:53:29.525470 kernel: ACPI: bus type USB registered Feb 13 20:53:29.525504 kernel: usbcore: registered new interface driver usbfs Feb 13 20:53:29.527707 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
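dracut echoes the kernel command line it will act on, and the same string is easy to parse mechanically. A sketch of splitting it into key/value arguments (an assumed helper, not dracut's shell code); in this sketch flags without '=' map to True and repeated keys keep the last occurrence:

    import shlex

    def parse_cmdline(cmdline: str) -> dict:
        args = {}
        for tok in shlex.split(cmdline):
            key, sep, val = tok.partition("=")
            args[key] = val if sep else True
        return args

    sample = "BOOT_IMAGE=/flatcar/vmlinuz-a rootflags=rw root=LABEL=ROOT flatcar.first_boot=detected"
    print(parse_cmdline(sample)["root"])                 # LABEL=ROOT
    print(parse_cmdline(sample)["flatcar.first_boot"])   # detected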
Feb 13 20:53:29.570340 kernel: usbcore: registered new interface driver hub Feb 13 20:53:29.570353 kernel: usbcore: registered new device driver usb Feb 13 20:53:29.577428 kernel: PTP clock support registered Feb 13 20:53:29.577448 kernel: libata version 3.00 loaded. Feb 13 20:53:29.605778 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:53:29.614221 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:53:29.614245 kernel: AES CTR mode by8 optimization enabled Feb 13 20:53:29.624428 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 20:53:29.767922 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 20:53:29.768049 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 20:53:29.768158 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 20:53:30.262218 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 20:53:30.262289 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 20:53:30.262354 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 20:53:30.262414 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 20:53:30.262485 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 20:53:30.262545 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 20:53:30.262554 kernel: hub 1-0:1.0: USB hub found Feb 13 20:53:30.262629 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 20:53:30.262637 kernel: scsi host0: ahci Feb 13 20:53:30.262697 kernel: hub 1-0:1.0: 16 ports detected Feb 13 20:53:30.262762 kernel: scsi host1: ahci Feb 13 20:53:30.262822 kernel: hub 2-0:1.0: USB hub found Feb 13 20:53:30.262890 kernel: scsi host2: ahci Feb 13 20:53:30.262947 kernel: hub 2-0:1.0: 10 ports detected Feb 13 20:53:30.263010 kernel: scsi host3: ahci Feb 13 20:53:30.263066 kernel: pps pps0: new PPS source ptp0 Feb 13 20:53:30.263128 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 20:53:30.263195 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 20:53:30.263256 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:c2 Feb 13 20:53:30.263316 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 20:53:30.263376 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 20:53:30.263440 kernel: pps pps1: new PPS source ptp1 Feb 13 20:53:30.263500 kernel: scsi host4: ahci Feb 13 20:53:30.263559 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 20:53:30.263624 kernel: scsi host5: ahci Feb 13 20:53:30.263682 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 20:53:30.263741 kernel: scsi host6: ahci Feb 13 20:53:30.263799 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:c3 Feb 13 20:53:30.263859 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Feb 13 20:53:30.263867 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 20:53:30.263925 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Feb 13 20:53:30.263935 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 20:53:30.263995 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Feb 13 20:53:30.264003 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 20:53:30.264098 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Feb 13 20:53:30.264107 kernel: hub 1-14:1.0: USB hub found Feb 13 20:53:30.264181 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Feb 13 20:53:30.264189 kernel: hub 1-14:1.0: 4 ports detected Feb 13 20:53:30.264254 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Feb 13 20:53:30.264264 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Feb 13 20:53:29.637216 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:53:29.670456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:53:30.327534 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 20:53:30.811004 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 20:53:30.811088 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 20:53:30.811197 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:53:30.811206 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811214 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 20:53:30.811280 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811293 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Feb 13 20:53:30.811355 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811364 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 20:53:30.811371 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811378 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 20:53:30.811385 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811393 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 20:53:30.811400 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 20:53:30.811407 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 20:53:30.811416 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 20:53:30.811428 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 20:53:29.692479 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:53:30.313534 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 20:53:30.887519 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 20:53:31.375505 kernel: ata2.00: Features: NCQ-prio Feb 13 20:53:31.375519 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 20:53:31.375596 kernel: ata1.00: Features: NCQ-prio Feb 13 20:53:31.375604 kernel: ata2.00: configured for UDMA/133 Feb 13 20:53:31.375612 kernel: ata1.00: configured for UDMA/133 Feb 13 20:53:31.375619 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 20:53:31.596165 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 20:53:31.596249 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 20:53:31.596367 kernel: usbcore: registered new interface driver usbhid Feb 13 20:53:31.596383 kernel: usbhid: USB HID core driver Feb 13 20:53:31.596399 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 20:53:31.596413 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 20:53:31.596528 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 20:53:31.596544 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 20:53:31.596666 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:31.596681 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 20:53:31.596782 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 20:53:31.596887 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 13 20:53:31.596991 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 13 20:53:31.597072 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 20:53:31.597143 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:53:31.597244 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Feb 13 20:53:31.597345 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:31.597360 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:53:31.597374 kernel: GPT:9289727 != 937703087 Feb 13 20:53:31.597388 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:53:31.597401 kernel: GPT:9289727 != 937703087 Feb 13 20:53:31.597414 kernel: GPT: Use GNU Parted to correct GPT errors. 
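The GPT complaint above is typical for a freshly flashed image: the image carries its backup GPT header at LBA 9289727 (an image of roughly 4.4 GiB), while on this 937703088-sector disk the backup header belongs in the last LBA, 937703087. Worked arithmetic (illustration only):

    sectors = 937703088          # logical blocks reported for the disk
    last_lba = sectors - 1       # 937703087: where the backup GPT header belongs
    image_alt_lba = 9289727      # where the flashed image's backup header sits
    image_bytes = (image_alt_lba + 1) * 512
    print(last_lba - image_alt_lba)      # gap in sectors
    print(image_bytes / 2**30)           # ~4.43 GiB source image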
Feb 13 20:53:31.597432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:31.597446 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 13 20:53:31.597542 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 20:53:31.597561 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Feb 13 20:53:31.597658 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 20:53:31.597755 kernel: sd 0:0:0:0: [sdb] Write Protect is off Feb 13 20:53:31.597819 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 20:53:31.597887 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 20:53:31.597948 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:53:31.598008 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 20:53:31.598073 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Feb 13 20:53:31.598133 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 20:53:31.598197 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 20:53:31.598205 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Feb 13 20:53:31.598264 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 13 20:53:30.313571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:53:31.698227 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (574) Feb 13 20:53:31.698242 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 13 20:53:31.698335 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (556) Feb 13 20:53:30.338584 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:53:30.370584 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:53:30.380520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:53:30.380549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:53:30.391525 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:53:30.411536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:53:30.421601 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:53:30.431797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:53:31.857522 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:31.857538 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:30.449926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:53:31.877537 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:30.486113 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:53:31.897516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:31.628400 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Feb 13 20:53:31.918524 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:31.918535 disk-uuid[716]: Primary Header is updated. Feb 13 20:53:31.918535 disk-uuid[716]: Secondary Entries is updated. 
Feb 13 20:53:31.918535 disk-uuid[716]: Secondary Header is updated. Feb 13 20:53:31.957445 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:31.713985 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Feb 13 20:53:31.743654 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 20:53:31.757619 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 20:53:31.786836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 20:53:31.818677 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:53:32.918916 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:32.939482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:32.939516 disk-uuid[717]: The operation has completed successfully. Feb 13 20:53:32.975396 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:53:32.975510 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:53:33.010706 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:53:33.049624 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:53:33.049691 sh[734]: Success Feb 13 20:53:33.079440 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:53:33.103555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:53:33.112786 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:53:33.172668 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:53:33.172689 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:53:33.194005 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:53:33.212966 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:53:33.230603 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:53:33.267455 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:53:33.268353 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:53:33.277855 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:53:33.288675 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:53:33.398393 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:33.398410 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:53:33.398507 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:53:33.398529 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:53:33.398543 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:53:33.421487 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:33.435719 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:53:33.446910 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:53:33.478720 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
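verity-setup maps /dev/mapper/usr through dm-verity: each 4 KiB block of the /usr partition is hashed, the digests are hashed again level by level, and the resulting root hash must equal the verity.usrhash= value pinned on the kernel command line. A minimal sketch of that hash-tree idea (conceptual only, not the kernel's on-disk verity format):

    import hashlib

    def root_hash(data: bytes, block: int = 4096) -> str:
        fanout = block // 32  # sha256 digests packed per interior block
        level = [hashlib.sha256(data[i:i + block]).digest()
                 for i in range(0, max(len(data), 1), block)]
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + fanout])).digest()
                     for i in range(0, len(level), fanout)]
        return level[0].hex()

    print(root_hash(b"\x00" * 16384))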
Feb 13 20:53:33.491632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:53:33.521547 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:53:33.534017 systemd-networkd[917]: lo: Link UP Feb 13 20:53:33.541164 ignition[880]: Ignition 2.19.0 Feb 13 20:53:33.534020 systemd-networkd[917]: lo: Gained carrier Feb 13 20:53:33.541168 ignition[880]: Stage: fetch-offline Feb 13 20:53:33.536337 systemd-networkd[917]: Enumeration completed Feb 13 20:53:33.541192 ignition[880]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:33.536409 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:53:33.541200 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:33.537079 systemd-networkd[917]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:53:33.541254 ignition[880]: parsed url from cmdline: "" Feb 13 20:53:33.543437 unknown[880]: fetched base config from "system" Feb 13 20:53:33.541256 ignition[880]: no config URL provided Feb 13 20:53:33.543444 unknown[880]: fetched user config from "system" Feb 13 20:53:33.541258 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:53:33.553833 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:53:33.541281 ignition[880]: parsing config with SHA512: cf5da808dfcfa22882dacddd97112394fe95880d226448225ece6f5f5c7856592085e8754d83cc7be6485b23404fc828bda67a92b6f718932dfbb73e68dbe98b Feb 13 20:53:33.565611 systemd-networkd[917]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:53:33.543687 ignition[880]: fetch-offline: fetch-offline passed Feb 13 20:53:33.573917 systemd[1]: Reached target network.target - Network. Feb 13 20:53:33.543690 ignition[880]: POST message to Packet Timeline Feb 13 20:53:33.589725 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 20:53:33.543692 ignition[880]: POST Status error: resource requires networking Feb 13 20:53:33.784618 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 20:53:33.593483 systemd-networkd[917]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:53:33.543727 ignition[880]: Ignition finished successfully Feb 13 20:53:33.599613 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:53:33.619560 ignition[929]: Ignition 2.19.0 Feb 13 20:53:33.776091 systemd-networkd[917]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
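Note the fetch-offline trace: Ignition reads the embedded /usr/lib/ignition/user.ign, and the SHA512 it prints is simply the digest of the raw config bytes, which makes it easy to match a boot against a known config. Reproducing that kind of fingerprint (the config content here is a stand-in, not the one from this boot):

    import hashlib

    config = b'{"ignition": {"version": "3.3.0"}}'  # hypothetical user.ign contents
    print(hashlib.sha512(config).hexdigest())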
Feb 13 20:53:33.619569 ignition[929]: Stage: kargs Feb 13 20:53:33.619808 ignition[929]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:33.619823 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:33.621150 ignition[929]: kargs: kargs passed Feb 13 20:53:33.621156 ignition[929]: POST message to Packet Timeline Feb 13 20:53:33.621174 ignition[929]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:33.622103 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35631->[::1]:53: read: connection refused Feb 13 20:53:33.822972 ignition[929]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 20:53:33.824015 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40369->[::1]:53: read: connection refused Feb 13 20:53:34.015540 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 20:53:34.016539 systemd-networkd[917]: eno1: Link UP Feb 13 20:53:34.016678 systemd-networkd[917]: eno2: Link UP Feb 13 20:53:34.016809 systemd-networkd[917]: enp1s0f0np0: Link UP Feb 13 20:53:34.016956 systemd-networkd[917]: enp1s0f0np0: Gained carrier Feb 13 20:53:34.027672 systemd-networkd[917]: enp1s0f1np1: Link UP Feb 13 20:53:34.055576 systemd-networkd[917]: enp1s0f0np0: DHCPv4 address 147.28.180.203/31, gateway 147.28.180.202 acquired from 145.40.83.140 Feb 13 20:53:34.224393 ignition[929]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 20:53:34.225596 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53252->[::1]:53: read: connection refused Feb 13 20:53:34.817196 systemd-networkd[917]: enp1s0f1np1: Gained carrier Feb 13 20:53:35.026056 ignition[929]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 20:53:35.027096 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56612->[::1]:53: read: connection refused Feb 13 20:53:35.073018 systemd-networkd[917]: enp1s0f0np0: Gained IPv6LL Feb 13 20:53:36.417029 systemd-networkd[917]: enp1s0f1np1: Gained IPv6LL Feb 13 20:53:36.628664 ignition[929]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 20:53:36.629726 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46360->[::1]:53: read: connection refused Feb 13 20:53:39.832401 ignition[929]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 20:53:40.454078 ignition[929]: GET result: OK Feb 13 20:53:40.788542 ignition[929]: Ignition finished successfully Feb 13 20:53:40.793474 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:53:40.824656 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
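The kargs stage shows the retry pattern plainly: every GET to https://metadata.packet.net/metadata fails with a DNS lookup against [::1]:53 until the NICs gain carrier and DHCP completes, and attempt #6 finally succeeds. A simplified stand-in for that loop (not Ignition's actual Go implementation); the exponential backoff here mirrors the roughly doubling gaps between the logged attempts:

    import time
    import urllib.request

    def fetch_with_retry(url: str, attempts: int = 6, base: float = 0.2) -> bytes:
        for n in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as err:  # DNS refused, no route, timeout, ...
                print(f"GET {url}: attempt #{n} failed: {err}")
                time.sleep(base * 2 ** (n - 1))
        raise RuntimeError(f"gave up on {url} after {attempts} attempts")

    # fetch_with_retry("https://metadata.packet.net/metadata")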
Feb 13 20:53:40.831181 ignition[953]: Ignition 2.19.0 Feb 13 20:53:40.831186 ignition[953]: Stage: disks Feb 13 20:53:40.831309 ignition[953]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:40.831316 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:40.832015 ignition[953]: disks: disks passed Feb 13 20:53:40.832018 ignition[953]: POST message to Packet Timeline Feb 13 20:53:40.832030 ignition[953]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:41.426450 ignition[953]: GET result: OK Feb 13 20:53:41.818261 ignition[953]: Ignition finished successfully Feb 13 20:53:41.821623 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:53:41.837634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:53:41.855670 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:53:41.867026 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:53:41.887965 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:53:41.915832 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:53:41.943688 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:53:41.977990 systemd-fsck[972]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:53:41.989125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:53:42.011628 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:53:42.109426 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:53:42.109713 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:53:42.118931 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:53:42.135740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:53:42.160987 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:53:42.170191 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:53:42.292022 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (981) Feb 13 20:53:42.292046 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:42.292055 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:53:42.292062 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:53:42.292069 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:53:42.292077 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:53:42.192080 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Feb 13 20:53:42.324713 coreos-metadata[983]: Feb 13 20:53:42.247 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:53:42.312781 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:53:42.312803 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:53:42.337133 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:53:42.400817 coreos-metadata[984]: Feb 13 20:53:42.302 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:53:42.355920 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:53:42.397926 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:53:42.442893 initrd-setup-root[1013]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:53:42.453512 initrd-setup-root[1020]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:53:42.464506 initrd-setup-root[1027]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:53:42.474544 initrd-setup-root[1034]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:53:42.499414 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:53:42.524673 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:53:42.550447 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:42.542936 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:53:42.566104 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:53:42.582569 ignition[1105]: INFO : Ignition 2.19.0 Feb 13 20:53:42.582569 ignition[1105]: INFO : Stage: mount Feb 13 20:53:42.582569 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:42.582569 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:42.582569 ignition[1105]: INFO : mount: mount passed Feb 13 20:53:42.582569 ignition[1105]: INFO : POST message to Packet Timeline Feb 13 20:53:42.582569 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:42.651519 coreos-metadata[984]: Feb 13 20:53:42.623 INFO Fetch successful Feb 13 20:53:42.583588 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:53:42.655540 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 20:53:42.655592 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 13 20:53:42.737652 ignition[1105]: INFO : GET result: OK Feb 13 20:53:42.828350 coreos-metadata[983]: Feb 13 20:53:42.828 INFO Fetch successful Feb 13 20:53:42.905867 coreos-metadata[983]: Feb 13 20:53:42.905 INFO wrote hostname ci-4081.3.1-a-f6aaf2d828 to /sysroot/etc/hostname Feb 13 20:53:42.907430 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:53:43.109968 ignition[1105]: INFO : Ignition finished successfully Feb 13 20:53:43.113096 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:53:43.146702 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:53:43.157825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:53:43.204455 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1127) Feb 13 20:53:43.233775 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:43.233791 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:53:43.251357 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:53:43.289164 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:53:43.289187 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:53:43.302107 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
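flatcar-metadata-hostname fetches the Equinix Metal (Packet) metadata and persists the hostname into the target root before the switch. Conceptually it amounts to the following (paths and the metadata field name are assumptions, not the agent's exact code):

    import json
    import urllib.request

    with urllib.request.urlopen("https://metadata.packet.net/metadata") as resp:
        meta = json.load(resp)

    # the log shows: wrote hostname ci-4081.3.1-a-f6aaf2d828 to /sysroot/etc/hostname
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(meta["hostname"] + "\n")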
Feb 13 20:53:43.338974 ignition[1144]: INFO : Ignition 2.19.0 Feb 13 20:53:43.338974 ignition[1144]: INFO : Stage: files Feb 13 20:53:43.353660 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:43.353660 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:43.353660 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:53:43.353660 ignition[1144]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:53:43.353660 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:53:43.342559 unknown[1144]: wrote ssh authorized keys file for user: core Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 
20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:53:43.765755 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:53:43.994215 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:53:44.318273 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:53:44.318273 ignition[1144]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: files passed Feb 13 20:53:44.348734 ignition[1144]: INFO : POST message to Packet Timeline Feb 13 20:53:44.348734 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:44.793307 ignition[1144]: INFO : GET result: OK Feb 13 20:53:45.608022 ignition[1144]: INFO : Ignition finished successfully Feb 13 20:53:45.612030 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:53:45.645664 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:53:45.655960 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
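Each op(N) in the files stage corresponds to an entry in the rendered Ignition config: files, links, systemd drop-ins, and unit presets. A hypothetical config fragment that would produce ops like the ones above (the spec version and contents are assumptions; the actual config is not shown in this log):

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [{"path": "/etc/flatcar/update.conf",
                       "contents": {"source": "data:,REBOOT_STRATEGY%3Doff%0A"}}],
            "links": [{"path": "/etc/extensions/kubernetes.raw",
                       "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"}],
        },
        "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
    }
    print(json.dumps(config, indent=2))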
Feb 13 20:53:45.666811 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:53:45.666853 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:53:45.708305 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:53:45.738759 initrd-setup-root-after-ignition[1182]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:53:45.738759 initrd-setup-root-after-ignition[1182]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:53:45.727563 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:53:45.777944 initrd-setup-root-after-ignition[1186]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:53:45.764884 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:53:45.842676 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:53:45.842724 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:53:45.861828 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:53:45.883638 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:53:45.903735 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:53:45.923575 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:53:45.990520 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:53:46.018881 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:53:46.048349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:53:46.060005 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:53:46.082156 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:53:46.100074 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:53:46.100498 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:53:46.127316 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:53:46.149089 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:53:46.168057 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:53:46.186084 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:53:46.207086 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:53:46.228078 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:53:46.248069 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:53:46.269113 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:53:46.290104 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:53:46.310069 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:53:46.327951 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:53:46.328353 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:53:46.363901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:53:46.374108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Feb 13 20:53:46.394953 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:53:46.395416 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:53:46.416982 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:53:46.417378 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:53:46.449035 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:53:46.449508 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:53:46.469286 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:53:46.486938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:53:46.487375 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:53:46.508114 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:53:46.526091 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:53:46.546141 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:53:46.546468 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:53:46.566092 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:53:46.566397 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:53:46.589153 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:53:46.701613 ignition[1206]: INFO : Ignition 2.19.0 Feb 13 20:53:46.701613 ignition[1206]: INFO : Stage: umount Feb 13 20:53:46.701613 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:46.701613 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:46.701613 ignition[1206]: INFO : umount: umount passed Feb 13 20:53:46.701613 ignition[1206]: INFO : POST message to Packet Timeline Feb 13 20:53:46.701613 ignition[1206]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:46.589574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:53:46.608137 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:53:46.608541 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:53:46.626180 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:53:46.626585 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:53:46.656750 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:53:46.670558 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:53:46.670709 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:53:46.703697 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:53:46.718631 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:53:46.718923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:53:46.737203 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:53:46.737695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:53:46.786495 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:53:46.791817 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:53:46.792066 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 20:53:46.866959 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:53:46.867235 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:53:47.440709 ignition[1206]: INFO : GET result: OK Feb 13 20:53:47.810203 ignition[1206]: INFO : Ignition finished successfully Feb 13 20:53:47.811297 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:53:47.811405 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:53:47.829346 systemd[1]: Stopped target network.target - Network. Feb 13 20:53:47.844670 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:53:47.844937 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:53:47.862771 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:53:47.862916 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:53:47.880851 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:53:47.881012 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:53:47.899844 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:53:47.900013 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:53:47.918850 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:53:47.919021 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:53:47.938232 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:53:47.948550 systemd-networkd[917]: enp1s0f1np1: DHCPv6 lease lost Feb 13 20:53:47.955926 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:53:47.962670 systemd-networkd[917]: enp1s0f0np0: DHCPv6 lease lost Feb 13 20:53:47.974605 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:53:47.974890 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:53:47.993993 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:53:47.994457 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:53:48.014226 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:53:48.014440 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:53:48.047612 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:53:48.072588 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:53:48.072630 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:53:48.092739 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:53:48.092831 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:53:48.112781 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:53:48.112934 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:53:48.130809 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:53:48.130973 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:53:48.151063 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:53:48.172624 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:53:48.173005 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 20:53:48.206454 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:53:48.206600 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:53:48.211937 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:53:48.212042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:53:48.239695 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:53:48.239834 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:53:48.269717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:53:48.269903 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:53:48.298656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:53:48.298829 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:53:48.337555 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:53:48.608719 systemd-journald[268]: Failed to send stream file descriptor to service manager: Connection refused Feb 13 20:53:48.608744 systemd-journald[268]: Received SIGTERM from PID 1 (systemd). Feb 13 20:53:48.357503 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:53:48.357544 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:53:48.389537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:53:48.389579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:53:48.408993 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:53:48.409102 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:53:48.465899 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:53:48.466173 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:53:48.481771 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:53:48.512853 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:53:48.550835 systemd[1]: Switching root. 
Feb 13 20:53:48.705538 systemd-journald[268]: Journal stopped
Feb 13 20:53:29.570340 kernel: usbcore: registered new interface driver hub Feb 13 20:53:29.570353 kernel: usbcore: registered new device driver usb Feb 13 20:53:29.577428 kernel: PTP clock support registered Feb 13 20:53:29.577448 kernel: libata version 3.00 loaded. Feb 13 20:53:29.605778 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:53:29.614221 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:53:29.614245 kernel: AES CTR mode by8 optimization enabled Feb 13 20:53:29.624428 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 20:53:29.767922 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 20:53:29.768049 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 20:53:29.768158 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 20:53:30.262218 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 20:53:30.262289 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 20:53:30.262354 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 20:53:30.262414 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 20:53:30.262485 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 20:53:30.262545 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 20:53:30.262554 kernel: hub 1-0:1.0: USB hub found Feb 13 20:53:30.262629 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 20:53:30.262637 kernel: scsi host0: ahci Feb 13 20:53:30.262697 kernel: hub 1-0:1.0: 16 ports detected Feb 13 20:53:30.262762 kernel: scsi host1: ahci Feb 13 20:53:30.262822 kernel: hub 2-0:1.0: USB hub found Feb 13 20:53:30.262890 kernel: scsi host2: ahci Feb 13 20:53:30.262947 kernel: hub 2-0:1.0: 10 ports detected Feb 13 20:53:30.263010 kernel: scsi host3: ahci Feb 13 20:53:30.263066 kernel: pps pps0: new PPS source ptp0 Feb 13 20:53:30.263128 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 20:53:30.263195 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 20:53:30.263256 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:c2 Feb 13 20:53:30.263316 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 20:53:30.263376 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 20:53:30.263440 kernel: pps pps1: new PPS source ptp1 Feb 13 20:53:30.263500 kernel: scsi host4: ahci Feb 13 20:53:30.263559 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 20:53:30.263624 kernel: scsi host5: ahci Feb 13 20:53:30.263682 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 20:53:30.263741 kernel: scsi host6: ahci Feb 13 20:53:30.263799 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:c3 Feb 13 20:53:30.263859 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Feb 13 20:53:30.263867 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 20:53:30.263925 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Feb 13 20:53:30.263935 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 20:53:30.263995 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Feb 13 20:53:30.264003 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 20:53:30.264098 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Feb 13 20:53:30.264107 kernel: hub 1-14:1.0: USB hub found Feb 13 20:53:30.264181 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Feb 13 20:53:30.264189 kernel: hub 1-14:1.0: 4 ports detected Feb 13 20:53:30.264254 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Feb 13 20:53:30.264264 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Feb 13 20:53:29.637216 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:53:29.670456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:53:30.327534 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 20:53:30.811004 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 20:53:30.811088 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 20:53:30.811197 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:53:30.811206 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811214 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 20:53:30.811280 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811293 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Feb 13 20:53:30.811355 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811364 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 20:53:30.811371 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811378 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 20:53:30.811385 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 20:53:30.811393 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 20:53:30.811400 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 20:53:30.811407 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 20:53:30.811416 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 20:53:30.811428 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 20:53:29.692479 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:53:30.313534 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:53:30.887519 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 20:53:31.375505 kernel: ata2.00: Features: NCQ-prio Feb 13 20:53:31.375519 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 20:53:31.375596 kernel: ata1.00: Features: NCQ-prio Feb 13 20:53:31.375604 kernel: ata2.00: configured for UDMA/133 Feb 13 20:53:31.375612 kernel: ata1.00: configured for UDMA/133 Feb 13 20:53:31.375619 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 20:53:31.596165 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 20:53:31.596249 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 20:53:31.596367 kernel: usbcore: registered new interface driver usbhid Feb 13 20:53:31.596383 kernel: usbhid: USB HID core driver Feb 13 20:53:31.596399 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 20:53:31.596413 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 20:53:31.596528 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 20:53:31.596544 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 20:53:31.596666 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:31.596681 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 20:53:31.596782 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 20:53:31.596887 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 13 20:53:31.596991 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 13 20:53:31.597072 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 20:53:31.597143 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:53:31.597244 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Feb 13 20:53:31.597345 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:31.597360 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:53:31.597374 kernel: GPT:9289727 != 937703087 Feb 13 20:53:31.597388 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:53:31.597401 kernel: GPT:9289727 != 937703087 Feb 13 20:53:31.597414 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 13 20:53:31.597432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:31.597446 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 13 20:53:31.597542 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 20:53:31.597561 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Feb 13 20:53:31.597658 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 20:53:31.597755 kernel: sd 0:0:0:0: [sdb] Write Protect is off Feb 13 20:53:31.597819 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 20:53:31.597887 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 20:53:31.597948 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:53:31.598008 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 20:53:31.598073 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Feb 13 20:53:31.598133 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 20:53:31.598197 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 20:53:31.598205 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Feb 13 20:53:31.598264 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 13 20:53:30.313571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:53:31.698227 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (574) Feb 13 20:53:31.698242 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 13 20:53:31.698335 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (556) Feb 13 20:53:30.338584 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:53:30.370584 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:53:30.380520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:53:30.380549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:53:30.391525 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:53:30.411536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:53:30.421601 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:53:30.431797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:53:31.857522 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:31.857538 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:30.449926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:53:31.877537 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:30.486113 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:53:31.897516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:31.628400 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Feb 13 20:53:31.918524 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:31.918535 disk-uuid[716]: Primary Header is updated. Feb 13 20:53:31.918535 disk-uuid[716]: Secondary Entries is updated. 
Feb 13 20:53:31.918535 disk-uuid[716]: Secondary Header is updated. Feb 13 20:53:31.957445 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:31.713985 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Feb 13 20:53:31.743654 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 20:53:31.757619 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 20:53:31.786836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 20:53:31.818677 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:53:32.918916 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:53:32.939482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:53:32.939516 disk-uuid[717]: The operation has completed successfully. Feb 13 20:53:32.975396 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:53:32.975510 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:53:33.010706 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:53:33.049624 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:53:33.049691 sh[734]: Success Feb 13 20:53:33.079440 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:53:33.103555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:53:33.112786 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:53:33.172668 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:53:33.172689 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:53:33.194005 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:53:33.212966 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:53:33.230603 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:53:33.267455 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:53:33.268353 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:53:33.277855 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:53:33.288675 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:53:33.398393 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:33.398410 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:53:33.398507 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:53:33.398529 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:53:33.398543 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:53:33.421487 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:33.435719 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:53:33.446910 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:53:33.478720 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 20:53:33.491632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:53:33.521547 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:53:33.534017 systemd-networkd[917]: lo: Link UP Feb 13 20:53:33.541164 ignition[880]: Ignition 2.19.0 Feb 13 20:53:33.534020 systemd-networkd[917]: lo: Gained carrier Feb 13 20:53:33.541168 ignition[880]: Stage: fetch-offline Feb 13 20:53:33.536337 systemd-networkd[917]: Enumeration completed Feb 13 20:53:33.541192 ignition[880]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:33.536409 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:53:33.541200 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:33.537079 systemd-networkd[917]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:53:33.541254 ignition[880]: parsed url from cmdline: "" Feb 13 20:53:33.543437 unknown[880]: fetched base config from "system" Feb 13 20:53:33.541256 ignition[880]: no config URL provided Feb 13 20:53:33.543444 unknown[880]: fetched user config from "system" Feb 13 20:53:33.541258 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:53:33.553833 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:53:33.541281 ignition[880]: parsing config with SHA512: cf5da808dfcfa22882dacddd97112394fe95880d226448225ece6f5f5c7856592085e8754d83cc7be6485b23404fc828bda67a92b6f718932dfbb73e68dbe98b Feb 13 20:53:33.565611 systemd-networkd[917]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:53:33.543687 ignition[880]: fetch-offline: fetch-offline passed Feb 13 20:53:33.573917 systemd[1]: Reached target network.target - Network. Feb 13 20:53:33.543690 ignition[880]: POST message to Packet Timeline Feb 13 20:53:33.589725 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 20:53:33.543692 ignition[880]: POST Status error: resource requires networking Feb 13 20:53:33.784618 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 20:53:33.593483 systemd-networkd[917]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:53:33.543727 ignition[880]: Ignition finished successfully Feb 13 20:53:33.599613 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:53:33.619560 ignition[929]: Ignition 2.19.0 Feb 13 20:53:33.776091 systemd-networkd[917]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 20:53:33.619569 ignition[929]: Stage: kargs Feb 13 20:53:33.619808 ignition[929]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:33.619823 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:33.621150 ignition[929]: kargs: kargs passed Feb 13 20:53:33.621156 ignition[929]: POST message to Packet Timeline Feb 13 20:53:33.621174 ignition[929]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:33.622103 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35631->[::1]:53: read: connection refused Feb 13 20:53:33.822972 ignition[929]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 20:53:33.824015 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40369->[::1]:53: read: connection refused Feb 13 20:53:34.015540 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 20:53:34.016539 systemd-networkd[917]: eno1: Link UP Feb 13 20:53:34.016678 systemd-networkd[917]: eno2: Link UP Feb 13 20:53:34.016809 systemd-networkd[917]: enp1s0f0np0: Link UP Feb 13 20:53:34.016956 systemd-networkd[917]: enp1s0f0np0: Gained carrier Feb 13 20:53:34.027672 systemd-networkd[917]: enp1s0f1np1: Link UP Feb 13 20:53:34.055576 systemd-networkd[917]: enp1s0f0np0: DHCPv4 address 147.28.180.203/31, gateway 147.28.180.202 acquired from 145.40.83.140 Feb 13 20:53:34.224393 ignition[929]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 20:53:34.225596 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53252->[::1]:53: read: connection refused Feb 13 20:53:34.817196 systemd-networkd[917]: enp1s0f1np1: Gained carrier Feb 13 20:53:35.026056 ignition[929]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 20:53:35.027096 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56612->[::1]:53: read: connection refused Feb 13 20:53:35.073018 systemd-networkd[917]: enp1s0f0np0: Gained IPv6LL Feb 13 20:53:36.417029 systemd-networkd[917]: enp1s0f1np1: Gained IPv6LL Feb 13 20:53:36.628664 ignition[929]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 20:53:36.629726 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46360->[::1]:53: read: connection refused Feb 13 20:53:39.832401 ignition[929]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 20:53:40.454078 ignition[929]: GET result: OK Feb 13 20:53:40.788542 ignition[929]: Ignition finished successfully Feb 13 20:53:40.793474 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:53:40.824656 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 20:53:40.831181 ignition[953]: Ignition 2.19.0 Feb 13 20:53:40.831186 ignition[953]: Stage: disks Feb 13 20:53:40.831309 ignition[953]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:40.831316 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:40.832015 ignition[953]: disks: disks passed Feb 13 20:53:40.832018 ignition[953]: POST message to Packet Timeline Feb 13 20:53:40.832030 ignition[953]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:41.426450 ignition[953]: GET result: OK Feb 13 20:53:41.818261 ignition[953]: Ignition finished successfully Feb 13 20:53:41.821623 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:53:41.837634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:53:41.855670 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:53:41.867026 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:53:41.887965 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:53:41.915832 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:53:41.943688 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:53:41.977990 systemd-fsck[972]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:53:41.989125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:53:42.011628 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:53:42.109426 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:53:42.109713 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:53:42.118931 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:53:42.135740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:53:42.160987 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:53:42.170191 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:53:42.292022 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (981) Feb 13 20:53:42.292046 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:42.292055 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:53:42.292062 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:53:42.292069 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:53:42.292077 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:53:42.192080 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Feb 13 20:53:42.324713 coreos-metadata[983]: Feb 13 20:53:42.247 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:53:42.312781 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:53:42.312803 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:53:42.337133 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:53:42.400817 coreos-metadata[984]: Feb 13 20:53:42.302 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:53:42.355920 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:53:42.397926 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:53:42.442893 initrd-setup-root[1013]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:53:42.453512 initrd-setup-root[1020]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:53:42.464506 initrd-setup-root[1027]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:53:42.474544 initrd-setup-root[1034]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:53:42.499414 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:53:42.524673 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:53:42.550447 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:42.542936 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:53:42.566104 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:53:42.582569 ignition[1105]: INFO : Ignition 2.19.0 Feb 13 20:53:42.582569 ignition[1105]: INFO : Stage: mount Feb 13 20:53:42.582569 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:42.582569 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:42.582569 ignition[1105]: INFO : mount: mount passed Feb 13 20:53:42.582569 ignition[1105]: INFO : POST message to Packet Timeline Feb 13 20:53:42.582569 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:42.651519 coreos-metadata[984]: Feb 13 20:53:42.623 INFO Fetch successful Feb 13 20:53:42.583588 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:53:42.655540 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 20:53:42.655592 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 13 20:53:42.737652 ignition[1105]: INFO : GET result: OK Feb 13 20:53:42.828350 coreos-metadata[983]: Feb 13 20:53:42.828 INFO Fetch successful Feb 13 20:53:42.905867 coreos-metadata[983]: Feb 13 20:53:42.905 INFO wrote hostname ci-4081.3.1-a-f6aaf2d828 to /sysroot/etc/hostname Feb 13 20:53:42.907430 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:53:43.109968 ignition[1105]: INFO : Ignition finished successfully Feb 13 20:53:43.113096 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:53:43.146702 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:53:43.157825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:53:43.204455 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1127) Feb 13 20:53:43.233775 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:53:43.233791 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:53:43.251357 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:53:43.289164 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:53:43.289187 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:53:43.302107 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:53:43.338974 ignition[1144]: INFO : Ignition 2.19.0 Feb 13 20:53:43.338974 ignition[1144]: INFO : Stage: files Feb 13 20:53:43.353660 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:43.353660 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:43.353660 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:53:43.353660 ignition[1144]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:53:43.353660 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:53:43.353660 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:53:43.342559 unknown[1144]: wrote ssh authorized keys file for user: core Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:53:43.516484 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:53:43.765755 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:53:43.994215 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:53:44.318273 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:53:44.318273 ignition[1144]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:53:44.348734 ignition[1144]: INFO : files: files passed Feb 13 20:53:44.348734 ignition[1144]: INFO : POST message to Packet Timeline Feb 13 20:53:44.348734 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:44.793307 ignition[1144]: INFO : GET result: OK Feb 13 20:53:45.608022 ignition[1144]: INFO : Ignition finished successfully Feb 13 20:53:45.612030 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:53:45.645664 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:53:45.655960 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:53:45.666811 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:53:45.666853 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:53:45.708305 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:53:45.738759 initrd-setup-root-after-ignition[1182]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:53:45.738759 initrd-setup-root-after-ignition[1182]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:53:45.727563 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:53:45.777944 initrd-setup-root-after-ignition[1186]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:53:45.764884 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:53:45.842676 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:53:45.842724 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:53:45.861828 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:53:45.883638 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:53:45.903735 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:53:45.923575 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:53:45.990520 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:53:46.018881 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:53:46.048349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:53:46.060005 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:53:46.082156 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:53:46.100074 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:53:46.100498 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:53:46.127316 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:53:46.149089 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:53:46.168057 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:53:46.186084 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:53:46.207086 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:53:46.228078 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:53:46.248069 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:53:46.269113 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:53:46.290104 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:53:46.310069 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:53:46.327951 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:53:46.328353 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:53:46.363901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:53:46.374108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Feb 13 20:53:46.394953 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:53:46.395416 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:53:46.416982 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:53:46.417378 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:53:46.449035 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:53:46.449508 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:53:46.469286 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:53:46.486938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:53:46.487375 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:53:46.508114 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:53:46.526091 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:53:46.546141 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:53:46.546468 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:53:46.566092 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:53:46.566397 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:53:46.589153 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:53:46.701613 ignition[1206]: INFO : Ignition 2.19.0 Feb 13 20:53:46.701613 ignition[1206]: INFO : Stage: umount Feb 13 20:53:46.701613 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:53:46.701613 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:53:46.701613 ignition[1206]: INFO : umount: umount passed Feb 13 20:53:46.701613 ignition[1206]: INFO : POST message to Packet Timeline Feb 13 20:53:46.701613 ignition[1206]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:53:46.589574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:53:46.608137 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:53:46.608541 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:53:46.626180 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:53:46.626585 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:53:46.656750 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:53:46.670558 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:53:46.670709 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:53:46.703697 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:53:46.718631 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:53:46.718923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:53:46.737203 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:53:46.737695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:53:46.786495 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:53:46.791817 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:53:46.792066 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 20:53:46.866959 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:53:46.867235 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:53:47.440709 ignition[1206]: INFO : GET result: OK Feb 13 20:53:47.810203 ignition[1206]: INFO : Ignition finished successfully Feb 13 20:53:47.811297 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:53:47.811405 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:53:47.829346 systemd[1]: Stopped target network.target - Network. Feb 13 20:53:47.844670 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:53:47.844937 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:53:47.862771 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:53:47.862916 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:53:47.880851 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:53:47.881012 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:53:47.899844 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:53:47.900013 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:53:47.918850 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:53:47.919021 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:53:47.938232 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:53:47.948550 systemd-networkd[917]: enp1s0f1np1: DHCPv6 lease lost Feb 13 20:53:47.955926 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:53:47.962670 systemd-networkd[917]: enp1s0f0np0: DHCPv6 lease lost Feb 13 20:53:47.974605 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:53:47.974890 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:53:47.993993 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:53:47.994457 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:53:48.014226 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:53:48.014440 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:53:48.047612 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:53:48.072588 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:53:48.072630 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:53:48.092739 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:53:48.092831 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:53:48.112781 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:53:48.112934 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:53:48.130809 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:53:48.130973 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:53:48.151063 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:53:48.172624 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:53:48.173005 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 20:53:48.206454 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:53:48.206600 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:53:48.211937 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:53:48.212042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:53:48.239695 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:53:48.239834 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:53:48.269717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:53:48.269903 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:53:48.298656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:53:48.298829 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:53:48.337555 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:53:48.608719 systemd-journald[268]: Failed to send stream file descriptor to service manager: Connection refused Feb 13 20:53:48.608744 systemd-journald[268]: Received SIGTERM from PID 1 (systemd). Feb 13 20:53:48.357503 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:53:48.357544 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:53:48.389537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:53:48.389579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:53:48.408993 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:53:48.409102 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:53:48.465899 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:53:48.466173 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:53:48.481771 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:53:48.512853 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:53:48.550835 systemd[1]: Switching root. Feb 13 20:53:48.705538 systemd-journald[268]: Journal stopped Feb 13 20:53:51.153592 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:53:51.153607 kernel: SELinux: policy capability open_perms=1 Feb 13 20:53:51.153614 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:53:51.153621 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:53:51.153627 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:53:51.153634 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:53:51.153640 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:53:51.153646 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:53:51.153651 kernel: audit: type=1403 audit(1739480028.947:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:53:51.153658 systemd[1]: Successfully loaded SELinux policy in 159.594ms. Feb 13 20:53:51.153666 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.071ms. 
Feb 13 20:53:51.153673 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:53:51.153679 systemd[1]: Detected architecture x86-64. Feb 13 20:53:51.153686 systemd[1]: Detected first boot. Feb 13 20:53:51.153692 systemd[1]: Hostname set to . Feb 13 20:53:51.153700 systemd[1]: Initializing machine ID from random generator. Feb 13 20:53:51.153707 zram_generator::config[1274]: No configuration found. Feb 13 20:53:51.153713 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:53:51.153720 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:53:51.153726 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 20:53:51.153733 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:53:51.153739 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:53:51.153747 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:53:51.153753 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:53:51.153760 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:53:51.153767 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:53:51.153773 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:53:51.153779 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:53:51.153786 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:53:51.153794 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:53:51.153800 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:53:51.153807 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:53:51.153813 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:53:51.153820 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:53:51.153826 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Feb 13 20:53:51.153833 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:53:51.153839 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:53:51.153847 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:53:51.153853 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:53:51.153860 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:53:51.153868 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:53:51.153875 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:53:51.153882 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:53:51.153888 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Feb 13 20:53:51.153896 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:53:51.153903 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:53:51.153910 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:53:51.153917 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:53:51.153924 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:53:51.153931 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:53:51.153939 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:53:51.153946 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:53:51.153953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:53:51.153960 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:53:51.153967 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:53:51.153973 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:53:51.153980 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:53:51.153988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:53:51.153995 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:53:51.154002 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:53:51.154009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:53:51.154016 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:53:51.154022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:53:51.154029 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:53:51.154036 kernel: ACPI: bus type drm_connector registered Feb 13 20:53:51.154042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:53:51.154050 kernel: fuse: init (API version 7.39) Feb 13 20:53:51.154057 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:53:51.154064 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 20:53:51.154071 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 20:53:51.154078 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:53:51.154085 kernel: loop: module loaded Feb 13 20:53:51.154091 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:53:51.154106 systemd-journald[1395]: Collecting audit messages is disabled. Feb 13 20:53:51.154121 systemd-journald[1395]: Journal started Feb 13 20:53:51.154136 systemd-journald[1395]: Runtime Journal (/run/log/journal/d832fc18133043fe8a843911f29261d4) is 8.0M, max 639.9M, 631.9M free. Feb 13 20:53:51.185472 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:53:51.219471 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
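Editor's note: the "Runtime Journal ... is 8.0M, max 639.9M" line above reports the volatile journal under /run/log/journal and its size cap. A hedged sketch of how those caps are tuned; the drop-in path and [Journal] keys are standard journald.conf options, but the values below are illustrative, not this host's:
mkdir -p /etc/systemd/journald.conf.d
cat <<'EOF' > /etc/systemd/journald.conf.d/10-size.conf
[Journal]
RuntimeMaxUse=64M    # cap for the volatile journal in /run/log/journal
SystemMaxUse=512M    # cap for the persistent journal in /var/log/journal
EOF
systemctl restart systemd-journald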
Feb 13 20:53:51.253499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:53:51.304473 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:53:51.325463 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:53:51.335154 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:53:51.344559 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:53:51.355744 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:53:51.366700 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:53:51.376692 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:53:51.386696 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:53:51.396775 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:53:51.407778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:53:51.418772 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:53:51.418898 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:53:51.429889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:53:51.430062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:53:51.441989 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:53:51.442232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:53:51.452290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:53:51.452708 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:53:51.464341 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:53:51.464787 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:53:51.475369 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:53:51.475825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:53:51.487617 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:53:51.498557 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:53:51.510494 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:53:51.522493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:53:51.555071 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:53:51.580660 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:53:51.591234 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:53:51.600625 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:53:51.601815 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:53:51.621291 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:53:51.632636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
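Editor's note: the modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services finishing above are all instances of systemd's modprobe@.service template, instantiated once per module name. The same mechanism can be driven by hand with real systemd syntax:
systemctl cat modprobe@.service             # the shared template behind every instance above
systemctl start modprobe@configfs.service   # instantiate it with %i=configfs to load that module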
Feb 13 20:53:51.633346 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:53:51.635881 systemd-journald[1395]: Time spent on flushing to /var/log/journal/d832fc18133043fe8a843911f29261d4 is 12.683ms for 1360 entries. Feb 13 20:53:51.635881 systemd-journald[1395]: System Journal (/var/log/journal/d832fc18133043fe8a843911f29261d4) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:53:51.671851 systemd-journald[1395]: Received client request to flush runtime journal. Feb 13 20:53:51.664556 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:53:51.676920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:53:51.687220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:53:51.699209 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:53:51.710986 systemd-tmpfiles[1436]: ACLs are not supported, ignoring. Feb 13 20:53:51.710995 systemd-tmpfiles[1436]: ACLs are not supported, ignoring. Feb 13 20:53:51.711571 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:53:51.722633 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:53:51.733677 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:53:51.744670 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:53:51.755656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:53:51.765659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:53:51.779031 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:53:51.810604 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:53:51.820896 udevadm[1440]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:53:51.828993 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:53:51.861564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:53:51.869120 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Feb 13 20:53:51.869130 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Feb 13 20:53:51.872790 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:53:51.999719 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:53:52.019694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:53:52.031900 systemd-udevd[1461]: Using default interface naming scheme 'v255'. Feb 13 20:53:52.050157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:53:52.067572 systemd[1]: Found device dev-ttyS1.device - /dev/ttyS1. 
Feb 13 20:53:52.111299 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 20:53:52.111385 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 20:53:52.111447 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 20:53:52.115432 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:53:52.115462 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1462) Feb 13 20:53:52.124503 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:53:52.125643 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:53:52.208431 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 20:53:52.214522 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 20:53:52.214713 kernel: IPMI message handler: version 39.2 Feb 13 20:53:52.214733 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 20:53:52.214860 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 20:53:52.214967 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 20:53:52.254687 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 20:53:52.349431 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 20:53:52.367180 kernel: ipmi device interface Feb 13 20:53:52.375617 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:53:52.386128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:53:52.406794 kernel: ipmi_si: IPMI System Interface driver Feb 13 20:53:52.406899 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 20:53:52.420070 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 20:53:52.493378 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 13 20:53:52.493485 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 20:53:52.493497 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 20:53:52.493507 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 20:53:52.593482 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 20:53:52.593679 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 20:53:52.593846 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 20:53:52.593868 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 20:53:52.415734 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:53:52.656046 kernel: intel_rapl_common: Found RAPL domain package Feb 13 20:53:52.656082 kernel: intel_rapl_common: Found RAPL domain core Feb 13 20:53:52.656092 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 20:53:52.656189 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 20:53:52.711429 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 20:53:52.749265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
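Editor's note: with the ipmi_si KCS interface registered at io 0xca2 as logged above, the BMC (man_id 0x002a7c, dev_id 0x20) becomes reachable from userspace via /dev/ipmi0. A quick check, assuming ipmitool is installed (it does not appear anywhere in this log):
ipmitool mc info    # identity of the BMC that ipmi_si just found
ipmitool sel list   # read its system event log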
Feb 13 20:53:52.751445 systemd-networkd[1542]: lo: Link UP Feb 13 20:53:52.751449 systemd-networkd[1542]: lo: Gained carrier Feb 13 20:53:52.754059 systemd-networkd[1542]: bond0: netdev ready Feb 13 20:53:52.754963 systemd-networkd[1542]: Enumeration completed Feb 13 20:53:52.760673 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:53:52.763846 systemd-networkd[1542]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:16:48.network. Feb 13 20:53:52.782542 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:53:52.824467 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 20:53:52.844428 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 20:53:52.845899 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:53:52.869513 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:53:52.876519 lvm[1579]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:53:52.913213 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:53:52.925623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:53:52.943536 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:53:52.945544 lvm[1582]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:53:52.983908 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:53:52.995877 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:53:53.006520 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:53:53.006534 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:53:53.016543 systemd[1]: Reached target machines.target - Containers. Feb 13 20:53:53.025119 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:53:53.046525 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:53:53.058095 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:53:53.067551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:53:53.068055 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:53:53.079142 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:53:53.090364 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:53:53.090996 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:53:53.104207 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:53:53.104655 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:53:53.121885 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
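Editor's note: networkd reports "bond0: netdev ready" and matches the first port against /etc/systemd/network/10-1c:34:da:5c:16:48.network. That file's contents are never printed in this log, so the following is only a sketch of the usual MAC-match-and-enslave pattern the name implies:
cat <<'EOF' > /etc/systemd/network/10-1c:34:da:5c:16:48.network
[Match]
MACAddress=1c:34:da:5c:16:48

[Network]
Bond=bond0          # enslave this port to the bond the log brings up next
EOF
networkctl reload   # ask networkd to re-read and apply .network files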
Feb 13 20:53:53.144429 kernel: loop0: detected capacity change from 0 to 8 Feb 13 20:53:53.166428 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:53:53.230439 kernel: loop1: detected capacity change from 0 to 140768 Feb 13 20:53:53.230509 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 20:53:53.271475 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 20:53:53.272289 systemd-networkd[1542]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:16:49.network. Feb 13 20:53:53.319430 kernel: loop2: detected capacity change from 0 to 142488 Feb 13 20:53:53.399434 kernel: loop3: detected capacity change from 0 to 210664 Feb 13 20:53:53.399508 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 20:53:53.417801 ldconfig[1589]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:53:53.418904 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:53:53.440111 systemd-networkd[1542]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 20:53:53.440490 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 20:53:53.441924 systemd-networkd[1542]: enp1s0f0np0: Link UP Feb 13 20:53:53.442294 systemd-networkd[1542]: enp1s0f0np0: Gained carrier Feb 13 20:53:53.462502 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 20:53:53.469617 systemd-networkd[1542]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:16:48.network. Feb 13 20:53:53.469750 systemd-networkd[1542]: enp1s0f1np1: Link UP Feb 13 20:53:53.469889 systemd-networkd[1542]: enp1s0f1np1: Gained carrier Feb 13 20:53:53.482723 systemd-networkd[1542]: bond0: Link UP Feb 13 20:53:53.482873 systemd-networkd[1542]: bond0: Gained carrier Feb 13 20:53:53.502428 kernel: loop4: detected capacity change from 0 to 8 Feb 13 20:53:53.520425 kernel: loop5: detected capacity change from 0 to 140768 Feb 13 20:53:53.548425 kernel: loop6: detected capacity change from 0 to 142488 Feb 13 20:53:53.548444 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 20:53:53.573267 kernel: loop7: detected capacity change from 0 to 210664 Feb 13 20:53:53.573281 kernel: bond0: active interface up! Feb 13 20:53:53.584919 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Feb 13 20:53:53.585146 (sd-merge)[1607]: Merged extensions into '/usr'. Feb 13 20:53:53.600602 systemd[1]: Reloading requested from client PID 1593 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:53:53.600610 systemd[1]: Reloading... Feb 13 20:53:53.634474 zram_generator::config[1634]: No configuration found. Feb 13 20:53:53.694437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:53:53.725426 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 20:53:53.744927 systemd[1]: Reloading finished in 144 ms. Feb 13 20:53:53.758969 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:53:53.779614 systemd[1]: Starting ensure-sysext.service... Feb 13 20:53:53.787146 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
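Editor's note: the (sd-merge) lines above show systemd-sysext layering the containerd-flatcar, docker-flatcar, kubernetes, and oem-packet extension images over /usr, followed by a daemon reload. The merge can be inspected or redone with real systemd-sysext verbs:
systemd-sysext list     # the four extension images named in the merge above
systemd-sysext refresh  # unmerge and re-merge after an extension image changes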
Feb 13 20:53:53.799850 systemd[1]: Reloading requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:53:53.799857 systemd[1]: Reloading... Feb 13 20:53:53.806491 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:53:53.806701 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:53:53.807199 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:53:53.807369 systemd-tmpfiles[1696]: ACLs are not supported, ignoring. Feb 13 20:53:53.807406 systemd-tmpfiles[1696]: ACLs are not supported, ignoring. Feb 13 20:53:53.809243 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:53:53.809248 systemd-tmpfiles[1696]: Skipping /boot Feb 13 20:53:53.813564 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:53:53.813573 systemd-tmpfiles[1696]: Skipping /boot Feb 13 20:53:53.832476 zram_generator::config[1725]: No configuration found. Feb 13 20:53:53.890281 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:53:53.941017 systemd[1]: Reloading finished in 140 ms. Feb 13 20:53:53.952189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:53:53.975535 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:53:53.987277 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:53:53.999220 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:53:54.003009 augenrules[1807]: No rules Feb 13 20:53:54.011434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:53:54.033562 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:53:54.044894 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:53:54.054700 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:53:54.065700 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:53:54.091531 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:53:54.091676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:53:54.092354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:53:54.103122 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:53:54.115115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:53:54.122450 systemd-resolved[1813]: Positive Trust Anchors: Feb 13 20:53:54.122457 systemd-resolved[1813]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:53:54.122482 systemd-resolved[1813]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:53:54.124554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:53:54.125085 systemd-resolved[1813]: Using system hostname 'ci-4081.3.1-a-f6aaf2d828'. Feb 13 20:53:54.125358 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:53:54.135516 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:53:54.135576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:53:54.136166 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:53:54.145840 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:53:54.156812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:53:54.156898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:53:54.167765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:53:54.167858 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:53:54.178705 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:53:54.178784 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:53:54.189742 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:53:54.202994 systemd[1]: Reached target network.target - Network. Feb 13 20:53:54.212523 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:53:54.224501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:53:54.224620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:53:54.238557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:53:54.247973 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:53:54.259000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:53:54.268522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:53:54.268594 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
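Editor's note: systemd-resolved logged the built-in DNSSEC root trust anchor (the ". IN DS 20326 8 2 ..." record) and the negative trust anchors for private and reverse-lookup zones, then derived the system hostname ci-4081.3.1-a-f6aaf2d828. Real resolvectl verbs to inspect that state at runtime:
resolvectl status       # per-link DNS servers, search domains, DNSSEC setting
resolvectl statistics   # cache and DNSSEC verdict counters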
Feb 13 20:53:54.268641 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:53:54.269253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:53:54.269332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:53:54.280765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:53:54.280841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:53:54.291695 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:53:54.291769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:53:54.305247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:53:54.305385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:53:54.314565 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:53:54.324973 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:53:54.335979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:53:54.355590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:53:54.366542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:53:54.366616 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:53:54.366667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:53:54.367295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:53:54.367375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:53:54.379692 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:53:54.379771 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:53:54.390949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:53:54.391396 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:53:54.402735 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:53:54.402812 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:53:54.413373 systemd[1]: Finished ensure-sysext.service. Feb 13 20:53:54.422930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:53:54.422961 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:53:54.435578 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:53:54.475037 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:53:54.485598 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:53:54.495552 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 20:53:54.506519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:53:54.517498 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:53:54.528513 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:53:54.528529 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:53:54.536499 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:53:54.546688 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:53:54.556584 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:53:54.567490 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:53:54.575937 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:53:54.586366 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:53:54.595059 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:53:54.604769 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:53:54.614548 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:53:54.624660 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:53:54.633081 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:53:54.633202 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:53:54.633276 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:53:54.647519 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:53:54.658277 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:53:54.670673 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:53:54.679108 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:53:54.686787 coreos-metadata[1867]: Feb 13 20:53:54.686 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:53:54.689143 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:53:54.689598 dbus-daemon[1868]: [system] SELinux support is enabled Feb 13 20:53:54.690984 jq[1871]: false Feb 13 20:53:54.699483 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:53:54.700370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 20:53:54.708416 extend-filesystems[1873]: Found loop4 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found loop5 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found loop6 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found loop7 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found sda Feb 13 20:53:54.709640 extend-filesystems[1873]: Found sda1 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found sda2 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found sda3 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found usr Feb 13 20:53:54.709640 extend-filesystems[1873]: Found sda4 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found sda6 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found sda7 Feb 13 20:53:54.709640 extend-filesystems[1873]: Found sda9 Feb 13 20:53:54.709640 extend-filesystems[1873]: Checking size of /dev/sda9 Feb 13 20:53:54.864650 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 20:53:54.864669 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1470) Feb 13 20:53:54.710158 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:53:54.864740 extend-filesystems[1873]: Resized partition /dev/sda9 Feb 13 20:53:54.791491 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:53:54.891606 extend-filesystems[1882]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:53:54.814178 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:53:54.843926 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:53:54.849472 systemd-networkd[1542]: bond0: Gained IPv6LL Feb 13 20:53:54.858351 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Feb 13 20:53:54.865267 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:53:54.892281 systemd-logind[1897]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 20:53:54.892292 systemd-logind[1897]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 20:53:54.892302 systemd-logind[1897]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 20:53:54.892439 systemd-logind[1897]: New seat seat0. Feb 13 20:53:54.912836 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:53:54.920483 update_engine[1902]: I20250213 20:53:54.920413 1902 main.cc:92] Flatcar Update Engine starting Feb 13 20:53:54.921311 update_engine[1902]: I20250213 20:53:54.921269 1902 update_check_scheduler.cc:74] Next update check in 8m22s Feb 13 20:53:54.923760 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:53:54.925389 jq[1903]: true Feb 13 20:53:54.934790 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:53:54.939768 sshd_keygen[1900]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:53:54.956633 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:53:54.956768 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:53:54.956953 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:53:54.957073 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:53:54.967021 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:53:54.967141 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
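Editor's note: the sshd_keygen line above ("generating new host keys: RSA ECDSA ED25519") is the stock first-boot host-key generation; the equivalent manual step is a single real OpenSSH invocation:
ssh-keygen -A   # create any host key types missing under /etc/ssh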
Feb 13 20:53:54.978710 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:53:54.992363 (ntainerd)[1918]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:53:54.993892 jq[1917]: true Feb 13 20:53:54.995989 tar[1915]: linux-amd64/helm Feb 13 20:53:54.996652 dbus-daemon[1868]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:53:55.000884 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 20:53:55.001060 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Feb 13 20:53:55.007859 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:53:55.018859 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:53:55.028550 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:53:55.028650 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:53:55.039596 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:53:55.039732 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:53:55.050897 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:53:55.062152 bash[1947]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:53:55.067601 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:53:55.079678 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:53:55.086455 locksmithd[1956]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:53:55.090756 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:53:55.090880 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:53:55.117652 systemd[1]: Starting sshkeys.service... Feb 13 20:53:55.125177 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:53:55.137500 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:53:55.156740 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:53:55.162715 containerd[1918]: time="2025-02-13T20:53:55.162670683Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:53:55.167344 coreos-metadata[1972]: Feb 13 20:53:55.167 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:53:55.167888 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:53:55.175198 containerd[1918]: time="2025-02-13T20:53:55.175180072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176155 containerd[1918]: time="2025-02-13T20:53:55.175912958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176239 containerd[1918]: time="2025-02-13T20:53:55.176223649Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:53:55.176269 containerd[1918]: time="2025-02-13T20:53:55.176243098Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:53:55.176337 containerd[1918]: time="2025-02-13T20:53:55.176328242Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:53:55.176362 containerd[1918]: time="2025-02-13T20:53:55.176339263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176381 containerd[1918]: time="2025-02-13T20:53:55.176371561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176381 containerd[1918]: time="2025-02-13T20:53:55.176380286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176533 containerd[1918]: time="2025-02-13T20:53:55.176498392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176533 containerd[1918]: time="2025-02-13T20:53:55.176508719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176533 containerd[1918]: time="2025-02-13T20:53:55.176516775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176533 containerd[1918]: time="2025-02-13T20:53:55.176524712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176603 containerd[1918]: time="2025-02-13T20:53:55.176569268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176727 containerd[1918]: time="2025-02-13T20:53:55.176679269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176786 containerd[1918]: time="2025-02-13T20:53:55.176754015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:53:55.176786 containerd[1918]: time="2025-02-13T20:53:55.176762429Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:53:55.176820 containerd[1918]: time="2025-02-13T20:53:55.176805140Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 20:53:55.176841 containerd[1918]: time="2025-02-13T20:53:55.176834043Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:53:55.191150 containerd[1918]: time="2025-02-13T20:53:55.191135465Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:53:55.191186 containerd[1918]: time="2025-02-13T20:53:55.191163769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:53:55.191204 containerd[1918]: time="2025-02-13T20:53:55.191186944Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:53:55.191204 containerd[1918]: time="2025-02-13T20:53:55.191200875Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:53:55.191231 containerd[1918]: time="2025-02-13T20:53:55.191209506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:53:55.191287 containerd[1918]: time="2025-02-13T20:53:55.191278897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:53:55.191505 containerd[1918]: time="2025-02-13T20:53:55.191453657Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:53:55.191523 containerd[1918]: time="2025-02-13T20:53:55.191514047Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:53:55.191537 containerd[1918]: time="2025-02-13T20:53:55.191525469Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:53:55.191551 containerd[1918]: time="2025-02-13T20:53:55.191535819Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:53:55.191551 containerd[1918]: time="2025-02-13T20:53:55.191544413Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:53:55.191582 containerd[1918]: time="2025-02-13T20:53:55.191551285Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:53:55.191582 containerd[1918]: time="2025-02-13T20:53:55.191558381Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:53:55.191582 containerd[1918]: time="2025-02-13T20:53:55.191566986Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:53:55.191582 containerd[1918]: time="2025-02-13T20:53:55.191575059Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:53:55.191634 containerd[1918]: time="2025-02-13T20:53:55.191582321Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:53:55.191634 containerd[1918]: time="2025-02-13T20:53:55.191589022Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:53:55.191634 containerd[1918]: time="2025-02-13T20:53:55.191595724Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 20:53:55.191634 containerd[1918]: time="2025-02-13T20:53:55.191607050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191634 containerd[1918]: time="2025-02-13T20:53:55.191615011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191634 containerd[1918]: time="2025-02-13T20:53:55.191621919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191634 containerd[1918]: time="2025-02-13T20:53:55.191630617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191637710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191645130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191651739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191658752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191665535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191674133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191680317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191687188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191693708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191704904Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191716972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191723840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191753 containerd[1918]: time="2025-02-13T20:53:55.191730879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:53:55.191926 containerd[1918]: time="2025-02-13T20:53:55.191756987Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:53:55.191926 containerd[1918]: time="2025-02-13T20:53:55.191767391Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:53:55.191926 containerd[1918]: time="2025-02-13T20:53:55.191773702Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:53:55.191926 containerd[1918]: time="2025-02-13T20:53:55.191779895Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:53:55.191926 containerd[1918]: time="2025-02-13T20:53:55.191785145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.191926 containerd[1918]: time="2025-02-13T20:53:55.191791730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:53:55.191926 containerd[1918]: time="2025-02-13T20:53:55.191798124Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:53:55.191926 containerd[1918]: time="2025-02-13T20:53:55.191803556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:53:55.192041 containerd[1918]: time="2025-02-13T20:53:55.191957684Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:53:55.192041 containerd[1918]: time="2025-02-13T20:53:55.191991329Z" level=info msg="Connect containerd service" Feb 13 20:53:55.192041 containerd[1918]: time="2025-02-13T20:53:55.192011327Z" level=info msg="using legacy CRI server" Feb 13 20:53:55.192041 containerd[1918]: time="2025-02-13T20:53:55.192016442Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:53:55.192159 containerd[1918]: time="2025-02-13T20:53:55.192062850Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:53:55.192336 containerd[1918]: time="2025-02-13T20:53:55.192326033Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:53:55.192495 containerd[1918]: time="2025-02-13T20:53:55.192420278Z" level=info msg="Start subscribing containerd event" Feb 13 20:53:55.192495 containerd[1918]: time="2025-02-13T20:53:55.192460546Z" level=info msg="Start recovering state" Feb 13 20:53:55.192495 containerd[1918]: time="2025-02-13T20:53:55.192488345Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:53:55.192546 containerd[1918]: time="2025-02-13T20:53:55.192494748Z" level=info msg="Start event monitor" Feb 13 20:53:55.192546 containerd[1918]: time="2025-02-13T20:53:55.192507957Z" level=info msg="Start snapshots syncer" Feb 13 20:53:55.192546 containerd[1918]: time="2025-02-13T20:53:55.192514673Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:53:55.192546 containerd[1918]: time="2025-02-13T20:53:55.192516398Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:53:55.192546 containerd[1918]: time="2025-02-13T20:53:55.192531518Z" level=info msg="Start streaming server" Feb 13 20:53:55.192624 containerd[1918]: time="2025-02-13T20:53:55.192560965Z" level=info msg="containerd successfully booted in 0.030641s" Feb 13 20:53:55.194631 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:53:55.203350 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Feb 13 20:53:55.212618 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:53:55.220740 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:53:55.285284 tar[1915]: linux-amd64/LICENSE Feb 13 20:53:55.285344 tar[1915]: linux-amd64/README.md Feb 13 20:53:55.298793 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:53:55.310946 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:53:55.322256 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:53:55.349567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:53:55.360233 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:53:55.378308 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
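Editor's note: the CRI config dump above shows runc running with SystemdCgroup:false on containerd v1.7.21. As a reference for where that flag lives, here is a hedged sketch against containerd's version-2 TOML schema; it is illustrative only, since this host's /etc/containerd/config.toml is never printed, and in practice the setting should be merged into the existing runc options table rather than appended blindly:
cat <<'EOF' >> /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true   # switch runc to the systemd cgroup driver
EOF
systemctl restart containerd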
Feb 13 20:53:55.422447 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 20:53:55.452358 extend-filesystems[1882]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 20:53:55.452358 extend-filesystems[1882]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 20:53:55.452358 extend-filesystems[1882]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 20:53:55.492501 extend-filesystems[1873]: Resized filesystem in /dev/sda9 Feb 13 20:53:55.492501 extend-filesystems[1873]: Found sdb Feb 13 20:53:55.453114 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:53:55.453247 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:53:56.035344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:53:56.048128 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:53:56.120936 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Feb 13 20:53:56.121080 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Feb 13 20:53:56.521433 kubelet[2023]: E0213 20:53:56.521402 2023 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:53:56.522745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:53:56.522828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:53:56.747858 systemd-timesyncd[1861]: Contacted time server 208.67.72.43:123 (0.flatcar.pool.ntp.org). Feb 13 20:53:56.748006 systemd-timesyncd[1861]: Initial clock synchronization to Thu 2025-02-13 20:53:56.914391 UTC. Feb 13 20:53:57.382933 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:53:57.402768 systemd[1]: Started sshd@0-147.28.180.203:22-139.178.89.65:57996.service - OpenSSH per-connection server daemon (139.178.89.65:57996). Feb 13 20:53:57.454540 sshd[2048]: Accepted publickey for core from 139.178.89.65 port 57996 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:53:57.455667 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:57.461411 systemd-logind[1897]: New session 1 of user core. Feb 13 20:53:57.462258 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:53:57.482795 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:53:57.497060 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:53:57.518945 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:53:57.546237 (systemd)[2054]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:53:57.656571 systemd[2054]: Queued start job for default target default.target. Feb 13 20:53:57.656745 systemd[2054]: Created slice app.slice - User Application Slice. Feb 13 20:53:57.656758 systemd[2054]: Reached target paths.target - Paths. Feb 13 20:53:57.656766 systemd[2054]: Reached target timers.target - Timers. Feb 13 20:53:57.675656 systemd[2054]: Starting dbus.socket - D-Bus User Message Bus Socket... 
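The resize above is an online ext4 grow: the root filesystem on /dev/sda9 is extended to 116,605,649 blocks of 4 KiB (about 445 GiB) while it stays mounted on /, which ext4 supports for growing (only shrinking requires an unmount). The extend-filesystems service is doing roughly what this manual sketch does; the device names are copied from the log, and growpart (from cloud-utils) is an assumption here, since the partition itself may already have been enlarged earlier in boot:

# Manual equivalent of extend-filesystems (sketch only):
growpart /dev/sda 9   # grow partition 9 to the end of the disk (cloud-utils)
resize2fs /dev/sda9   # online-grow the mounted ext4 filesystem to fill it
df -h /               # confirm the new capacity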
Feb 13 20:53:57.679087 systemd[2054]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:53:57.679114 systemd[2054]: Reached target sockets.target - Sockets. Feb 13 20:53:57.679122 systemd[2054]: Reached target basic.target - Basic System. Feb 13 20:53:57.679144 systemd[2054]: Reached target default.target - Main User Target. Feb 13 20:53:57.679159 systemd[2054]: Startup finished in 116ms. Feb 13 20:53:57.679262 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:53:57.690566 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:53:57.761659 systemd[1]: Started sshd@1-147.28.180.203:22-139.178.89.65:58000.service - OpenSSH per-connection server daemon (139.178.89.65:58000). Feb 13 20:53:57.784608 sshd[2066]: Accepted publickey for core from 139.178.89.65 port 58000 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:53:57.785369 sshd[2066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:57.787824 systemd-logind[1897]: New session 2 of user core. Feb 13 20:53:57.788518 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:53:57.862577 sshd[2066]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:57.880826 systemd[1]: Started sshd@2-147.28.180.203:22-139.178.89.65:58002.service - OpenSSH per-connection server daemon (139.178.89.65:58002). Feb 13 20:53:57.892966 systemd[1]: sshd@1-147.28.180.203:22-139.178.89.65:58000.service: Deactivated successfully. Feb 13 20:53:57.893811 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:53:57.894524 systemd-logind[1897]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:53:57.895383 systemd-logind[1897]: Removed session 2. Feb 13 20:53:57.904327 sshd[2072]: Accepted publickey for core from 139.178.89.65 port 58002 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:53:57.905306 sshd[2072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:57.908406 systemd-logind[1897]: New session 3 of user core. Feb 13 20:53:57.918884 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:53:57.983971 sshd[2072]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:57.985262 systemd[1]: sshd@2-147.28.180.203:22-139.178.89.65:58002.service: Deactivated successfully. Feb 13 20:53:57.986488 systemd-logind[1897]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:53:57.986578 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:53:57.987258 systemd-logind[1897]: Removed session 3. Feb 13 20:54:00.231923 login[1987]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:54:00.232464 login[1991]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:54:00.234476 systemd-logind[1897]: New session 5 of user core. Feb 13 20:54:00.235245 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:54:00.236459 systemd-logind[1897]: New session 4 of user core. Feb 13 20:54:00.237144 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:54:00.405877 coreos-metadata[1867]: Feb 13 20:54:00.405 INFO Fetch successful Feb 13 20:54:00.454568 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:54:00.455931 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... 
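The pattern above is systemd's per-user service manager at work: the first login for uid 500 starts user@500.service (the systemd[2054] instance, which reaches default.target in 116 ms), and each subsequent login over SSH becomes its own session-N.scope under that manager, opened and torn down independently. A quick way to inspect this split with standard tooling, sketched:

# Inspecting the sessions and the per-user manager shown above (sketch):
loginctl list-sessions              # one line per session-N.scope
loginctl show-user core             # uid 500's state, linger, runtime dir
systemctl status user@500.service   # the "systemd[2054]" user manager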
Feb 13 20:54:00.517386 coreos-metadata[1972]: Feb 13 20:54:00.517 INFO Fetch successful Feb 13 20:54:00.560387 unknown[1972]: wrote ssh authorized keys file for user: core Feb 13 20:54:00.586427 update-ssh-keys[2120]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:54:00.586742 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:54:00.587819 systemd[1]: Finished sshkeys.service. Feb 13 20:54:00.788652 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Feb 13 20:54:00.789959 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:54:00.790345 systemd[1]: Startup finished in 24.608s (kernel) + 12.001s (userspace) = 36.609s. Feb 13 20:54:06.556281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:54:06.565687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:54:06.799538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:54:06.801772 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:54:06.826626 kubelet[2141]: E0213 20:54:06.826521 2141 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:54:06.829181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:54:06.829283 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:54:08.111892 systemd[1]: Started sshd@3-147.28.180.203:22-139.178.89.65:33828.service - OpenSSH per-connection server daemon (139.178.89.65:33828). Feb 13 20:54:08.142378 sshd[2159]: Accepted publickey for core from 139.178.89.65 port 33828 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:54:08.142975 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.145651 systemd-logind[1897]: New session 6 of user core. Feb 13 20:54:08.159194 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:54:08.213253 sshd[2159]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:08.225821 systemd[1]: Started sshd@4-147.28.180.203:22-139.178.89.65:33842.service - OpenSSH per-connection server daemon (139.178.89.65:33842). Feb 13 20:54:08.226568 systemd[1]: sshd@3-147.28.180.203:22-139.178.89.65:33828.service: Deactivated successfully. Feb 13 20:54:08.228490 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:54:08.229425 systemd-logind[1897]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:54:08.231167 systemd-logind[1897]: Removed session 6. Feb 13 20:54:08.263922 sshd[2165]: Accepted publickey for core from 139.178.89.65 port 33842 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:54:08.264594 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.267183 systemd-logind[1897]: New session 7 of user core. Feb 13 20:54:08.267671 systemd[1]: Started session-7.scope - Session 7 of User core. 
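The kubelet failure above (and its scheduled restart, counter at 1) is the expected pre-bootstrap state on a kubeadm-style node: the packaged unit keeps restarting kubelet on a short interval until /var/lib/kubelet/config.yaml exists, and that file is normally written by kubeadm init or kubeadm join rather than by hand. Purely as an illustration of the kind of file that eventually lands there (a minimal sketch, not this node's generated config):

# Illustrative only: kubeadm generates this file during init/join.
mkdir -p /var/lib/kubelet
cat >/var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs                    # matches SystemdCgroup=false above
staticPodPath: /etc/kubernetes/manifests
EOF
systemctl restart kubelet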
Feb 13 20:54:08.313624 sshd[2165]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:08.325801 systemd[1]: Started sshd@5-147.28.180.203:22-139.178.89.65:33858.service - OpenSSH per-connection server daemon (139.178.89.65:33858). Feb 13 20:54:08.326370 systemd[1]: sshd@4-147.28.180.203:22-139.178.89.65:33842.service: Deactivated successfully. Feb 13 20:54:08.327720 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:54:08.328951 systemd-logind[1897]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:54:08.330134 systemd-logind[1897]: Removed session 7. Feb 13 20:54:08.384964 sshd[2172]: Accepted publickey for core from 139.178.89.65 port 33858 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:54:08.386468 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.391590 systemd-logind[1897]: New session 8 of user core. Feb 13 20:54:08.404807 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:54:08.460418 sshd[2172]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:08.475767 systemd[1]: Started sshd@6-147.28.180.203:22-139.178.89.65:33866.service - OpenSSH per-connection server daemon (139.178.89.65:33866). Feb 13 20:54:08.476274 systemd[1]: sshd@5-147.28.180.203:22-139.178.89.65:33858.service: Deactivated successfully. Feb 13 20:54:08.477172 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:54:08.477631 systemd-logind[1897]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:54:08.478320 systemd-logind[1897]: Removed session 8. Feb 13 20:54:08.514024 sshd[2181]: Accepted publickey for core from 139.178.89.65 port 33866 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:54:08.514939 sshd[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.518308 systemd-logind[1897]: New session 9 of user core. Feb 13 20:54:08.531781 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:54:08.593779 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:54:08.593933 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:54:08.604160 sudo[2187]: pam_unix(sudo:session): session closed for user root Feb 13 20:54:08.605174 sshd[2181]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:08.619746 systemd[1]: Started sshd@7-147.28.180.203:22-139.178.89.65:33872.service - OpenSSH per-connection server daemon (139.178.89.65:33872). Feb 13 20:54:08.620078 systemd[1]: sshd@6-147.28.180.203:22-139.178.89.65:33866.service: Deactivated successfully. Feb 13 20:54:08.621854 systemd-logind[1897]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:54:08.621984 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:54:08.622843 systemd-logind[1897]: Removed session 9. Feb 13 20:54:08.675830 sshd[2189]: Accepted publickey for core from 139.178.89.65 port 33872 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:54:08.677467 sshd[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.682787 systemd-logind[1897]: New session 10 of user core. Feb 13 20:54:08.702985 systemd[1]: Started session-10.scope - Session 10 of User core. 
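The sudo record above shows the provisioning user switching SELinux to enforcing; setenforce 1 changes only the running mode, while the boot-time mode still comes from /etc/selinux/config. A quick verification, assuming the SELinux userland tools are installed:

getenforce   # prints "Enforcing" after the setenforce 1 above
sestatus     # fuller report: mode, loaded policy, SELinuxfs mount point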
Feb 13 20:54:08.758741 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:54:08.758901 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:54:08.761028 sudo[2197]: pam_unix(sudo:session): session closed for user root Feb 13 20:54:08.763799 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:54:08.763962 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:54:08.777751 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:54:08.778968 auditctl[2200]: No rules Feb 13 20:54:08.779183 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:54:08.779341 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:54:08.780983 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:54:08.797316 augenrules[2219]: No rules Feb 13 20:54:08.797702 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:54:08.798338 sudo[2196]: pam_unix(sudo:session): session closed for user root Feb 13 20:54:08.799292 sshd[2189]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:08.811764 systemd[1]: Started sshd@8-147.28.180.203:22-139.178.89.65:33886.service - OpenSSH per-connection server daemon (139.178.89.65:33886). Feb 13 20:54:08.812166 systemd[1]: sshd@7-147.28.180.203:22-139.178.89.65:33872.service: Deactivated successfully. Feb 13 20:54:08.813206 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:54:08.814171 systemd-logind[1897]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:54:08.815172 systemd-logind[1897]: Removed session 10. Feb 13 20:54:08.852390 sshd[2225]: Accepted publickey for core from 139.178.89.65 port 33886 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 20:54:08.853302 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.856700 systemd-logind[1897]: New session 11 of user core. Feb 13 20:54:08.867767 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:54:08.930595 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:54:08.931452 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:54:09.309744 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:54:09.309881 (dockerd)[2261]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:54:09.606232 dockerd[2261]: time="2025-02-13T20:54:09.606144901Z" level=info msg="Starting up" Feb 13 20:54:09.810554 dockerd[2261]: time="2025-02-13T20:54:09.810481002Z" level=info msg="Loading containers: start." Feb 13 20:54:09.893476 kernel: Initializing XFRM netlink socket Feb 13 20:54:09.955841 systemd-networkd[1542]: docker0: Link UP Feb 13 20:54:09.970437 dockerd[2261]: time="2025-02-13T20:54:09.970386371Z" level=info msg="Loading containers: done." Feb 13 20:54:09.979173 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3993756137-merged.mount: Deactivated successfully. 
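The audit sequence above is the standard augenrules flow: the two rule files are deleted from /etc/audit/rules.d, audit-rules.service is restarted, auditctl flushes the kernel ruleset (hence "No rules"), and augenrules rebuilds audit.rules from the now-empty rules.d directory. The same cycle by hand, as a sketch:

# Manual version of the audit-rules restart above (sketch):
augenrules --check   # does /etc/audit/rules.d differ from audit.rules?
augenrules --load    # concatenate rules.d/*.rules and load them via auditctl
auditctl -l          # list the loaded ruleset; prints "No rules" when empty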
Feb 13 20:54:09.997573 dockerd[2261]: time="2025-02-13T20:54:09.997528389Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:54:09.997634 dockerd[2261]: time="2025-02-13T20:54:09.997598544Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:54:09.997703 dockerd[2261]: time="2025-02-13T20:54:09.997658927Z" level=info msg="Daemon has completed initialization" Feb 13 20:54:10.011349 dockerd[2261]: time="2025-02-13T20:54:10.011289949Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:54:10.011403 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:54:10.877659 containerd[1918]: time="2025-02-13T20:54:10.877636100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:54:11.561159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785993376.mount: Deactivated successfully. Feb 13 20:54:12.396525 containerd[1918]: time="2025-02-13T20:54:12.396468703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:12.396749 containerd[1918]: time="2025-02-13T20:54:12.396575015Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 20:54:12.397112 containerd[1918]: time="2025-02-13T20:54:12.397073187Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:12.398716 containerd[1918]: time="2025-02-13T20:54:12.398674417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:12.399274 containerd[1918]: time="2025-02-13T20:54:12.399252680Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.521592347s" Feb 13 20:54:12.399312 containerd[1918]: time="2025-02-13T20:54:12.399274460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 20:54:12.410715 containerd[1918]: time="2025-02-13T20:54:12.410696863Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:54:13.542839 containerd[1918]: time="2025-02-13T20:54:13.542784872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:13.543071 containerd[1918]: time="2025-02-13T20:54:13.542923878Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 20:54:13.543479 containerd[1918]: time="2025-02-13T20:54:13.543465744Z" level=info msg="ImageCreate event 
name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:13.544968 containerd[1918]: time="2025-02-13T20:54:13.544925939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:13.545654 containerd[1918]: time="2025-02-13T20:54:13.545606473Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.134885304s" Feb 13 20:54:13.545654 containerd[1918]: time="2025-02-13T20:54:13.545629767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 20:54:13.557340 containerd[1918]: time="2025-02-13T20:54:13.557321772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:54:14.413350 containerd[1918]: time="2025-02-13T20:54:14.413324027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:14.413579 containerd[1918]: time="2025-02-13T20:54:14.413558491Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 20:54:14.413899 containerd[1918]: time="2025-02-13T20:54:14.413889330Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:14.415736 containerd[1918]: time="2025-02-13T20:54:14.415691893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:14.416207 containerd[1918]: time="2025-02-13T20:54:14.416165743Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 858.825012ms" Feb 13 20:54:14.416207 containerd[1918]: time="2025-02-13T20:54:14.416182403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 20:54:14.426898 containerd[1918]: time="2025-02-13T20:54:14.426878505Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:54:15.298380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298903250.mount: Deactivated successfully. 
Feb 13 20:54:15.468927 containerd[1918]: time="2025-02-13T20:54:15.468898873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:15.469151 containerd[1918]: time="2025-02-13T20:54:15.469102704Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 20:54:15.469486 containerd[1918]: time="2025-02-13T20:54:15.469428932Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:15.470343 containerd[1918]: time="2025-02-13T20:54:15.470295632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:15.470732 containerd[1918]: time="2025-02-13T20:54:15.470690665Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.04379108s" Feb 13 20:54:15.470732 containerd[1918]: time="2025-02-13T20:54:15.470706961Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:54:15.482047 containerd[1918]: time="2025-02-13T20:54:15.481980496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:54:15.985002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409250806.mount: Deactivated successfully. 
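The figures in the kube-proxy pull above allow a quick throughput estimate: 29,057,858 compressed bytes in 1.04379108 s is roughly 27.8 MB/s from registry.k8s.io. As a one-liner, with the numbers copied straight from the log:

awk 'BEGIN { printf "%.1f MB/s\n", 29057858 / 1.04379108 / 1e6 }'   # ~27.8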
Feb 13 20:54:16.481828 containerd[1918]: time="2025-02-13T20:54:16.481804671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:16.482052 containerd[1918]: time="2025-02-13T20:54:16.482009084Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 20:54:16.482447 containerd[1918]: time="2025-02-13T20:54:16.482404776Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:16.483996 containerd[1918]: time="2025-02-13T20:54:16.483955193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:16.484663 containerd[1918]: time="2025-02-13T20:54:16.484614338Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.002595425s" Feb 13 20:54:16.484663 containerd[1918]: time="2025-02-13T20:54:16.484631256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:54:16.495408 containerd[1918]: time="2025-02-13T20:54:16.495389610Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:54:17.054607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:54:17.066694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:54:17.205263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123199864.mount: Deactivated successfully. 
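The "restart counter is at 2" message above is systemd's NRestarts accounting for kubelet.service; the unit will keep cycling on the same missing-config error until the config file appears. The counter and the restart policy can be read back directly:

systemctl show kubelet -p Restart -p RestartUSec -p NRestarts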
Feb 13 20:54:17.276607 containerd[1918]: time="2025-02-13T20:54:17.276574204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:17.277149 containerd[1918]: time="2025-02-13T20:54:17.277117925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 20:54:17.277896 containerd[1918]: time="2025-02-13T20:54:17.277881836Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:17.279271 containerd[1918]: time="2025-02-13T20:54:17.279258593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:17.279849 containerd[1918]: time="2025-02-13T20:54:17.279820197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 784.40951ms" Feb 13 20:54:17.279918 containerd[1918]: time="2025-02-13T20:54:17.279851710Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 20:54:17.280588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:54:17.283106 (kubelet)[2644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:54:17.291533 containerd[1918]: time="2025-02-13T20:54:17.291509881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:54:17.307661 kubelet[2644]: E0213 20:54:17.307571 2644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:54:17.308883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:54:17.308968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:54:17.810400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60489019.mount: Deactivated successfully. 
Feb 13 20:54:18.942200 containerd[1918]: time="2025-02-13T20:54:18.942144862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:18.942436 containerd[1918]: time="2025-02-13T20:54:18.942349243Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 20:54:18.942801 containerd[1918]: time="2025-02-13T20:54:18.942760819Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:18.944757 containerd[1918]: time="2025-02-13T20:54:18.944716907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:18.945307 containerd[1918]: time="2025-02-13T20:54:18.945263025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.653726324s" Feb 13 20:54:18.945307 containerd[1918]: time="2025-02-13T20:54:18.945281454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 20:54:21.014146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:54:21.026737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:54:21.037679 systemd[1]: Reloading requested from client PID 2880 ('systemctl') (unit session-11.scope)... Feb 13 20:54:21.037686 systemd[1]: Reloading... Feb 13 20:54:21.073492 zram_generator::config[2919]: No configuration found. Feb 13 20:54:21.144956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:54:21.200804 systemd[1]: Reloading finished in 162 ms. Feb 13 20:54:21.240799 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:54:21.240838 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:54:21.240975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:54:21.242249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:54:21.456395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:54:21.461095 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:54:21.482337 kubelet[2999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:54:21.482337 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
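The /var/run warning above means docker.socket still declares ListenStream=/var/run/docker.sock; systemd rewrites it to /run/docker.sock at load time but asks for the unit to be updated. The non-invasive fix is a drop-in rather than editing the shipped unit file, sketched here:

# Override the legacy socket path without touching the vendor unit (sketch):
mkdir -p /etc/systemd/system/docker.socket.d
cat >/etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
[Socket]
ListenStream=                  # clear the inherited /var/run path
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload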
Feb 13 20:54:21.482337 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:54:21.483762 kubelet[2999]: I0213 20:54:21.483609 2999 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:54:21.808149 kubelet[2999]: I0213 20:54:21.808109 2999 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:54:21.808149 kubelet[2999]: I0213 20:54:21.808123 2999 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:54:21.808303 kubelet[2999]: I0213 20:54:21.808273 2999 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:54:21.819550 kubelet[2999]: I0213 20:54:21.819542 2999 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:54:21.820460 kubelet[2999]: E0213 20:54:21.820399 2999 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.180.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.833880 kubelet[2999]: I0213 20:54:21.833849 2999 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:54:21.835114 kubelet[2999]: I0213 20:54:21.835076 2999 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:54:21.835230 kubelet[2999]: I0213 20:54:21.835092 2999 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-f6aaf2d828","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:54:21.835765 kubelet[2999]: I0213 20:54:21.835736 2999 topology_manager.go:138] 
"Creating topology manager with none policy" Feb 13 20:54:21.835765 kubelet[2999]: I0213 20:54:21.835761 2999 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:54:21.835826 kubelet[2999]: I0213 20:54:21.835816 2999 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:54:21.836663 kubelet[2999]: I0213 20:54:21.836628 2999 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:54:21.836663 kubelet[2999]: I0213 20:54:21.836636 2999 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:54:21.836663 kubelet[2999]: I0213 20:54:21.836647 2999 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:54:21.836663 kubelet[2999]: I0213 20:54:21.836654 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:54:21.839393 kubelet[2999]: W0213 20:54:21.839330 2999 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.839393 kubelet[2999]: W0213 20:54:21.839334 2999 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-f6aaf2d828&limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.839393 kubelet[2999]: E0213 20:54:21.839361 2999 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.839527 kubelet[2999]: E0213 20:54:21.839395 2999 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-f6aaf2d828&limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.840237 kubelet[2999]: I0213 20:54:21.840229 2999 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:54:21.841370 kubelet[2999]: I0213 20:54:21.841328 2999 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:54:21.841402 kubelet[2999]: W0213 20:54:21.841373 2999 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 20:54:21.841754 kubelet[2999]: I0213 20:54:21.841694 2999 server.go:1264] "Started kubelet" Feb 13 20:54:21.841800 kubelet[2999]: I0213 20:54:21.841773 2999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:54:21.841830 kubelet[2999]: I0213 20:54:21.841805 2999 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:54:21.841936 kubelet[2999]: I0213 20:54:21.841926 2999 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:54:21.842465 kubelet[2999]: I0213 20:54:21.842455 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:54:21.842533 kubelet[2999]: I0213 20:54:21.842518 2999 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:54:21.842582 kubelet[2999]: I0213 20:54:21.842539 2999 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:54:21.842685 kubelet[2999]: I0213 20:54:21.842675 2999 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:54:21.842810 kubelet[2999]: I0213 20:54:21.842794 2999 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:54:21.846714 kubelet[2999]: E0213 20:54:21.846672 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-f6aaf2d828?timeout=10s\": dial tcp 147.28.180.203:6443: connect: connection refused" interval="200ms" Feb 13 20:54:21.846765 kubelet[2999]: W0213 20:54:21.846698 2999 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.846765 kubelet[2999]: E0213 20:54:21.846747 2999 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.846838 kubelet[2999]: I0213 20:54:21.846806 2999 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:54:21.846884 kubelet[2999]: I0213 20:54:21.846872 2999 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:54:21.847731 kubelet[2999]: E0213 20:54:21.847647 2999 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.203:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-f6aaf2d828.1823dfe66bf2b464 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-f6aaf2d828,UID:ci-4081.3.1-a-f6aaf2d828,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-f6aaf2d828,},FirstTimestamp:2025-02-13 20:54:21.841683556 +0000 UTC m=+0.378399193,LastTimestamp:2025-02-13 20:54:21.841683556 +0000 UTC m=+0.378399193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-f6aaf2d828,}" Feb 13 20:54:21.847792 kubelet[2999]: I0213 20:54:21.847750 2999 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:54:21.847813 kubelet[2999]: E0213 20:54:21.847795 2999 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:54:21.854393 kubelet[2999]: I0213 20:54:21.854367 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:54:21.854883 kubelet[2999]: I0213 20:54:21.854875 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:54:21.854906 kubelet[2999]: I0213 20:54:21.854893 2999 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:54:21.854933 kubelet[2999]: I0213 20:54:21.854907 2999 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:54:21.854996 kubelet[2999]: E0213 20:54:21.854934 2999 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:54:21.855195 kubelet[2999]: W0213 20:54:21.855171 2999 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.855225 kubelet[2999]: E0213 20:54:21.855201 2999 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:21.861988 kubelet[2999]: I0213 20:54:21.861978 2999 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:54:21.861988 kubelet[2999]: I0213 20:54:21.861986 2999 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:54:21.862084 kubelet[2999]: I0213 20:54:21.861995 2999 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:54:21.863142 kubelet[2999]: I0213 20:54:21.863136 2999 policy_none.go:49] "None policy: Start" Feb 13 20:54:21.863350 kubelet[2999]: I0213 20:54:21.863343 2999 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:54:21.863382 kubelet[2999]: I0213 20:54:21.863359 2999 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:54:21.865476 kubelet[2999]: I0213 20:54:21.865468 2999 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:54:21.865566 kubelet[2999]: I0213 20:54:21.865552 2999 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:54:21.865618 kubelet[2999]: I0213 20:54:21.865609 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:54:21.866022 kubelet[2999]: E0213 20:54:21.866013 2999 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-f6aaf2d828\" not found" Feb 13 20:54:21.944375 kubelet[2999]: I0213 20:54:21.944350 2999 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:21.944729 kubelet[2999]: E0213 20:54:21.944706 2999 kubelet_node_status.go:96] 
"Unable to register node with API server" err="Post \"https://147.28.180.203:6443/api/v1/nodes\": dial tcp 147.28.180.203:6443: connect: connection refused" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:21.955142 kubelet[2999]: I0213 20:54:21.955074 2999 topology_manager.go:215] "Topology Admit Handler" podUID="232a8ef4c06c5fb5037131712d404146" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:21.957073 kubelet[2999]: I0213 20:54:21.957015 2999 topology_manager.go:215] "Topology Admit Handler" podUID="514022033b373ff21da9160d8ecfa9fa" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:21.959294 kubelet[2999]: I0213 20:54:21.959256 2999 topology_manager.go:215] "Topology Admit Handler" podUID="9f3321fc7731c18bf5bfafaa9c141060" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.047711 kubelet[2999]: E0213 20:54:22.047574 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-f6aaf2d828?timeout=10s\": dial tcp 147.28.180.203:6443: connect: connection refused" interval="400ms" Feb 13 20:54:22.144867 kubelet[2999]: I0213 20:54:22.144590 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.144867 kubelet[2999]: I0213 20:54:22.144692 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.144867 kubelet[2999]: I0213 20:54:22.144758 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.144867 kubelet[2999]: I0213 20:54:22.144827 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f3321fc7731c18bf5bfafaa9c141060-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-f6aaf2d828\" (UID: \"9f3321fc7731c18bf5bfafaa9c141060\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.145377 kubelet[2999]: I0213 20:54:22.144882 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.145377 kubelet[2999]: I0213 20:54:22.144934 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/232a8ef4c06c5fb5037131712d404146-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" (UID: \"232a8ef4c06c5fb5037131712d404146\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.145377 kubelet[2999]: I0213 20:54:22.144981 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/232a8ef4c06c5fb5037131712d404146-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" (UID: \"232a8ef4c06c5fb5037131712d404146\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.145377 kubelet[2999]: I0213 20:54:22.145028 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/232a8ef4c06c5fb5037131712d404146-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" (UID: \"232a8ef4c06c5fb5037131712d404146\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.145377 kubelet[2999]: I0213 20:54:22.145078 2999 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.149650 kubelet[2999]: I0213 20:54:22.149567 2999 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.150354 kubelet[2999]: E0213 20:54:22.150222 2999 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.203:6443/api/v1/nodes\": dial tcp 147.28.180.203:6443: connect: connection refused" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.266462 containerd[1918]: time="2025-02-13T20:54:22.266311119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-f6aaf2d828,Uid:232a8ef4c06c5fb5037131712d404146,Namespace:kube-system,Attempt:0,}" Feb 13 20:54:22.269755 containerd[1918]: time="2025-02-13T20:54:22.269741966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-f6aaf2d828,Uid:514022033b373ff21da9160d8ecfa9fa,Namespace:kube-system,Attempt:0,}" Feb 13 20:54:22.273339 containerd[1918]: time="2025-02-13T20:54:22.273324904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-f6aaf2d828,Uid:9f3321fc7731c18bf5bfafaa9c141060,Namespace:kube-system,Attempt:0,}" Feb 13 20:54:22.371650 kubelet[2999]: E0213 20:54:22.371550 2999 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.203:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-f6aaf2d828.1823dfe66bf2b464 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-f6aaf2d828,UID:ci-4081.3.1-a-f6aaf2d828,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-f6aaf2d828,},FirstTimestamp:2025-02-13 20:54:21.841683556 +0000 UTC m=+0.378399193,LastTimestamp:2025-02-13 20:54:21.841683556 +0000 UTC 
m=+0.378399193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-f6aaf2d828,}" Feb 13 20:54:22.449736 kubelet[2999]: E0213 20:54:22.449403 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-f6aaf2d828?timeout=10s\": dial tcp 147.28.180.203:6443: connect: connection refused" interval="800ms" Feb 13 20:54:22.551883 kubelet[2999]: I0213 20:54:22.551829 2999 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.552201 kubelet[2999]: E0213 20:54:22.552073 2999 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.203:6443/api/v1/nodes\": dial tcp 147.28.180.203:6443: connect: connection refused" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:22.766057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3915488660.mount: Deactivated successfully. Feb 13 20:54:22.767718 containerd[1918]: time="2025-02-13T20:54:22.767675174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:54:22.767883 containerd[1918]: time="2025-02-13T20:54:22.767837771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:54:22.768296 containerd[1918]: time="2025-02-13T20:54:22.768251472Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:54:22.768354 containerd[1918]: time="2025-02-13T20:54:22.768338143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:54:22.768804 containerd[1918]: time="2025-02-13T20:54:22.768761957Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:54:22.769213 containerd[1918]: time="2025-02-13T20:54:22.769174217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:54:22.769213 containerd[1918]: time="2025-02-13T20:54:22.769208596Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:54:22.771154 containerd[1918]: time="2025-02-13T20:54:22.771116552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:54:22.771999 containerd[1918]: time="2025-02-13T20:54:22.771956198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 505.441679ms" Feb 13 20:54:22.772359 containerd[1918]: time="2025-02-13T20:54:22.772320674Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.552665ms" Feb 13 20:54:22.773545 containerd[1918]: time="2025-02-13T20:54:22.773475541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.115584ms" Feb 13 20:54:22.856670 containerd[1918]: time="2025-02-13T20:54:22.856634062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:54:22.856670 containerd[1918]: time="2025-02-13T20:54:22.856654873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:54:22.856670 containerd[1918]: time="2025-02-13T20:54:22.856661655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:54:22.856670 containerd[1918]: time="2025-02-13T20:54:22.856668940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:22.856806 containerd[1918]: time="2025-02-13T20:54:22.856681031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:54:22.856806 containerd[1918]: time="2025-02-13T20:54:22.856688826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:22.856806 containerd[1918]: time="2025-02-13T20:54:22.856469577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:54:22.856806 containerd[1918]: time="2025-02-13T20:54:22.856725008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:22.856806 containerd[1918]: time="2025-02-13T20:54:22.856726256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:54:22.856806 containerd[1918]: time="2025-02-13T20:54:22.856730407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:22.856806 containerd[1918]: time="2025-02-13T20:54:22.856735282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:22.856806 containerd[1918]: time="2025-02-13T20:54:22.856785955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:22.890825 containerd[1918]: time="2025-02-13T20:54:22.890763603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-f6aaf2d828,Uid:514022033b373ff21da9160d8ecfa9fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a45734f58af292abf143b7422424cd257b5287ff46e096352c08ca62fa8cf0e\"" Feb 13 20:54:22.890825 containerd[1918]: time="2025-02-13T20:54:22.890819460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-f6aaf2d828,Uid:232a8ef4c06c5fb5037131712d404146,Namespace:kube-system,Attempt:0,} returns sandbox id \"d828997fb7b06e572ec0e79e9e66ed2b323f2dc3d2be080ecaafb5734891f88b\"" Feb 13 20:54:22.891328 containerd[1918]: time="2025-02-13T20:54:22.891315433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-f6aaf2d828,Uid:9f3321fc7731c18bf5bfafaa9c141060,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa5619b1aba921c51922dc87c13d69143ed260444bf3b05a3595baae7f08789b\"" Feb 13 20:54:22.892782 containerd[1918]: time="2025-02-13T20:54:22.892767807Z" level=info msg="CreateContainer within sandbox \"aa5619b1aba921c51922dc87c13d69143ed260444bf3b05a3595baae7f08789b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:54:22.892818 containerd[1918]: time="2025-02-13T20:54:22.892768335Z" level=info msg="CreateContainer within sandbox \"8a45734f58af292abf143b7422424cd257b5287ff46e096352c08ca62fa8cf0e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:54:22.892869 containerd[1918]: time="2025-02-13T20:54:22.892856615Z" level=info msg="CreateContainer within sandbox \"d828997fb7b06e572ec0e79e9e66ed2b323f2dc3d2be080ecaafb5734891f88b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:54:22.898298 containerd[1918]: time="2025-02-13T20:54:22.898284472Z" level=info msg="CreateContainer within sandbox \"aa5619b1aba921c51922dc87c13d69143ed260444bf3b05a3595baae7f08789b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"06d548503d4c588ddba728bc81ad57cbadacce53093a75d6abba30eed4b0ce92\"" Feb 13 20:54:22.898572 containerd[1918]: time="2025-02-13T20:54:22.898559739Z" level=info msg="StartContainer for \"06d548503d4c588ddba728bc81ad57cbadacce53093a75d6abba30eed4b0ce92\"" Feb 13 20:54:22.900592 containerd[1918]: time="2025-02-13T20:54:22.900577323Z" level=info msg="CreateContainer within sandbox \"d828997fb7b06e572ec0e79e9e66ed2b323f2dc3d2be080ecaafb5734891f88b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1ebb2123f847ebe6574144bc8c8cf2573827d88db0e32e213cf31213a6a7c4a\"" Feb 13 20:54:22.900787 containerd[1918]: time="2025-02-13T20:54:22.900771447Z" level=info msg="StartContainer for \"a1ebb2123f847ebe6574144bc8c8cf2573827d88db0e32e213cf31213a6a7c4a\"" Feb 13 20:54:22.900904 containerd[1918]: time="2025-02-13T20:54:22.900880707Z" level=info msg="CreateContainer within sandbox \"8a45734f58af292abf143b7422424cd257b5287ff46e096352c08ca62fa8cf0e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"913347e56f4aa5be082f811d9bd6f078c98f232cc1401cc7dea142d392245c83\"" Feb 13 20:54:22.901116 containerd[1918]: time="2025-02-13T20:54:22.901073758Z" level=info msg="StartContainer for \"913347e56f4aa5be082f811d9bd6f078c98f232cc1401cc7dea142d392245c83\"" Feb 13 20:54:22.948673 containerd[1918]: time="2025-02-13T20:54:22.948614738Z" level=info 
msg="StartContainer for \"06d548503d4c588ddba728bc81ad57cbadacce53093a75d6abba30eed4b0ce92\" returns successfully" Feb 13 20:54:22.948673 containerd[1918]: time="2025-02-13T20:54:22.948640865Z" level=info msg="StartContainer for \"913347e56f4aa5be082f811d9bd6f078c98f232cc1401cc7dea142d392245c83\" returns successfully" Feb 13 20:54:22.948673 containerd[1918]: time="2025-02-13T20:54:22.948674944Z" level=info msg="StartContainer for \"a1ebb2123f847ebe6574144bc8c8cf2573827d88db0e32e213cf31213a6a7c4a\" returns successfully" Feb 13 20:54:22.960395 kubelet[2999]: W0213 20:54:22.960332 2999 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:22.960395 kubelet[2999]: E0213 20:54:22.960371 2999 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.203:6443: connect: connection refused Feb 13 20:54:23.353491 kubelet[2999]: I0213 20:54:23.353475 2999 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:23.658241 kubelet[2999]: E0213 20:54:23.658173 2999 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-a-f6aaf2d828\" not found" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:23.762123 kubelet[2999]: I0213 20:54:23.762039 2999 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:23.838002 kubelet[2999]: I0213 20:54:23.837965 2999 apiserver.go:52] "Watching apiserver" Feb 13 20:54:23.842731 kubelet[2999]: I0213 20:54:23.842670 2999 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:54:23.869063 kubelet[2999]: E0213 20:54:23.869004 2999 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:23.869063 kubelet[2999]: E0213 20:54:23.869014 2999 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.1-a-f6aaf2d828\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:23.869266 kubelet[2999]: E0213 20:54:23.869015 2999 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:24.865758 kubelet[2999]: W0213 20:54:24.865739 2999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:54:24.866273 kubelet[2999]: W0213 20:54:24.866258 2999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:54:25.901329 systemd[1]: Reloading requested from client PID 3315 ('systemctl') (unit session-11.scope)... Feb 13 20:54:25.901336 systemd[1]: Reloading... 
Feb 13 20:54:25.942456 zram_generator::config[3354]: No configuration found. Feb 13 20:54:25.946899 kubelet[2999]: W0213 20:54:25.946884 2999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:54:26.013398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:54:26.073587 systemd[1]: Reloading finished in 172 ms. Feb 13 20:54:26.097800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:54:26.110220 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:54:26.110392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:54:26.121706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:54:26.314512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:54:26.318861 (kubelet)[3429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:54:26.341976 kubelet[3429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:54:26.341976 kubelet[3429]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:54:26.341976 kubelet[3429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:54:26.341976 kubelet[3429]: I0213 20:54:26.341966 3429 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:54:26.345022 kubelet[3429]: I0213 20:54:26.344982 3429 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:54:26.345022 kubelet[3429]: I0213 20:54:26.344994 3429 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:54:26.345130 kubelet[3429]: I0213 20:54:26.345125 3429 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:54:26.345910 kubelet[3429]: I0213 20:54:26.345904 3429 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:54:26.346745 kubelet[3429]: I0213 20:54:26.346734 3429 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:54:26.354748 kubelet[3429]: I0213 20:54:26.354712 3429 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:54:26.354982 kubelet[3429]: I0213 20:54:26.354936 3429 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:54:26.355062 kubelet[3429]: I0213 20:54:26.354949 3429 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-f6aaf2d828","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:54:26.355062 kubelet[3429]: I0213 20:54:26.355037 3429 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:54:26.355062 kubelet[3429]: I0213 20:54:26.355044 3429 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:54:26.355062 kubelet[3429]: I0213 20:54:26.355062 3429 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:54:26.355170 kubelet[3429]: I0213 20:54:26.355106 3429 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:54:26.355170 kubelet[3429]: I0213 20:54:26.355112 3429 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:54:26.355170 kubelet[3429]: I0213 20:54:26.355124 3429 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:54:26.355170 kubelet[3429]: I0213 20:54:26.355133 3429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:54:26.355741 kubelet[3429]: I0213 20:54:26.355695 3429 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:54:26.355860 kubelet[3429]: I0213 20:54:26.355852 3429 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:54:26.356293 kubelet[3429]: I0213 20:54:26.356279 3429 server.go:1264] "Started kubelet" Feb 13 20:54:26.356346 kubelet[3429]: I0213 20:54:26.356310 3429 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:54:26.356394 kubelet[3429]: I0213 20:54:26.356364 3429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:54:26.356718 kubelet[3429]: I0213 
20:54:26.356705 3429 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:54:26.357206 kubelet[3429]: I0213 20:54:26.357199 3429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:54:26.357265 kubelet[3429]: I0213 20:54:26.357256 3429 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:54:26.357295 kubelet[3429]: I0213 20:54:26.357273 3429 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:54:26.357387 kubelet[3429]: I0213 20:54:26.357375 3429 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:54:26.357443 kubelet[3429]: I0213 20:54:26.357397 3429 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:54:26.357485 kubelet[3429]: E0213 20:54:26.357454 3429 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:54:26.357623 kubelet[3429]: I0213 20:54:26.357613 3429 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:54:26.357686 kubelet[3429]: I0213 20:54:26.357675 3429 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:54:26.358209 kubelet[3429]: I0213 20:54:26.358198 3429 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:54:26.361930 kubelet[3429]: I0213 20:54:26.361904 3429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:54:26.362473 kubelet[3429]: I0213 20:54:26.362439 3429 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:54:26.362473 kubelet[3429]: I0213 20:54:26.362458 3429 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:54:26.362473 kubelet[3429]: I0213 20:54:26.362467 3429 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:54:26.362576 kubelet[3429]: E0213 20:54:26.362489 3429 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:54:26.377527 kubelet[3429]: I0213 20:54:26.377478 3429 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:54:26.377527 kubelet[3429]: I0213 20:54:26.377488 3429 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:54:26.377527 kubelet[3429]: I0213 20:54:26.377499 3429 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:54:26.377638 kubelet[3429]: I0213 20:54:26.377583 3429 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:54:26.377638 kubelet[3429]: I0213 20:54:26.377590 3429 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:54:26.377638 kubelet[3429]: I0213 20:54:26.377600 3429 policy_none.go:49] "None policy: Start" Feb 13 20:54:26.377885 kubelet[3429]: I0213 20:54:26.377834 3429 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:54:26.377885 kubelet[3429]: I0213 20:54:26.377845 3429 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:54:26.377937 kubelet[3429]: I0213 20:54:26.377931 3429 state_mem.go:75] "Updated machine memory state" Feb 13 20:54:26.378526 kubelet[3429]: I0213 20:54:26.378490 3429 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:54:26.378626 kubelet[3429]: I0213 20:54:26.378576 3429 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:54:26.378626 kubelet[3429]: I0213 20:54:26.378622 3429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:54:26.462788 kubelet[3429]: I0213 20:54:26.462694 3429 topology_manager.go:215] "Topology Admit Handler" podUID="232a8ef4c06c5fb5037131712d404146" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.463014 kubelet[3429]: I0213 20:54:26.462893 3429 topology_manager.go:215] "Topology Admit Handler" podUID="514022033b373ff21da9160d8ecfa9fa" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.463296 kubelet[3429]: I0213 20:54:26.463193 3429 topology_manager.go:215] "Topology Admit Handler" podUID="9f3321fc7731c18bf5bfafaa9c141060" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.463604 kubelet[3429]: I0213 20:54:26.463543 3429 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.471038 kubelet[3429]: W0213 20:54:26.470934 3429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:54:26.471230 kubelet[3429]: E0213 20:54:26.471075 3429 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.1-a-f6aaf2d828\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.472316 kubelet[3429]: W0213 20:54:26.472253 3429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:54:26.472568 kubelet[3429]: W0213 20:54:26.472325 3429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:54:26.472568 kubelet[3429]: E0213 20:54:26.472388 3429 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.472568 kubelet[3429]: E0213 20:54:26.472511 3429 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.474339 kubelet[3429]: I0213 20:54:26.474279 3429 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.474590 kubelet[3429]: I0213 20:54:26.474447 3429 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.659654 kubelet[3429]: I0213 20:54:26.659394 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/232a8ef4c06c5fb5037131712d404146-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" (UID: \"232a8ef4c06c5fb5037131712d404146\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.659654 kubelet[3429]: I0213 20:54:26.659509 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.659654 kubelet[3429]: I0213 20:54:26.659583 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.659654 kubelet[3429]: I0213 20:54:26.659640 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f3321fc7731c18bf5bfafaa9c141060-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-f6aaf2d828\" (UID: \"9f3321fc7731c18bf5bfafaa9c141060\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.660161 kubelet[3429]: I0213 20:54:26.659786 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/232a8ef4c06c5fb5037131712d404146-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" (UID: \"232a8ef4c06c5fb5037131712d404146\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.660161 kubelet[3429]: I0213 20:54:26.659929 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/232a8ef4c06c5fb5037131712d404146-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" (UID: \"232a8ef4c06c5fb5037131712d404146\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.660161 kubelet[3429]: I0213 20:54:26.660042 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.660161 kubelet[3429]: I0213 20:54:26.660125 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:26.660733 kubelet[3429]: I0213 20:54:26.660198 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/514022033b373ff21da9160d8ecfa9fa-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" (UID: \"514022033b373ff21da9160d8ecfa9fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:27.355862 kubelet[3429]: I0213 20:54:27.355788 3429 apiserver.go:52] "Watching apiserver" Feb 13 20:54:27.374658 kubelet[3429]: W0213 20:54:27.374598 3429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:54:27.374846 kubelet[3429]: E0213 20:54:27.374720 3429 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.1-a-f6aaf2d828\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:27.375505 kubelet[3429]: W0213 20:54:27.375446 3429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:54:27.375654 kubelet[3429]: E0213 20:54:27.375580 3429 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-a-f6aaf2d828\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" Feb 13 20:54:27.414343 kubelet[3429]: I0213 20:54:27.414249 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-f6aaf2d828" podStartSLOduration=3.414226911 podStartE2EDuration="3.414226911s" podCreationTimestamp="2025-02-13 20:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:54:27.414175398 +0000 UTC m=+1.092904080" watchObservedRunningTime="2025-02-13 20:54:27.414226911 +0000 UTC m=+1.092955603" Feb 13 20:54:27.414508 kubelet[3429]: I0213 20:54:27.414374 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-f6aaf2d828" podStartSLOduration=2.4143649910000002 podStartE2EDuration="2.414364991s" podCreationTimestamp="2025-02-13 20:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 
20:54:27.406523822 +0000 UTC m=+1.085252527" watchObservedRunningTime="2025-02-13 20:54:27.414364991 +0000 UTC m=+1.093093672" Feb 13 20:54:27.420160 kubelet[3429]: I0213 20:54:27.420099 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-f6aaf2d828" podStartSLOduration=3.42008599 podStartE2EDuration="3.42008599s" podCreationTimestamp="2025-02-13 20:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:54:27.420063722 +0000 UTC m=+1.098792399" watchObservedRunningTime="2025-02-13 20:54:27.42008599 +0000 UTC m=+1.098814660" Feb 13 20:54:27.457595 kubelet[3429]: I0213 20:54:27.457549 3429 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:54:30.547162 sudo[2232]: pam_unix(sudo:session): session closed for user root Feb 13 20:54:30.548023 sshd[2225]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:30.549604 systemd[1]: sshd@8-147.28.180.203:22-139.178.89.65:33886.service: Deactivated successfully. Feb 13 20:54:30.550989 systemd-logind[1897]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:54:30.551025 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:54:30.551709 systemd-logind[1897]: Removed session 11. Feb 13 20:54:40.094624 kubelet[3429]: I0213 20:54:40.094576 3429 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:54:40.094882 containerd[1918]: time="2025-02-13T20:54:40.094780663Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:54:40.095030 kubelet[3429]: I0213 20:54:40.094937 3429 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:54:40.190726 update_engine[1902]: I20250213 20:54:40.190564 1902 update_attempter.cc:509] Updating boot flags... 
Feb 13 20:54:40.227435 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3601) Feb 13 20:54:40.229323 kubelet[3429]: I0213 20:54:40.229293 3429 topology_manager.go:215] "Topology Admit Handler" podUID="1d05afa4-85cf-4ce4-8a73-5c6298a32e6c" podNamespace="kube-system" podName="kube-proxy-9w7rs" Feb 13 20:54:40.258430 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3605) Feb 13 20:54:40.258562 kubelet[3429]: I0213 20:54:40.258547 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d05afa4-85cf-4ce4-8a73-5c6298a32e6c-kube-proxy\") pod \"kube-proxy-9w7rs\" (UID: \"1d05afa4-85cf-4ce4-8a73-5c6298a32e6c\") " pod="kube-system/kube-proxy-9w7rs" Feb 13 20:54:40.258598 kubelet[3429]: I0213 20:54:40.258569 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d05afa4-85cf-4ce4-8a73-5c6298a32e6c-xtables-lock\") pod \"kube-proxy-9w7rs\" (UID: \"1d05afa4-85cf-4ce4-8a73-5c6298a32e6c\") " pod="kube-system/kube-proxy-9w7rs" Feb 13 20:54:40.258598 kubelet[3429]: I0213 20:54:40.258583 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d05afa4-85cf-4ce4-8a73-5c6298a32e6c-lib-modules\") pod \"kube-proxy-9w7rs\" (UID: \"1d05afa4-85cf-4ce4-8a73-5c6298a32e6c\") " pod="kube-system/kube-proxy-9w7rs" Feb 13 20:54:40.258636 kubelet[3429]: I0213 20:54:40.258596 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlv4r\" (UniqueName: \"kubernetes.io/projected/1d05afa4-85cf-4ce4-8a73-5c6298a32e6c-kube-api-access-wlv4r\") pod \"kube-proxy-9w7rs\" (UID: \"1d05afa4-85cf-4ce4-8a73-5c6298a32e6c\") " pod="kube-system/kube-proxy-9w7rs" Feb 13 20:54:40.372090 kubelet[3429]: E0213 20:54:40.371862 3429 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:54:40.372090 kubelet[3429]: E0213 20:54:40.371927 3429 projected.go:200] Error preparing data for projected volume kube-api-access-wlv4r for pod kube-system/kube-proxy-9w7rs: configmap "kube-root-ca.crt" not found Feb 13 20:54:40.372090 kubelet[3429]: E0213 20:54:40.372086 3429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d05afa4-85cf-4ce4-8a73-5c6298a32e6c-kube-api-access-wlv4r podName:1d05afa4-85cf-4ce4-8a73-5c6298a32e6c nodeName:}" failed. No retries permitted until 2025-02-13 20:54:40.872040171 +0000 UTC m=+14.550768899 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wlv4r" (UniqueName: "kubernetes.io/projected/1d05afa4-85cf-4ce4-8a73-5c6298a32e6c-kube-api-access-wlv4r") pod "kube-proxy-9w7rs" (UID: "1d05afa4-85cf-4ce4-8a73-5c6298a32e6c") : configmap "kube-root-ca.crt" not found Feb 13 20:54:40.963988 kubelet[3429]: E0213 20:54:40.963893 3429 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:54:40.963988 kubelet[3429]: E0213 20:54:40.963955 3429 projected.go:200] Error preparing data for projected volume kube-api-access-wlv4r for pod kube-system/kube-proxy-9w7rs: configmap "kube-root-ca.crt" not found Feb 13 20:54:40.964341 kubelet[3429]: E0213 20:54:40.964077 3429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d05afa4-85cf-4ce4-8a73-5c6298a32e6c-kube-api-access-wlv4r podName:1d05afa4-85cf-4ce4-8a73-5c6298a32e6c nodeName:}" failed. No retries permitted until 2025-02-13 20:54:41.964021442 +0000 UTC m=+15.642750182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wlv4r" (UniqueName: "kubernetes.io/projected/1d05afa4-85cf-4ce4-8a73-5c6298a32e6c-kube-api-access-wlv4r") pod "kube-proxy-9w7rs" (UID: "1d05afa4-85cf-4ce4-8a73-5c6298a32e6c") : configmap "kube-root-ca.crt" not found Feb 13 20:54:41.322327 kubelet[3429]: I0213 20:54:41.322266 3429 topology_manager.go:215] "Topology Admit Handler" podUID="cd8ddeed-4fbd-43ac-a0bf-0d64c344570c" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-fgh5g" Feb 13 20:54:41.368000 kubelet[3429]: I0213 20:54:41.367865 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgx5m\" (UniqueName: \"kubernetes.io/projected/cd8ddeed-4fbd-43ac-a0bf-0d64c344570c-kube-api-access-sgx5m\") pod \"tigera-operator-7bc55997bb-fgh5g\" (UID: \"cd8ddeed-4fbd-43ac-a0bf-0d64c344570c\") " pod="tigera-operator/tigera-operator-7bc55997bb-fgh5g" Feb 13 20:54:41.368249 kubelet[3429]: I0213 20:54:41.368063 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd8ddeed-4fbd-43ac-a0bf-0d64c344570c-var-lib-calico\") pod \"tigera-operator-7bc55997bb-fgh5g\" (UID: \"cd8ddeed-4fbd-43ac-a0bf-0d64c344570c\") " pod="tigera-operator/tigera-operator-7bc55997bb-fgh5g" Feb 13 20:54:41.629021 containerd[1918]: time="2025-02-13T20:54:41.628787225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-fgh5g,Uid:cd8ddeed-4fbd-43ac-a0bf-0d64c344570c,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:54:41.640172 containerd[1918]: time="2025-02-13T20:54:41.640059156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:54:41.640172 containerd[1918]: time="2025-02-13T20:54:41.640149777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:54:41.640331 containerd[1918]: time="2025-02-13T20:54:41.640165299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:41.640602 containerd[1918]: time="2025-02-13T20:54:41.640541158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:41.683495 containerd[1918]: time="2025-02-13T20:54:41.683439046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-fgh5g,Uid:cd8ddeed-4fbd-43ac-a0bf-0d64c344570c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e4cabd4c3f1b3eb1f7b1526fe205bb8e9c607c920b94b09e0cb38aad0bbcc645\"" Feb 13 20:54:41.684383 containerd[1918]: time="2025-02-13T20:54:41.684370024Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:54:42.032776 containerd[1918]: time="2025-02-13T20:54:42.032511157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9w7rs,Uid:1d05afa4-85cf-4ce4-8a73-5c6298a32e6c,Namespace:kube-system,Attempt:0,}" Feb 13 20:54:42.042939 containerd[1918]: time="2025-02-13T20:54:42.042896700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:54:42.042939 containerd[1918]: time="2025-02-13T20:54:42.042931439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:54:42.043059 containerd[1918]: time="2025-02-13T20:54:42.042946540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:42.043324 containerd[1918]: time="2025-02-13T20:54:42.043308388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:42.067207 containerd[1918]: time="2025-02-13T20:54:42.067182083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9w7rs,Uid:1d05afa4-85cf-4ce4-8a73-5c6298a32e6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"39da66cbb68675671c3593ec88c33f78c94013272024c3ccfb4831e3c59063d9\"" Feb 13 20:54:42.068827 containerd[1918]: time="2025-02-13T20:54:42.068773896Z" level=info msg="CreateContainer within sandbox \"39da66cbb68675671c3593ec88c33f78c94013272024c3ccfb4831e3c59063d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:54:42.074290 containerd[1918]: time="2025-02-13T20:54:42.074248536Z" level=info msg="CreateContainer within sandbox \"39da66cbb68675671c3593ec88c33f78c94013272024c3ccfb4831e3c59063d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ad21099e4369650873ad9609ab53f7475aa863ff12c14ad31f8353d63cf63e4\"" Feb 13 20:54:42.074603 containerd[1918]: time="2025-02-13T20:54:42.074543165Z" level=info msg="StartContainer for \"1ad21099e4369650873ad9609ab53f7475aa863ff12c14ad31f8353d63cf63e4\"" Feb 13 20:54:42.098198 containerd[1918]: time="2025-02-13T20:54:42.098171062Z" level=info msg="StartContainer for \"1ad21099e4369650873ad9609ab53f7475aa863ff12c14ad31f8353d63cf63e4\" returns successfully" Feb 13 20:54:42.426108 kubelet[3429]: I0213 20:54:42.425968 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9w7rs" podStartSLOduration=2.425927671 podStartE2EDuration="2.425927671s" podCreationTimestamp="2025-02-13 20:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:54:42.425870489 +0000 UTC m=+16.104599221" watchObservedRunningTime="2025-02-13 20:54:42.425927671 +0000 UTC m=+16.104656386" Feb 13 20:54:43.281924 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2033381805.mount: Deactivated successfully. Feb 13 20:54:43.542355 containerd[1918]: time="2025-02-13T20:54:43.542298805Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:43.542580 containerd[1918]: time="2025-02-13T20:54:43.542513132Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:54:43.542802 containerd[1918]: time="2025-02-13T20:54:43.542785938Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:43.544033 containerd[1918]: time="2025-02-13T20:54:43.543994128Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:43.544483 containerd[1918]: time="2025-02-13T20:54:43.544470217Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.860083075s" Feb 13 20:54:43.544519 containerd[1918]: time="2025-02-13T20:54:43.544484873Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:54:43.545415 containerd[1918]: time="2025-02-13T20:54:43.545401552Z" level=info msg="CreateContainer within sandbox \"e4cabd4c3f1b3eb1f7b1526fe205bb8e9c607c920b94b09e0cb38aad0bbcc645\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:54:43.549490 containerd[1918]: time="2025-02-13T20:54:43.549473509Z" level=info msg="CreateContainer within sandbox \"e4cabd4c3f1b3eb1f7b1526fe205bb8e9c607c920b94b09e0cb38aad0bbcc645\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"18d30570da6ca68f9cb1fec42ba7caa604928000a3e6897461485366bf75d3f4\"" Feb 13 20:54:43.549714 containerd[1918]: time="2025-02-13T20:54:43.549697903Z" level=info msg="StartContainer for \"18d30570da6ca68f9cb1fec42ba7caa604928000a3e6897461485366bf75d3f4\"" Feb 13 20:54:43.575408 containerd[1918]: time="2025-02-13T20:54:43.575388744Z" level=info msg="StartContainer for \"18d30570da6ca68f9cb1fec42ba7caa604928000a3e6897461485366bf75d3f4\" returns successfully" Feb 13 20:54:44.425184 kubelet[3429]: I0213 20:54:44.425129 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-fgh5g" podStartSLOduration=1.5644252440000002 podStartE2EDuration="3.425120791s" podCreationTimestamp="2025-02-13 20:54:41 +0000 UTC" firstStartedPulling="2025-02-13 20:54:41.684157367 +0000 UTC m=+15.362886028" lastFinishedPulling="2025-02-13 20:54:43.544852913 +0000 UTC m=+17.223581575" observedRunningTime="2025-02-13 20:54:44.42503214 +0000 UTC m=+18.103760808" watchObservedRunningTime="2025-02-13 20:54:44.425120791 +0000 UTC m=+18.103849454" Feb 13 20:54:46.403721 kubelet[3429]: I0213 20:54:46.403659 3429 topology_manager.go:215] "Topology Admit Handler" podUID="2fd6af92-e552-4fbe-a075-7f0640934230" podNamespace="calico-system" podName="calico-typha-7cf965854c-ldbr4" Feb 
13 20:54:46.427838 kubelet[3429]: I0213 20:54:46.427807 3429 topology_manager.go:215] "Topology Admit Handler" podUID="aca86446-0900-4c88-9816-fd5ab475d1f4" podNamespace="calico-system" podName="calico-node-2q2tv" Feb 13 20:54:46.507750 kubelet[3429]: I0213 20:54:46.507627 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aca86446-0900-4c88-9816-fd5ab475d1f4-node-certs\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.508000 kubelet[3429]: I0213 20:54:46.507782 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-policysync\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.508000 kubelet[3429]: I0213 20:54:46.507889 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnctc\" (UniqueName: \"kubernetes.io/projected/aca86446-0900-4c88-9816-fd5ab475d1f4-kube-api-access-wnctc\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.508228 kubelet[3429]: I0213 20:54:46.508003 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd6af92-e552-4fbe-a075-7f0640934230-tigera-ca-bundle\") pod \"calico-typha-7cf965854c-ldbr4\" (UID: \"2fd6af92-e552-4fbe-a075-7f0640934230\") " pod="calico-system/calico-typha-7cf965854c-ldbr4" Feb 13 20:54:46.508228 kubelet[3429]: I0213 20:54:46.508099 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2fd6af92-e552-4fbe-a075-7f0640934230-typha-certs\") pod \"calico-typha-7cf965854c-ldbr4\" (UID: \"2fd6af92-e552-4fbe-a075-7f0640934230\") " pod="calico-system/calico-typha-7cf965854c-ldbr4" Feb 13 20:54:46.508228 kubelet[3429]: I0213 20:54:46.508158 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca86446-0900-4c88-9816-fd5ab475d1f4-tigera-ca-bundle\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.508228 kubelet[3429]: I0213 20:54:46.508205 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-var-run-calico\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.508652 kubelet[3429]: I0213 20:54:46.508254 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-cni-bin-dir\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.508652 kubelet[3429]: I0213 20:54:46.508301 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-cni-net-dir\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.508652 kubelet[3429]: I0213 20:54:46.508353 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74bgz\" (UniqueName: \"kubernetes.io/projected/2fd6af92-e552-4fbe-a075-7f0640934230-kube-api-access-74bgz\") pod \"calico-typha-7cf965854c-ldbr4\" (UID: \"2fd6af92-e552-4fbe-a075-7f0640934230\") " pod="calico-system/calico-typha-7cf965854c-ldbr4" Feb 13 20:54:46.508652 kubelet[3429]: I0213 20:54:46.508403 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-lib-modules\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.508652 kubelet[3429]: I0213 20:54:46.508568 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-var-lib-calico\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.509234 kubelet[3429]: I0213 20:54:46.508730 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-xtables-lock\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.509234 kubelet[3429]: I0213 20:54:46.508820 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-cni-log-dir\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.509234 kubelet[3429]: I0213 20:54:46.508874 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aca86446-0900-4c88-9816-fd5ab475d1f4-flexvol-driver-host\") pod \"calico-node-2q2tv\" (UID: \"aca86446-0900-4c88-9816-fd5ab475d1f4\") " pod="calico-system/calico-node-2q2tv" Feb 13 20:54:46.555383 kubelet[3429]: I0213 20:54:46.555301 3429 topology_manager.go:215] "Topology Admit Handler" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" podNamespace="calico-system" podName="csi-node-driver-fqp2c" Feb 13 20:54:46.556324 kubelet[3429]: E0213 20:54:46.556247 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqp2c" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" Feb 13 20:54:46.610067 kubelet[3429]: I0213 20:54:46.610041 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvld5\" (UniqueName: \"kubernetes.io/projected/33fccad8-e90d-49bb-89c6-670419a141a0-kube-api-access-wvld5\") pod \"csi-node-driver-fqp2c\" (UID: \"33fccad8-e90d-49bb-89c6-670419a141a0\") " 
pod="calico-system/csi-node-driver-fqp2c" Feb 13 20:54:46.610194 kubelet[3429]: I0213 20:54:46.610131 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/33fccad8-e90d-49bb-89c6-670419a141a0-varrun\") pod \"csi-node-driver-fqp2c\" (UID: \"33fccad8-e90d-49bb-89c6-670419a141a0\") " pod="calico-system/csi-node-driver-fqp2c" Feb 13 20:54:46.610194 kubelet[3429]: I0213 20:54:46.610165 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33fccad8-e90d-49bb-89c6-670419a141a0-kubelet-dir\") pod \"csi-node-driver-fqp2c\" (UID: \"33fccad8-e90d-49bb-89c6-670419a141a0\") " pod="calico-system/csi-node-driver-fqp2c" Feb 13 20:54:46.610313 kubelet[3429]: I0213 20:54:46.610203 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/33fccad8-e90d-49bb-89c6-670419a141a0-registration-dir\") pod \"csi-node-driver-fqp2c\" (UID: \"33fccad8-e90d-49bb-89c6-670419a141a0\") " pod="calico-system/csi-node-driver-fqp2c" Feb 13 20:54:46.610313 kubelet[3429]: I0213 20:54:46.610237 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/33fccad8-e90d-49bb-89c6-670419a141a0-socket-dir\") pod \"csi-node-driver-fqp2c\" (UID: \"33fccad8-e90d-49bb-89c6-670419a141a0\") " pod="calico-system/csi-node-driver-fqp2c" Feb 13 20:54:46.616952 kubelet[3429]: E0213 20:54:46.616903 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.616952 kubelet[3429]: W0213 20:54:46.616924 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.616952 kubelet[3429]: E0213 20:54:46.616953 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.617232 kubelet[3429]: E0213 20:54:46.617216 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.617232 kubelet[3429]: W0213 20:54:46.617225 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.617232 kubelet[3429]: E0213 20:54:46.617236 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:46.618585 kubelet[3429]: E0213 20:54:46.618539 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.618585 kubelet[3429]: W0213 20:54:46.618553 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.618585 kubelet[3429]: E0213 20:54:46.618568 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.618752 kubelet[3429]: E0213 20:54:46.618739 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.618752 kubelet[3429]: W0213 20:54:46.618752 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.618833 kubelet[3429]: E0213 20:54:46.618767 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.618936 kubelet[3429]: E0213 20:54:46.618926 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.618936 kubelet[3429]: W0213 20:54:46.618935 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.619009 kubelet[3429]: E0213 20:54:46.618946 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.619124 kubelet[3429]: E0213 20:54:46.619094 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.619124 kubelet[3429]: W0213 20:54:46.619103 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.619124 kubelet[3429]: E0213 20:54:46.619111 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 13 20:54:46.709514 containerd[1918]: time="2025-02-13T20:54:46.709253459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf965854c-ldbr4,Uid:2fd6af92-e552-4fbe-a075-7f0640934230,Namespace:calico-system,Attempt:0,}" Feb 13 20:54:46.711278 kubelet[3429]: E0213 20:54:46.711230 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.711278 kubelet[3429]: W0213 20:54:46.711274 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.711643 kubelet[3429]: E0213 20:54:46.711316 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.712003 kubelet[3429]: E0213 20:54:46.711962 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.712003 kubelet[3429]: W0213 20:54:46.711999 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.712249 kubelet[3429]: E0213 20:54:46.712044 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.712638 kubelet[3429]: E0213 20:54:46.712605 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.712793 kubelet[3429]: W0213 20:54:46.712637 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.712793 kubelet[3429]: E0213 20:54:46.712675 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.713316 kubelet[3429]: E0213 20:54:46.713278 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.713513 kubelet[3429]: W0213 20:54:46.713319 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.713513 kubelet[3429]: E0213 20:54:46.713364 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:46.713917 kubelet[3429]: E0213 20:54:46.713843 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.713917 kubelet[3429]: W0213 20:54:46.713870 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.714220 kubelet[3429]: E0213 20:54:46.713991 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.714330 kubelet[3429]: E0213 20:54:46.714312 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.714468 kubelet[3429]: W0213 20:54:46.714336 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.714468 kubelet[3429]: E0213 20:54:46.714409 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.714871 kubelet[3429]: E0213 20:54:46.714805 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.714871 kubelet[3429]: W0213 20:54:46.714829 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.715175 kubelet[3429]: E0213 20:54:46.714939 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.715399 kubelet[3429]: E0213 20:54:46.715356 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.715399 kubelet[3429]: W0213 20:54:46.715394 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.715610 kubelet[3429]: E0213 20:54:46.715498 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.716030 kubelet[3429]: E0213 20:54:46.715955 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.716030 kubelet[3429]: W0213 20:54:46.715992 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.716336 kubelet[3429]: E0213 20:54:46.716120 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:46.716490 kubelet[3429]: E0213 20:54:46.716417 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.716490 kubelet[3429]: W0213 20:54:46.716478 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.716693 kubelet[3429]: E0213 20:54:46.716551 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.717099 kubelet[3429]: E0213 20:54:46.717025 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.717099 kubelet[3429]: W0213 20:54:46.717061 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.717401 kubelet[3429]: E0213 20:54:46.717175 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.717729 kubelet[3429]: E0213 20:54:46.717644 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.717729 kubelet[3429]: W0213 20:54:46.717680 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.718066 kubelet[3429]: E0213 20:54:46.717793 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.718261 kubelet[3429]: E0213 20:54:46.718225 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.718529 kubelet[3429]: W0213 20:54:46.718260 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.718529 kubelet[3429]: E0213 20:54:46.718385 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.718995 kubelet[3429]: E0213 20:54:46.718910 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.718995 kubelet[3429]: W0213 20:54:46.718948 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.719288 kubelet[3429]: E0213 20:54:46.719071 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:46.719551 kubelet[3429]: E0213 20:54:46.719507 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.719551 kubelet[3429]: W0213 20:54:46.719536 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.720007 kubelet[3429]: E0213 20:54:46.719647 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.720007 kubelet[3429]: E0213 20:54:46.719997 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.720354 kubelet[3429]: W0213 20:54:46.720030 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.720354 kubelet[3429]: E0213 20:54:46.720149 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.720728 kubelet[3429]: E0213 20:54:46.720573 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.720728 kubelet[3429]: W0213 20:54:46.720600 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.720728 kubelet[3429]: E0213 20:54:46.720709 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.721216 kubelet[3429]: E0213 20:54:46.721156 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.721216 kubelet[3429]: W0213 20:54:46.721189 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.721581 kubelet[3429]: E0213 20:54:46.721290 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.721938 kubelet[3429]: E0213 20:54:46.721882 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.721938 kubelet[3429]: W0213 20:54:46.721920 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.722329 kubelet[3429]: E0213 20:54:46.722045 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:46.722545 kubelet[3429]: E0213 20:54:46.722413 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.722545 kubelet[3429]: W0213 20:54:46.722474 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.722680 kubelet[3429]: E0213 20:54:46.722586 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.722824 kubelet[3429]: E0213 20:54:46.722816 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.722824 kubelet[3429]: W0213 20:54:46.722823 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.722908 kubelet[3429]: E0213 20:54:46.722839 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.722989 kubelet[3429]: E0213 20:54:46.722982 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.722989 kubelet[3429]: W0213 20:54:46.722988 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.723039 kubelet[3429]: E0213 20:54:46.723002 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.723095 kubelet[3429]: E0213 20:54:46.723090 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.723119 kubelet[3429]: W0213 20:54:46.723095 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.723119 kubelet[3429]: E0213 20:54:46.723102 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.723224 kubelet[3429]: E0213 20:54:46.723217 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.723242 kubelet[3429]: W0213 20:54:46.723225 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.723242 kubelet[3429]: E0213 20:54:46.723235 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:46.723356 kubelet[3429]: E0213 20:54:46.723350 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.723375 kubelet[3429]: W0213 20:54:46.723357 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.723375 kubelet[3429]: E0213 20:54:46.723365 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.726280 kubelet[3429]: E0213 20:54:46.726242 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:46.726280 kubelet[3429]: W0213 20:54:46.726249 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:46.726280 kubelet[3429]: E0213 20:54:46.726256 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:46.730735 containerd[1918]: time="2025-02-13T20:54:46.730715330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2q2tv,Uid:aca86446-0900-4c88-9816-fd5ab475d1f4,Namespace:calico-system,Attempt:0,}" Feb 13 20:54:47.181633 containerd[1918]: time="2025-02-13T20:54:47.181560417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:54:47.181633 containerd[1918]: time="2025-02-13T20:54:47.181618831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:54:47.181800 containerd[1918]: time="2025-02-13T20:54:47.181634588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:47.181800 containerd[1918]: time="2025-02-13T20:54:47.181721997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:47.222812 containerd[1918]: time="2025-02-13T20:54:47.222793042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf965854c-ldbr4,Uid:2fd6af92-e552-4fbe-a075-7f0640934230,Namespace:calico-system,Attempt:0,} returns sandbox id \"44e4abe0bf889bf6b1860e7a62a7f4df9dd20a15d0b6e2df6d311d39ed8be532\"" Feb 13 20:54:47.223486 containerd[1918]: time="2025-02-13T20:54:47.223474095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:54:47.229517 containerd[1918]: time="2025-02-13T20:54:47.229277508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:54:47.229517 containerd[1918]: time="2025-02-13T20:54:47.229482005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:54:47.229517 containerd[1918]: time="2025-02-13T20:54:47.229490047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:47.229636 containerd[1918]: time="2025-02-13T20:54:47.229535252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:54:47.253239 containerd[1918]: time="2025-02-13T20:54:47.253215831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2q2tv,Uid:aca86446-0900-4c88-9816-fd5ab475d1f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"07c0da706aad620fd3a971e1364b5ad5e8b807c76eed717f74a6a43b1ba5f8fa\"" Feb 13 20:54:48.363181 kubelet[3429]: E0213 20:54:48.363055 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqp2c" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" Feb 13 20:54:48.900186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720400515.mount: Deactivated successfully. Feb 13 20:54:49.526524 containerd[1918]: time="2025-02-13T20:54:49.526479534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:49.526725 containerd[1918]: time="2025-02-13T20:54:49.526652593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 20:54:49.526979 containerd[1918]: time="2025-02-13T20:54:49.526942092Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:49.528011 containerd[1918]: time="2025-02-13T20:54:49.527971745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:49.528430 containerd[1918]: time="2025-02-13T20:54:49.528388136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.304898178s" Feb 13 20:54:49.528430 containerd[1918]: time="2025-02-13T20:54:49.528402283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:54:49.528902 containerd[1918]: time="2025-02-13T20:54:49.528860192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:54:49.531940 containerd[1918]: time="2025-02-13T20:54:49.531883808Z" level=info msg="CreateContainer within sandbox \"44e4abe0bf889bf6b1860e7a62a7f4df9dd20a15d0b6e2df6d311d39ed8be532\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:54:49.535570 containerd[1918]: time="2025-02-13T20:54:49.535526605Z" level=info msg="CreateContainer within sandbox \"44e4abe0bf889bf6b1860e7a62a7f4df9dd20a15d0b6e2df6d311d39ed8be532\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4da42931de503c9a74383c8e9db75417ed6910d80b6a15261127fe531f1dac18\"" Feb 13 20:54:49.535726 containerd[1918]: time="2025-02-13T20:54:49.535676173Z" level=info msg="StartContainer for \"4da42931de503c9a74383c8e9db75417ed6910d80b6a15261127fe531f1dac18\""
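The containerd records above show the normal CRI sequence for the typha pod: RunPodSandbox returns a sandbox id, PullImage fetches the image, and CreateContainer/StartContainer then reference that sandbox id. The pull record also carries enough data for a quick throughput estimate, size \"31343217\" bytes in 2.304898178s, roughly 13.6 MB/s:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the containerd "Pulled image" record above.
	const imageBytes = 31343217 // size of calico/typha:v3.29.1
	pull, _ := time.ParseDuration("2.304898178s")
	fmt.Printf("%.1f MB/s\n", float64(imageBytes)/pull.Seconds()/1e6) // 13.6 MB/s
}
```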
time="2025-02-13T20:54:49.535676173Z" level=info msg="StartContainer for \"4da42931de503c9a74383c8e9db75417ed6910d80b6a15261127fe531f1dac18\"" Feb 13 20:54:49.582491 containerd[1918]: time="2025-02-13T20:54:49.582469650Z" level=info msg="StartContainer for \"4da42931de503c9a74383c8e9db75417ed6910d80b6a15261127fe531f1dac18\" returns successfully" Feb 13 20:54:50.364053 kubelet[3429]: E0213 20:54:50.363919 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqp2c" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" Feb 13 20:54:50.434935 kubelet[3429]: I0213 20:54:50.434870 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf965854c-ldbr4" podStartSLOduration=2.1294120850000002 podStartE2EDuration="4.434857119s" podCreationTimestamp="2025-02-13 20:54:46 +0000 UTC" firstStartedPulling="2025-02-13 20:54:47.223343843 +0000 UTC m=+20.902072505" lastFinishedPulling="2025-02-13 20:54:49.528788876 +0000 UTC m=+23.207517539" observedRunningTime="2025-02-13 20:54:50.434770572 +0000 UTC m=+24.113499238" watchObservedRunningTime="2025-02-13 20:54:50.434857119 +0000 UTC m=+24.113585780" Feb 13 20:54:50.522184 kubelet[3429]: E0213 20:54:50.522086 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.522184 kubelet[3429]: W0213 20:54:50.522138 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.522184 kubelet[3429]: E0213 20:54:50.522180 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.522889 kubelet[3429]: E0213 20:54:50.522809 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.522889 kubelet[3429]: W0213 20:54:50.522848 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.522889 kubelet[3429]: E0213 20:54:50.522880 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.523579 kubelet[3429]: E0213 20:54:50.523501 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.523579 kubelet[3429]: W0213 20:54:50.523539 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.523579 kubelet[3429]: E0213 20:54:50.523577 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 13 20:54:50.522184 kubelet[3429]: E0213 20:54:50.522086 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.522184 kubelet[3429]: W0213 20:54:50.522138 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.522184 kubelet[3429]: E0213 20:54:50.522180 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.522889 kubelet[3429]: E0213 20:54:50.522809 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.522889 kubelet[3429]: W0213 20:54:50.522848 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.522889 kubelet[3429]: E0213 20:54:50.522880 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.523579 kubelet[3429]: E0213 20:54:50.523501 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.523579 kubelet[3429]: W0213 20:54:50.523539 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.523579 kubelet[3429]: E0213 20:54:50.523577 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.524268 kubelet[3429]: E0213 20:54:50.524191 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.524268 kubelet[3429]: W0213 20:54:50.524228 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.524268 kubelet[3429]: E0213 20:54:50.524262 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.524961 kubelet[3429]: E0213 20:54:50.524883 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.524961 kubelet[3429]: W0213 20:54:50.524920 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.524961 kubelet[3429]: E0213 20:54:50.524955 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.525601 kubelet[3429]: E0213 20:54:50.525510 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.525601 kubelet[3429]: W0213 20:54:50.525538 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.525601 kubelet[3429]: E0213 20:54:50.525567 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.526265 kubelet[3429]: E0213 20:54:50.526179 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.526265 kubelet[3429]: W0213 20:54:50.526218 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.526265 kubelet[3429]: E0213 20:54:50.526251 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.526944 kubelet[3429]: E0213 20:54:50.526866 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.526944 kubelet[3429]: W0213 20:54:50.526902 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.526944 kubelet[3429]: E0213 20:54:50.526937 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:50.527637 kubelet[3429]: E0213 20:54:50.527564 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.527637 kubelet[3429]: W0213 20:54:50.527593 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.527637 kubelet[3429]: E0213 20:54:50.527629 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.528256 kubelet[3429]: E0213 20:54:50.528179 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.528256 kubelet[3429]: W0213 20:54:50.528207 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.528256 kubelet[3429]: E0213 20:54:50.528258 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.528931 kubelet[3429]: E0213 20:54:50.528880 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.528931 kubelet[3429]: W0213 20:54:50.528908 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.529178 kubelet[3429]: E0213 20:54:50.528936 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.529537 kubelet[3429]: E0213 20:54:50.529460 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.529537 kubelet[3429]: W0213 20:54:50.529490 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.529537 kubelet[3429]: E0213 20:54:50.529518 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.530144 kubelet[3429]: E0213 20:54:50.530068 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.530144 kubelet[3429]: W0213 20:54:50.530096 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.530144 kubelet[3429]: E0213 20:54:50.530123 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:50.530733 kubelet[3429]: E0213 20:54:50.530662 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.530733 kubelet[3429]: W0213 20:54:50.530690 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.530733 kubelet[3429]: E0213 20:54:50.530720 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.531369 kubelet[3429]: E0213 20:54:50.531292 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.531369 kubelet[3429]: W0213 20:54:50.531328 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.531369 kubelet[3429]: E0213 20:54:50.531363 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.543878 kubelet[3429]: E0213 20:54:50.543799 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.543878 kubelet[3429]: W0213 20:54:50.543838 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.543878 kubelet[3429]: E0213 20:54:50.543871 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.544648 kubelet[3429]: E0213 20:54:50.544569 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.544648 kubelet[3429]: W0213 20:54:50.544605 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.544648 kubelet[3429]: E0213 20:54:50.544645 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.545365 kubelet[3429]: E0213 20:54:50.545281 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.545365 kubelet[3429]: W0213 20:54:50.545320 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.545365 kubelet[3429]: E0213 20:54:50.545363 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:50.546119 kubelet[3429]: E0213 20:54:50.546063 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.546119 kubelet[3429]: W0213 20:54:50.546111 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.546571 kubelet[3429]: E0213 20:54:50.546169 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.546836 kubelet[3429]: E0213 20:54:50.546797 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.546836 kubelet[3429]: W0213 20:54:50.546832 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.547168 kubelet[3429]: E0213 20:54:50.546961 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.547445 kubelet[3429]: E0213 20:54:50.547393 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.547445 kubelet[3429]: W0213 20:54:50.547436 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.547790 kubelet[3429]: E0213 20:54:50.547544 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.548036 kubelet[3429]: E0213 20:54:50.547993 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.548036 kubelet[3429]: W0213 20:54:50.548022 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.548335 kubelet[3429]: E0213 20:54:50.548143 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.548572 kubelet[3429]: E0213 20:54:50.548526 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.548572 kubelet[3429]: W0213 20:54:50.548561 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.548918 kubelet[3429]: E0213 20:54:50.548695 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:50.549138 kubelet[3429]: E0213 20:54:50.549098 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.549138 kubelet[3429]: W0213 20:54:50.549127 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.549498 kubelet[3429]: E0213 20:54:50.549175 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.549728 kubelet[3429]: E0213 20:54:50.549690 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.549951 kubelet[3429]: W0213 20:54:50.549727 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.549951 kubelet[3429]: E0213 20:54:50.549845 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.550309 kubelet[3429]: E0213 20:54:50.550273 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.550515 kubelet[3429]: W0213 20:54:50.550305 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.550515 kubelet[3429]: E0213 20:54:50.550390 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.551037 kubelet[3429]: E0213 20:54:50.550986 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.551037 kubelet[3429]: W0213 20:54:50.551031 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.551456 kubelet[3429]: E0213 20:54:50.551146 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.551723 kubelet[3429]: E0213 20:54:50.551683 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.551723 kubelet[3429]: W0213 20:54:50.551719 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.552042 kubelet[3429]: E0213 20:54:50.551834 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:50.552344 kubelet[3429]: E0213 20:54:50.552305 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.552344 kubelet[3429]: W0213 20:54:50.552339 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.552720 kubelet[3429]: E0213 20:54:50.552404 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.552964 kubelet[3429]: E0213 20:54:50.552924 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.552964 kubelet[3429]: W0213 20:54:50.552955 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.553298 kubelet[3429]: E0213 20:54:50.553007 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.553711 kubelet[3429]: E0213 20:54:50.553658 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.553711 kubelet[3429]: W0213 20:54:50.553701 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.554138 kubelet[3429]: E0213 20:54:50.553758 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.554450 kubelet[3429]: E0213 20:54:50.554395 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.554649 kubelet[3429]: W0213 20:54:50.554455 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.554649 kubelet[3429]: E0213 20:54:50.554505 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:54:50.555876 kubelet[3429]: E0213 20:54:50.555823 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:54:50.555876 kubelet[3429]: W0213 20:54:50.555866 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:54:50.556213 kubelet[3429]: E0213 20:54:50.555914 3429 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:54:51.130812 containerd[1918]: time="2025-02-13T20:54:51.130759359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:51.131029 containerd[1918]: time="2025-02-13T20:54:51.130990697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:54:51.131323 containerd[1918]: time="2025-02-13T20:54:51.131312406Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:51.132290 containerd[1918]: time="2025-02-13T20:54:51.132254525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:51.132963 containerd[1918]: time="2025-02-13T20:54:51.132921373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.604043401s" Feb 13 20:54:51.132963 containerd[1918]: time="2025-02-13T20:54:51.132938352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:54:51.133920 containerd[1918]: time="2025-02-13T20:54:51.133878288Z" level=info msg="CreateContainer within sandbox \"07c0da706aad620fd3a971e1364b5ad5e8b807c76eed717f74a6a43b1ba5f8fa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:54:51.139342 containerd[1918]: time="2025-02-13T20:54:51.139324131Z" level=info msg="CreateContainer within sandbox \"07c0da706aad620fd3a971e1364b5ad5e8b807c76eed717f74a6a43b1ba5f8fa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e8d83111cfa2fddf89244b993eecdc6ef164c7d22fc752a49af399d7ce1267d4\"" Feb 13 20:54:51.139597 containerd[1918]: time="2025-02-13T20:54:51.139564944Z" level=info msg="StartContainer for \"e8d83111cfa2fddf89244b993eecdc6ef164c7d22fc752a49af399d7ce1267d4\"" Feb 13 20:54:51.169523 containerd[1918]: time="2025-02-13T20:54:51.169502639Z" level=info msg="StartContainer for \"e8d83111cfa2fddf89244b993eecdc6ef164c7d22fc752a49af399d7ce1267d4\" returns successfully" Feb 13 20:54:51.184322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8d83111cfa2fddf89244b993eecdc6ef164c7d22fc752a49af399d7ce1267d4-rootfs.mount: Deactivated successfully. 
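The flexvol-driver container started just above is calico-node's first init container, built from the pod2daemon-flexvol image whose pull is logged before it. In a stock Calico install its job is to copy the uds FlexVolume driver binary into the kubelet plugin directory probed throughout this log, which is why the nodeagent~uds error storm does not recur after this point. Because it is an init container it exits as soon as the copy is done; the "shim disconnected" messages and the rootfs unmount for container e8d83111… below are containerd cleaning up after that normal exit. For illustration only, the smallest driver binary that would satisfy the init call kubelet kept retrying (not Calico's actual implementation):

```go
// Hypothetical minimal FlexVolume driver; a real driver also implements
// mount/unmount calls, but "init" is all the prober above exercises.
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Tell kubelet the driver is usable and does not implement attach.
		fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
		return
	}
	// Conventional reply for calls the driver does not implement.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```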
Feb 13 20:54:51.434566 kubelet[3429]: I0213 20:54:51.434391 3429 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:54:51.959040 containerd[1918]: time="2025-02-13T20:54:51.959002502Z" level=info msg="shim disconnected" id=e8d83111cfa2fddf89244b993eecdc6ef164c7d22fc752a49af399d7ce1267d4 namespace=k8s.io Feb 13 20:54:51.959040 containerd[1918]: time="2025-02-13T20:54:51.959038029Z" level=warning msg="cleaning up after shim disconnected" id=e8d83111cfa2fddf89244b993eecdc6ef164c7d22fc752a49af399d7ce1267d4 namespace=k8s.io Feb 13 20:54:51.959040 containerd[1918]: time="2025-02-13T20:54:51.959043478Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:54:52.363732 kubelet[3429]: E0213 20:54:52.363624 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqp2c" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" Feb 13 20:54:52.441988 containerd[1918]: time="2025-02-13T20:54:52.441870171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:54:54.363796 kubelet[3429]: E0213 20:54:54.363717 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqp2c" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" Feb 13 20:54:56.363096 kubelet[3429]: E0213 20:54:56.363071 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqp2c" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" Feb 13 20:54:56.597116 containerd[1918]: time="2025-02-13T20:54:56.597058924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:56.597337 containerd[1918]: time="2025-02-13T20:54:56.597303140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:54:56.597599 containerd[1918]: time="2025-02-13T20:54:56.597558112Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:56.598684 containerd[1918]: time="2025-02-13T20:54:56.598644090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:54:56.599075 containerd[1918]: time="2025-02-13T20:54:56.599053269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.157105331s" Feb 13 20:54:56.599075 containerd[1918]: time="2025-02-13T20:54:56.599069850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:54:56.600102 containerd[1918]: time="2025-02-13T20:54:56.600060022Z" level=info msg="CreateContainer within sandbox \"07c0da706aad620fd3a971e1364b5ad5e8b807c76eed717f74a6a43b1ba5f8fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:54:56.604641 containerd[1918]: time="2025-02-13T20:54:56.604577646Z" level=info msg="CreateContainer within sandbox \"07c0da706aad620fd3a971e1364b5ad5e8b807c76eed717f74a6a43b1ba5f8fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d6e1594c8d3df91c3b8fbcd91a3ad8cb079faab121e98045c2e80765b6142ae\"" Feb 13 20:54:56.604809 containerd[1918]: time="2025-02-13T20:54:56.604794335Z" level=info msg="StartContainer for \"1d6e1594c8d3df91c3b8fbcd91a3ad8cb079faab121e98045c2e80765b6142ae\"" Feb 13 20:54:56.647931 containerd[1918]: time="2025-02-13T20:54:56.647837075Z" level=info msg="StartContainer for \"1d6e1594c8d3df91c3b8fbcd91a3ad8cb079faab121e98045c2e80765b6142ae\" returns successfully" Feb 13 20:54:57.195288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6e1594c8d3df91c3b8fbcd91a3ad8cb079faab121e98045c2e80765b6142ae-rootfs.mount: Deactivated successfully. Feb 13 20:54:57.242134 kubelet[3429]: I0213 20:54:57.242099 3429 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:54:57.267791 kubelet[3429]: I0213 20:54:57.267253 3429 topology_manager.go:215] "Topology Admit Handler" podUID="ed2abc63-0eb6-4122-b8d3-cd7022d17802" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b6xth" Feb 13 20:54:57.269001 kubelet[3429]: I0213 20:54:57.268936 3429 topology_manager.go:215] "Topology Admit Handler" podUID="dc173c30-2906-4734-85ec-0b16586ce47f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lkxpm" Feb 13 20:54:57.269903 kubelet[3429]: I0213 20:54:57.269840 3429 topology_manager.go:215] "Topology Admit Handler" podUID="aa8386d9-1397-4a7f-9ace-37696d683da6" podNamespace="calico-system" podName="calico-kube-controllers-554f7dd6cb-n9jmw" Feb 13 20:54:57.270721 kubelet[3429]: I0213 20:54:57.270658 3429 topology_manager.go:215] "Topology Admit Handler" podUID="14070312-726d-4bcd-91eb-341f8e9a1a5e" podNamespace="calico-apiserver" podName="calico-apiserver-784664ffb7-4nzlt" Feb 13 20:54:57.271476 kubelet[3429]: I0213 20:54:57.271401 3429 topology_manager.go:215] "Topology Admit Handler" podUID="03b0b730-6f3a-4b02-bedd-65f23a457b35" podNamespace="calico-apiserver" podName="calico-apiserver-784664ffb7-z5wx4" Feb 13 20:54:57.293412 kubelet[3429]: I0213 20:54:57.293354 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14070312-726d-4bcd-91eb-341f8e9a1a5e-calico-apiserver-certs\") pod \"calico-apiserver-784664ffb7-4nzlt\" (UID: \"14070312-726d-4bcd-91eb-341f8e9a1a5e\") " pod="calico-apiserver/calico-apiserver-784664ffb7-4nzlt" Feb 13 20:54:57.293706 kubelet[3429]: I0213 20:54:57.293491 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed2abc63-0eb6-4122-b8d3-cd7022d17802-config-volume\") pod \"coredns-7db6d8ff4d-b6xth\" (UID: \"ed2abc63-0eb6-4122-b8d3-cd7022d17802\") " pod="kube-system/coredns-7db6d8ff4d-b6xth" Feb 13 20:54:57.293706 kubelet[3429]: I0213 20:54:57.293582 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9hswd\" (UniqueName: \"kubernetes.io/projected/ed2abc63-0eb6-4122-b8d3-cd7022d17802-kube-api-access-9hswd\") pod \"coredns-7db6d8ff4d-b6xth\" (UID: \"ed2abc63-0eb6-4122-b8d3-cd7022d17802\") " pod="kube-system/coredns-7db6d8ff4d-b6xth" Feb 13 20:54:57.293706 kubelet[3429]: I0213 20:54:57.293630 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgr48\" (UniqueName: \"kubernetes.io/projected/dc173c30-2906-4734-85ec-0b16586ce47f-kube-api-access-xgr48\") pod \"coredns-7db6d8ff4d-lkxpm\" (UID: \"dc173c30-2906-4734-85ec-0b16586ce47f\") " pod="kube-system/coredns-7db6d8ff4d-lkxpm" Feb 13 20:54:57.293706 kubelet[3429]: I0213 20:54:57.293676 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/03b0b730-6f3a-4b02-bedd-65f23a457b35-calico-apiserver-certs\") pod \"calico-apiserver-784664ffb7-z5wx4\" (UID: \"03b0b730-6f3a-4b02-bedd-65f23a457b35\") " pod="calico-apiserver/calico-apiserver-784664ffb7-z5wx4" Feb 13 20:54:57.294115 kubelet[3429]: I0213 20:54:57.293718 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kncn\" (UniqueName: \"kubernetes.io/projected/14070312-726d-4bcd-91eb-341f8e9a1a5e-kube-api-access-6kncn\") pod \"calico-apiserver-784664ffb7-4nzlt\" (UID: \"14070312-726d-4bcd-91eb-341f8e9a1a5e\") " pod="calico-apiserver/calico-apiserver-784664ffb7-4nzlt" Feb 13 20:54:57.294115 kubelet[3429]: I0213 20:54:57.293755 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc173c30-2906-4734-85ec-0b16586ce47f-config-volume\") pod \"coredns-7db6d8ff4d-lkxpm\" (UID: \"dc173c30-2906-4734-85ec-0b16586ce47f\") " pod="kube-system/coredns-7db6d8ff4d-lkxpm" Feb 13 20:54:57.294115 kubelet[3429]: I0213 20:54:57.293794 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92js2\" (UniqueName: \"kubernetes.io/projected/aa8386d9-1397-4a7f-9ace-37696d683da6-kube-api-access-92js2\") pod \"calico-kube-controllers-554f7dd6cb-n9jmw\" (UID: \"aa8386d9-1397-4a7f-9ace-37696d683da6\") " pod="calico-system/calico-kube-controllers-554f7dd6cb-n9jmw" Feb 13 20:54:57.294115 kubelet[3429]: I0213 20:54:57.293942 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa8386d9-1397-4a7f-9ace-37696d683da6-tigera-ca-bundle\") pod \"calico-kube-controllers-554f7dd6cb-n9jmw\" (UID: \"aa8386d9-1397-4a7f-9ace-37696d683da6\") " pod="calico-system/calico-kube-controllers-554f7dd6cb-n9jmw" Feb 13 20:54:57.294115 kubelet[3429]: I0213 20:54:57.294020 3429 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdkg\" (UniqueName: \"kubernetes.io/projected/03b0b730-6f3a-4b02-bedd-65f23a457b35-kube-api-access-drdkg\") pod \"calico-apiserver-784664ffb7-z5wx4\" (UID: \"03b0b730-6f3a-4b02-bedd-65f23a457b35\") " pod="calico-apiserver/calico-apiserver-784664ffb7-z5wx4" Feb 13 20:54:57.575993 containerd[1918]: time="2025-02-13T20:54:57.575913221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b6xth,Uid:ed2abc63-0eb6-4122-b8d3-cd7022d17802,Namespace:kube-system,Attempt:0,}" Feb 13 20:54:57.577785 containerd[1918]: 
time="2025-02-13T20:54:57.577708424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554f7dd6cb-n9jmw,Uid:aa8386d9-1397-4a7f-9ace-37696d683da6,Namespace:calico-system,Attempt:0,}" Feb 13 20:54:57.580636 containerd[1918]: time="2025-02-13T20:54:57.580564288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lkxpm,Uid:dc173c30-2906-4734-85ec-0b16586ce47f,Namespace:kube-system,Attempt:0,}" Feb 13 20:54:57.582449 containerd[1918]: time="2025-02-13T20:54:57.582340770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784664ffb7-4nzlt,Uid:14070312-726d-4bcd-91eb-341f8e9a1a5e,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:54:57.584173 containerd[1918]: time="2025-02-13T20:54:57.584088095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784664ffb7-z5wx4,Uid:03b0b730-6f3a-4b02-bedd-65f23a457b35,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:54:57.689258 containerd[1918]: time="2025-02-13T20:54:57.689146808Z" level=info msg="shim disconnected" id=1d6e1594c8d3df91c3b8fbcd91a3ad8cb079faab121e98045c2e80765b6142ae namespace=k8s.io Feb 13 20:54:57.689258 containerd[1918]: time="2025-02-13T20:54:57.689192646Z" level=warning msg="cleaning up after shim disconnected" id=1d6e1594c8d3df91c3b8fbcd91a3ad8cb079faab121e98045c2e80765b6142ae namespace=k8s.io Feb 13 20:54:57.689258 containerd[1918]: time="2025-02-13T20:54:57.689212945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:54:57.728874 containerd[1918]: time="2025-02-13T20:54:57.728840648Z" level=error msg="Failed to destroy network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.728985 containerd[1918]: time="2025-02-13T20:54:57.728905841Z" level=error msg="Failed to destroy network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.728985 containerd[1918]: time="2025-02-13T20:54:57.728919028Z" level=error msg="Failed to destroy network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.728985 containerd[1918]: time="2025-02-13T20:54:57.728955774Z" level=error msg="Failed to destroy network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729104 containerd[1918]: time="2025-02-13T20:54:57.729084634Z" level=error msg="encountered an error cleaning up failed sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729143 
containerd[1918]: time="2025-02-13T20:54:57.729123585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784664ffb7-z5wx4,Uid:03b0b730-6f3a-4b02-bedd-65f23a457b35,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729206 containerd[1918]: time="2025-02-13T20:54:57.729139651Z" level=error msg="encountered an error cleaning up failed sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729206 containerd[1918]: time="2025-02-13T20:54:57.729160417Z" level=error msg="encountered an error cleaning up failed sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729206 containerd[1918]: time="2025-02-13T20:54:57.729173797Z" level=error msg="encountered an error cleaning up failed sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729206 containerd[1918]: time="2025-02-13T20:54:57.729188210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554f7dd6cb-n9jmw,Uid:aa8386d9-1397-4a7f-9ace-37696d683da6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729320 containerd[1918]: time="2025-02-13T20:54:57.729206032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b6xth,Uid:ed2abc63-0eb6-4122-b8d3-cd7022d17802,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729320 containerd[1918]: time="2025-02-13T20:54:57.729162434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784664ffb7-4nzlt,Uid:14070312-726d-4bcd-91eb-341f8e9a1a5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729320 
containerd[1918]: time="2025-02-13T20:54:57.729280872Z" level=error msg="Failed to destroy network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729397 kubelet[3429]: E0213 20:54:57.729271 3429 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729397 kubelet[3429]: E0213 20:54:57.729316 3429 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729397 kubelet[3429]: E0213 20:54:57.729335 3429 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784664ffb7-z5wx4" Feb 13 20:54:57.729397 kubelet[3429]: E0213 20:54:57.729339 3429 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784664ffb7-4nzlt" Feb 13 20:54:57.729706 kubelet[3429]: E0213 20:54:57.729348 3429 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784664ffb7-z5wx4" Feb 13 20:54:57.729706 kubelet[3429]: E0213 20:54:57.729353 3429 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784664ffb7-4nzlt" Feb 13 20:54:57.729706 kubelet[3429]: E0213 20:54:57.729277 3429 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729796 containerd[1918]: time="2025-02-13T20:54:57.729414882Z" level=error msg="encountered an error cleaning up failed sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729796 containerd[1918]: time="2025-02-13T20:54:57.729443198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lkxpm,Uid:dc173c30-2906-4734-85ec-0b16586ce47f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729833 kubelet[3429]: E0213 20:54:57.729378 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784664ffb7-4nzlt_calico-apiserver(14070312-726d-4bcd-91eb-341f8e9a1a5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784664ffb7-4nzlt_calico-apiserver(14070312-726d-4bcd-91eb-341f8e9a1a5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784664ffb7-4nzlt" podUID="14070312-726d-4bcd-91eb-341f8e9a1a5e" Feb 13 20:54:57.729833 kubelet[3429]: E0213 20:54:57.729386 3429 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554f7dd6cb-n9jmw" Feb 13 20:54:57.729884 kubelet[3429]: E0213 20:54:57.729378 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784664ffb7-z5wx4_calico-apiserver(03b0b730-6f3a-4b02-bedd-65f23a457b35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784664ffb7-z5wx4_calico-apiserver(03b0b730-6f3a-4b02-bedd-65f23a457b35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784664ffb7-z5wx4" podUID="03b0b730-6f3a-4b02-bedd-65f23a457b35" Feb 13 20:54:57.729884 kubelet[3429]: E0213 20:54:57.729271 3429 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.729884 kubelet[3429]: E0213 20:54:57.729400 3429 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554f7dd6cb-n9jmw" Feb 13 20:54:57.729955 kubelet[3429]: E0213 20:54:57.729407 3429 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b6xth" Feb 13 20:54:57.729955 kubelet[3429]: E0213 20:54:57.729417 3429 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b6xth" Feb 13 20:54:57.729955 kubelet[3429]: E0213 20:54:57.729432 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-554f7dd6cb-n9jmw_calico-system(aa8386d9-1397-4a7f-9ace-37696d683da6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-554f7dd6cb-n9jmw_calico-system(aa8386d9-1397-4a7f-9ace-37696d683da6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-554f7dd6cb-n9jmw" podUID="aa8386d9-1397-4a7f-9ace-37696d683da6" Feb 13 20:54:57.730021 kubelet[3429]: E0213 20:54:57.729440 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b6xth_kube-system(ed2abc63-0eb6-4122-b8d3-cd7022d17802)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b6xth_kube-system(ed2abc63-0eb6-4122-b8d3-cd7022d17802)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b6xth" podUID="ed2abc63-0eb6-4122-b8d3-cd7022d17802" Feb 13 20:54:57.730021 kubelet[3429]: E0213 20:54:57.729534 3429 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:57.730021 kubelet[3429]: E0213 20:54:57.729551 3429 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lkxpm" Feb 13 20:54:57.730103 kubelet[3429]: E0213 20:54:57.729575 3429 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lkxpm" Feb 13 20:54:57.730103 kubelet[3429]: E0213 20:54:57.729589 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lkxpm_kube-system(dc173c30-2906-4734-85ec-0b16586ce47f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lkxpm_kube-system(dc173c30-2906-4734-85ec-0b16586ce47f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lkxpm" podUID="dc173c30-2906-4734-85ec-0b16586ce47f" Feb 13 20:54:57.730598 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b-shm.mount: Deactivated successfully. Feb 13 20:54:57.730683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c-shm.mount: Deactivated successfully. Feb 13 20:54:57.730739 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829-shm.mount: Deactivated successfully. Feb 13 20:54:57.730792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1-shm.mount: Deactivated successfully. Feb 13 20:54:57.730845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe-shm.mount: Deactivated successfully. 
Feb 13 20:54:58.368889 containerd[1918]: time="2025-02-13T20:54:58.368850176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqp2c,Uid:33fccad8-e90d-49bb-89c6-670419a141a0,Namespace:calico-system,Attempt:0,}" Feb 13 20:54:58.399134 containerd[1918]: time="2025-02-13T20:54:58.399106971Z" level=error msg="Failed to destroy network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.399322 containerd[1918]: time="2025-02-13T20:54:58.399306823Z" level=error msg="encountered an error cleaning up failed sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.399365 containerd[1918]: time="2025-02-13T20:54:58.399340761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqp2c,Uid:33fccad8-e90d-49bb-89c6-670419a141a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.399508 kubelet[3429]: E0213 20:54:58.399460 3429 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.399508 kubelet[3429]: E0213 20:54:58.399495 3429 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fqp2c" Feb 13 20:54:58.399508 kubelet[3429]: E0213 20:54:58.399508 3429 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fqp2c" Feb 13 20:54:58.399589 kubelet[3429]: E0213 20:54:58.399532 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fqp2c_calico-system(33fccad8-e90d-49bb-89c6-670419a141a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fqp2c_calico-system(33fccad8-e90d-49bb-89c6-670419a141a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fqp2c" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" Feb 13 20:54:58.457190 kubelet[3429]: I0213 20:54:58.457169 3429 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:54:58.457614 containerd[1918]: time="2025-02-13T20:54:58.457589039Z" level=info msg="StopPodSandbox for \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\"" Feb 13 20:54:58.457733 kubelet[3429]: I0213 20:54:58.457720 3429 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:54:58.457800 containerd[1918]: time="2025-02-13T20:54:58.457720178Z" level=info msg="Ensure that sandbox 11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c in task-service has been cleanup successfully" Feb 13 20:54:58.458054 containerd[1918]: time="2025-02-13T20:54:58.458032658Z" level=info msg="StopPodSandbox for \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\"" Feb 13 20:54:58.458185 containerd[1918]: time="2025-02-13T20:54:58.458167163Z" level=info msg="Ensure that sandbox 3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1 in task-service has been cleanup successfully" Feb 13 20:54:58.458403 kubelet[3429]: I0213 20:54:58.458389 3429 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:54:58.458770 containerd[1918]: time="2025-02-13T20:54:58.458744697Z" level=info msg="StopPodSandbox for \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\"" Feb 13 20:54:58.458943 containerd[1918]: time="2025-02-13T20:54:58.458926941Z" level=info msg="Ensure that sandbox 1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829 in task-service has been cleanup successfully" Feb 13 20:54:58.459025 kubelet[3429]: I0213 20:54:58.459010 3429 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:54:58.459382 containerd[1918]: time="2025-02-13T20:54:58.459358967Z" level=info msg="StopPodSandbox for \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\"" Feb 13 20:54:58.459959 containerd[1918]: time="2025-02-13T20:54:58.459893059Z" level=info msg="Ensure that sandbox dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe in task-service has been cleanup successfully" Feb 13 20:54:58.460225 kubelet[3429]: I0213 20:54:58.460207 3429 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:54:58.461290 containerd[1918]: time="2025-02-13T20:54:58.460831169Z" level=info msg="StopPodSandbox for \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\"" Feb 13 20:54:58.461290 containerd[1918]: time="2025-02-13T20:54:58.461074250Z" level=info msg="Ensure that sandbox 20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1 in task-service has been cleanup successfully" Feb 13 20:54:58.463743 kubelet[3429]: I0213 20:54:58.463711 3429 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:54:58.463876 containerd[1918]: time="2025-02-13T20:54:58.463816351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:54:58.464222 containerd[1918]: time="2025-02-13T20:54:58.464200179Z" level=info msg="StopPodSandbox for \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\"" Feb 13 20:54:58.464368 containerd[1918]: time="2025-02-13T20:54:58.464357638Z" level=info msg="Ensure that sandbox eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b in task-service has been cleanup successfully" Feb 13 20:54:58.477506 containerd[1918]: time="2025-02-13T20:54:58.477472049Z" level=error msg="StopPodSandbox for \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\" failed" error="failed to destroy network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.477608 containerd[1918]: time="2025-02-13T20:54:58.477570154Z" level=error msg="StopPodSandbox for \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\" failed" error="failed to destroy network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.477674 kubelet[3429]: E0213 20:54:58.477641 3429 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:54:58.477729 kubelet[3429]: E0213 20:54:58.477678 3429 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:54:58.477760 kubelet[3429]: E0213 20:54:58.477706 3429 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1"} Feb 13 20:54:58.477760 kubelet[3429]: E0213 20:54:58.477702 3429 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c"} Feb 13 20:54:58.477819 containerd[1918]: time="2025-02-13T20:54:58.477724266Z" level=error msg="StopPodSandbox for \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\" failed" error="failed to destroy network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.477850 kubelet[3429]: E0213 20:54:58.477762 3429 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33fccad8-e90d-49bb-89c6-670419a141a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:54:58.477850 kubelet[3429]: E0213 20:54:58.477766 3429 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14070312-726d-4bcd-91eb-341f8e9a1a5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:54:58.477850 kubelet[3429]: E0213 20:54:58.477781 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33fccad8-e90d-49bb-89c6-670419a141a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fqp2c" podUID="33fccad8-e90d-49bb-89c6-670419a141a0" Feb 13 20:54:58.478005 kubelet[3429]: E0213 20:54:58.477784 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14070312-726d-4bcd-91eb-341f8e9a1a5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784664ffb7-4nzlt" podUID="14070312-726d-4bcd-91eb-341f8e9a1a5e" Feb 13 20:54:58.478005 kubelet[3429]: E0213 20:54:58.477805 3429 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:54:58.478005 kubelet[3429]: E0213 20:54:58.477819 3429 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829"} Feb 13 20:54:58.478005 kubelet[3429]: E0213 20:54:58.477832 3429 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc173c30-2906-4734-85ec-0b16586ce47f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:54:58.478163 kubelet[3429]: E0213 20:54:58.477851 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc173c30-2906-4734-85ec-0b16586ce47f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lkxpm" podUID="dc173c30-2906-4734-85ec-0b16586ce47f" Feb 13 20:54:58.478466 containerd[1918]: time="2025-02-13T20:54:58.478453052Z" level=error msg="StopPodSandbox for \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\" failed" error="failed to destroy network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.478537 kubelet[3429]: E0213 20:54:58.478521 3429 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:54:58.478564 kubelet[3429]: E0213 20:54:58.478544 3429 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe"} Feb 13 20:54:58.478591 kubelet[3429]: E0213 20:54:58.478574 3429 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa8386d9-1397-4a7f-9ace-37696d683da6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:54:58.478629 kubelet[3429]: E0213 20:54:58.478591 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa8386d9-1397-4a7f-9ace-37696d683da6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-554f7dd6cb-n9jmw" podUID="aa8386d9-1397-4a7f-9ace-37696d683da6" Feb 13 20:54:58.478994 containerd[1918]: time="2025-02-13T20:54:58.478976588Z" 
level=error msg="StopPodSandbox for \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\" failed" error="failed to destroy network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.479051 kubelet[3429]: E0213 20:54:58.479038 3429 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:54:58.479079 kubelet[3429]: E0213 20:54:58.479055 3429 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1"} Feb 13 20:54:58.479079 kubelet[3429]: E0213 20:54:58.479068 3429 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed2abc63-0eb6-4122-b8d3-cd7022d17802\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:54:58.479125 kubelet[3429]: E0213 20:54:58.479078 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed2abc63-0eb6-4122-b8d3-cd7022d17802\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b6xth" podUID="ed2abc63-0eb6-4122-b8d3-cd7022d17802" Feb 13 20:54:58.482274 containerd[1918]: time="2025-02-13T20:54:58.482259364Z" level=error msg="StopPodSandbox for \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\" failed" error="failed to destroy network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:54:58.482352 kubelet[3429]: E0213 20:54:58.482339 3429 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:54:58.482383 kubelet[3429]: E0213 20:54:58.482356 3429 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b"} Feb 13 20:54:58.482383 kubelet[3429]: E0213 20:54:58.482373 3429 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03b0b730-6f3a-4b02-bedd-65f23a457b35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:54:58.482440 kubelet[3429]: E0213 20:54:58.482384 3429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03b0b730-6f3a-4b02-bedd-65f23a457b35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784664ffb7-z5wx4" podUID="03b0b730-6f3a-4b02-bedd-65f23a457b35" Feb 13 20:55:00.601869 kubelet[3429]: I0213 20:55:00.601845 3429 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:55:03.562702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164955867.mount: Deactivated successfully. Feb 13 20:55:03.584624 containerd[1918]: time="2025-02-13T20:55:03.584575533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:03.584786 containerd[1918]: time="2025-02-13T20:55:03.584762649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:55:03.585137 containerd[1918]: time="2025-02-13T20:55:03.585100503Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:03.585997 containerd[1918]: time="2025-02-13T20:55:03.585952025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:03.586412 containerd[1918]: time="2025-02-13T20:55:03.586370715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.122518007s" Feb 13 20:55:03.586412 containerd[1918]: time="2025-02-13T20:55:03.586386933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:55:03.589791 containerd[1918]: time="2025-02-13T20:55:03.589774757Z" level=info msg="CreateContainer within sandbox \"07c0da706aad620fd3a971e1364b5ad5e8b807c76eed717f74a6a43b1ba5f8fa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:55:03.595491 containerd[1918]: 
time="2025-02-13T20:55:03.595467970Z" level=info msg="CreateContainer within sandbox \"07c0da706aad620fd3a971e1364b5ad5e8b807c76eed717f74a6a43b1ba5f8fa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"82bd89c91dab59b9865ec878e4cc465abd34bf927524b835d3008f3ae8f15fff\"" Feb 13 20:55:03.595761 containerd[1918]: time="2025-02-13T20:55:03.595737976Z" level=info msg="StartContainer for \"82bd89c91dab59b9865ec878e4cc465abd34bf927524b835d3008f3ae8f15fff\"" Feb 13 20:55:03.629792 containerd[1918]: time="2025-02-13T20:55:03.629767716Z" level=info msg="StartContainer for \"82bd89c91dab59b9865ec878e4cc465abd34bf927524b835d3008f3ae8f15fff\" returns successfully" Feb 13 20:55:03.689690 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:55:03.689747 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 20:55:04.510202 kubelet[3429]: I0213 20:55:04.510098 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2q2tv" podStartSLOduration=2.177152202 podStartE2EDuration="18.510057045s" podCreationTimestamp="2025-02-13 20:54:46 +0000 UTC" firstStartedPulling="2025-02-13 20:54:47.253843866 +0000 UTC m=+20.932572533" lastFinishedPulling="2025-02-13 20:55:03.586748713 +0000 UTC m=+37.265477376" observedRunningTime="2025-02-13 20:55:04.509239376 +0000 UTC m=+38.187968149" watchObservedRunningTime="2025-02-13 20:55:04.510057045 +0000 UTC m=+38.188785769" Feb 13 20:55:04.919498 kernel: bpftool[5025]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:55:05.064951 systemd-networkd[1542]: vxlan.calico: Link UP Feb 13 20:55:05.064955 systemd-networkd[1542]: vxlan.calico: Gained carrier Feb 13 20:55:05.481839 kubelet[3429]: I0213 20:55:05.481745 3429 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:55:06.336867 systemd-networkd[1542]: vxlan.calico: Gained IPv6LL Feb 13 20:55:10.364317 containerd[1918]: time="2025-02-13T20:55:10.364223148Z" level=info msg="StopPodSandbox for \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\"" Feb 13 20:55:10.365258 containerd[1918]: time="2025-02-13T20:55:10.364586083Z" level=info msg="StopPodSandbox for \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\"" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.435 [INFO][5174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.435 [INFO][5174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" iface="eth0" netns="/var/run/netns/cni-86ccfdf9-1af0-f6c0-0f05-6de326d7f29e" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.435 [INFO][5174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" iface="eth0" netns="/var/run/netns/cni-86ccfdf9-1af0-f6c0-0f05-6de326d7f29e" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.436 [INFO][5174] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" iface="eth0" netns="/var/run/netns/cni-86ccfdf9-1af0-f6c0-0f05-6de326d7f29e" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.436 [INFO][5174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.436 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.449 [INFO][5209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.449 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.449 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.454 [WARNING][5209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.454 [INFO][5209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.455 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:10.457870 containerd[1918]: 2025-02-13 20:55:10.456 [INFO][5174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:10.458260 containerd[1918]: time="2025-02-13T20:55:10.457991502Z" level=info msg="TearDown network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\" successfully" Feb 13 20:55:10.458260 containerd[1918]: time="2025-02-13T20:55:10.458023872Z" level=info msg="StopPodSandbox for \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\" returns successfully" Feb 13 20:55:10.458590 containerd[1918]: time="2025-02-13T20:55:10.458570881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784664ffb7-z5wx4,Uid:03b0b730-6f3a-4b02-bedd-65f23a457b35,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:55:10.459881 systemd[1]: run-netns-cni\x2d86ccfdf9\x2d1af0\x2df6c0\x2d0f05\x2d6de326d7f29e.mount: Deactivated successfully. Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.434 [INFO][5175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.434 [INFO][5175] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" iface="eth0" netns="/var/run/netns/cni-c5d4aba6-6d69-01a9-c814-1c6b62ee890a" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.434 [INFO][5175] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" iface="eth0" netns="/var/run/netns/cni-c5d4aba6-6d69-01a9-c814-1c6b62ee890a" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.435 [INFO][5175] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" iface="eth0" netns="/var/run/netns/cni-c5d4aba6-6d69-01a9-c814-1c6b62ee890a" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.435 [INFO][5175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.435 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.449 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.449 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.455 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.459 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.459 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.459 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:10.461247 containerd[1918]: 2025-02-13 20:55:10.460 [INFO][5175] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:10.461487 containerd[1918]: time="2025-02-13T20:55:10.461314569Z" level=info msg="TearDown network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\" successfully" Feb 13 20:55:10.461487 containerd[1918]: time="2025-02-13T20:55:10.461329173Z" level=info msg="StopPodSandbox for \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\" returns successfully" Feb 13 20:55:10.461672 containerd[1918]: time="2025-02-13T20:55:10.461636311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554f7dd6cb-n9jmw,Uid:aa8386d9-1397-4a7f-9ace-37696d683da6,Namespace:calico-system,Attempt:1,}" Feb 13 20:55:10.463741 systemd[1]: run-netns-cni\x2dc5d4aba6\x2d6d69\x2d01a9\x2dc814\x2d1c6b62ee890a.mount: Deactivated successfully. Feb 13 20:55:10.516829 systemd-networkd[1542]: cali920a1084f34: Link UP Feb 13 20:55:10.516952 systemd-networkd[1542]: cali920a1084f34: Gained carrier Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.478 [INFO][5238] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0 calico-apiserver-784664ffb7- calico-apiserver 03b0b730-6f3a-4b02-bedd-65f23a457b35 745 0 2025-02-13 20:54:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784664ffb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-f6aaf2d828 calico-apiserver-784664ffb7-z5wx4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali920a1084f34 [] []}} ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-z5wx4" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.479 [INFO][5238] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-z5wx4" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.492 [INFO][5281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" HandleID="k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.499 [INFO][5281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" HandleID="k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-f6aaf2d828", "pod":"calico-apiserver-784664ffb7-z5wx4", "timestamp":"2025-02-13 20:55:10.492510647 +0000 UTC"}, Hostname:"ci-4081.3.1-a-f6aaf2d828", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.499 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.499 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.499 [INFO][5281] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-f6aaf2d828' Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.500 [INFO][5281] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.503 [INFO][5281] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.506 [INFO][5281] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.507 [INFO][5281] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.509 [INFO][5281] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.509 [INFO][5281] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.510 [INFO][5281] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2 Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.512 [INFO][5281] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.515 [INFO][5281] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.65/26] block=192.168.31.64/26 handle="k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.515 [INFO][5281] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.65/26] handle="k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.515 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
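The IPAM records above walk a fixed path: the host's affinity for block 192.168.31.64/26 is confirmed, the block is loaded, and 192.168.31.65/26 is claimed from it. A /26 leaves six host bits, so each block holds 64 addresses (.64 through .127 here). A minimal Go check of that containment with the standard net/netip package (illustrative only, not Calico code):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The affine block the ipam log loads for host ci-4081.3.1-a-f6aaf2d828.
        block := netip.MustParsePrefix("192.168.31.64/26")

        // The address the log reports as claimed for calico-apiserver-784664ffb7-z5wx4.
        ip := netip.MustParseAddr("192.168.31.65")

        // 32-26 = 6 host bits, i.e. 64 addresses per block.
        size := 1 << (32 - block.Bits())

        fmt.Printf("%v contains %v: %v (block size %d)\n",
            block, ip, block.Contains(ip), size)
    }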
Feb 13 20:55:10.521898 containerd[1918]: 2025-02-13 20:55:10.515 [INFO][5281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.65/26] IPv6=[] ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" HandleID="k8s-pod-network.262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.522574 containerd[1918]: 2025-02-13 20:55:10.515 [INFO][5238] cni-plugin/k8s.go 386: Populated endpoint ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-z5wx4" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0", GenerateName:"calico-apiserver-784664ffb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"03b0b730-6f3a-4b02-bedd-65f23a457b35", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784664ffb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"", Pod:"calico-apiserver-784664ffb7-z5wx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali920a1084f34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:10.522574 containerd[1918]: 2025-02-13 20:55:10.516 [INFO][5238] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.65/32] ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-z5wx4" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.522574 containerd[1918]: 2025-02-13 20:55:10.516 [INFO][5238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali920a1084f34 ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-z5wx4" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.522574 containerd[1918]: 2025-02-13 20:55:10.516 [INFO][5238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-z5wx4" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.522574 containerd[1918]: 2025-02-13 20:55:10.517 [INFO][5238] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-z5wx4" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0", GenerateName:"calico-apiserver-784664ffb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"03b0b730-6f3a-4b02-bedd-65f23a457b35", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784664ffb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2", Pod:"calico-apiserver-784664ffb7-z5wx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali920a1084f34", MAC:"02:1b:76:75:07:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:10.522574 containerd[1918]: 2025-02-13 20:55:10.521 [INFO][5238] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-z5wx4" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:10.531309 containerd[1918]: time="2025-02-13T20:55:10.531265382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:55:10.531309 containerd[1918]: time="2025-02-13T20:55:10.531300427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:55:10.531309 containerd[1918]: time="2025-02-13T20:55:10.531307935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:10.531429 containerd[1918]: time="2025-02-13T20:55:10.531354920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:10.531587 systemd-networkd[1542]: cali16317655c3d: Link UP Feb 13 20:55:10.531701 systemd-networkd[1542]: cali16317655c3d: Gained carrier Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.480 [INFO][5247] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0 calico-kube-controllers-554f7dd6cb- calico-system aa8386d9-1397-4a7f-9ace-37696d683da6 744 0 2025-02-13 20:54:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:554f7dd6cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-a-f6aaf2d828 calico-kube-controllers-554f7dd6cb-n9jmw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali16317655c3d [] []}} ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Namespace="calico-system" Pod="calico-kube-controllers-554f7dd6cb-n9jmw" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.480 [INFO][5247] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Namespace="calico-system" Pod="calico-kube-controllers-554f7dd6cb-n9jmw" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.492 [INFO][5282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" HandleID="k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.499 [INFO][5282] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" HandleID="k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-f6aaf2d828", "pod":"calico-kube-controllers-554f7dd6cb-n9jmw", "timestamp":"2025-02-13 20:55:10.492964343 +0000 UTC"}, Hostname:"ci-4081.3.1-a-f6aaf2d828", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.499 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.515 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
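The interleaved containerd daemon messages (time="..." level=info msg="...") are logfmt-style key/value lines, with inner quotes escaped as \". A rough Go extractor for the three fields — a sketch only, since real logfmt quoting is looser than this regexp assumes:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the time/level/msg fields of a containerd daemon log line.
    // msg values may contain escaped quotes (\"), which (?:\\.|[^"\\])* allows.
    var containerdLine = regexp.MustCompile(
        `time="([^"]+)" level=(\w+) msg="((?:\\.|[^"\\])*)"`)

    func main() {
        line := `time="2025-02-13T20:55:10.531265382Z" level=info ` +
            `msg="loading plugin \"io.containerd.event.v1.publisher\"..."`

        if m := containerdLine.FindStringSubmatch(line); m != nil {
            fmt.Println("time: ", m[1])
            fmt.Println("level:", m[2])
            fmt.Println("msg:  ", m[3])
        }
    }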
Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.515 [INFO][5282] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-f6aaf2d828' Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.516 [INFO][5282] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.518 [INFO][5282] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.521 [INFO][5282] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.522 [INFO][5282] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.523 [INFO][5282] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.523 [INFO][5282] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.524 [INFO][5282] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46 Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.526 [INFO][5282] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.529 [INFO][5282] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.66/26] block=192.168.31.64/26 handle="k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.529 [INFO][5282] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.66/26] handle="k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.529 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
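Each assignment above repeats the same narrated sequence: acquire the host-wide IPAM lock, confirm the host's block affinity, claim the next free address, create a handle, and write the block back — which is why the four pods come out as .65, .66, .67 and .68 in arrival order. A self-contained toy model of that flow (all names invented for illustration; this is not Calico's ipam.go):

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // Toy model of the flow the ipam log lines narrate: take the host-wide
    // lock, walk the host's affine block, claim the next free address, and
    // record a handle for it.
    type block struct {
        prefix netip.Prefix
        used   map[netip.Addr]string // addr -> handle ID
    }

    var ipamLock sync.Mutex

    func (b *block) claimNextFree(handleID string) (netip.Addr, bool) {
        ipamLock.Lock() // "Acquired host-wide IPAM lock."
        defer ipamLock.Unlock()

        // Skip the network address itself, then take the first free slot.
        for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; !taken {
                b.used[a] = handleID // "Creating new handle" + "Writing block"
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{
            prefix: netip.MustParsePrefix("192.168.31.64/26"),
            used:   map[netip.Addr]string{},
        }
        // Truncated sandbox IDs from the log; real handles embed the full ID.
        for _, h := range []string{"262150e5", "ce27ef43", "1308f0c8", "d7832ff1"} {
            addr, _ := b.claimNextFree("k8s-pod-network." + h)
            fmt.Println(h, "->", addr) // .65, .66, .67, .68 in order
        }
    }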
Feb 13 20:55:10.536792 containerd[1918]: 2025-02-13 20:55:10.529 [INFO][5282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.66/26] IPv6=[] ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" HandleID="k8s-pod-network.ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.537196 containerd[1918]: 2025-02-13 20:55:10.530 [INFO][5247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Namespace="calico-system" Pod="calico-kube-controllers-554f7dd6cb-n9jmw" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0", GenerateName:"calico-kube-controllers-554f7dd6cb-", Namespace:"calico-system", SelfLink:"", UID:"aa8386d9-1397-4a7f-9ace-37696d683da6", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"554f7dd6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"", Pod:"calico-kube-controllers-554f7dd6cb-n9jmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16317655c3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:10.537196 containerd[1918]: 2025-02-13 20:55:10.530 [INFO][5247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.66/32] ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Namespace="calico-system" Pod="calico-kube-controllers-554f7dd6cb-n9jmw" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.537196 containerd[1918]: 2025-02-13 20:55:10.530 [INFO][5247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16317655c3d ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Namespace="calico-system" Pod="calico-kube-controllers-554f7dd6cb-n9jmw" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.537196 containerd[1918]: 2025-02-13 20:55:10.531 [INFO][5247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Namespace="calico-system" Pod="calico-kube-controllers-554f7dd6cb-n9jmw" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.537196 
containerd[1918]: 2025-02-13 20:55:10.531 [INFO][5247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Namespace="calico-system" Pod="calico-kube-controllers-554f7dd6cb-n9jmw" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0", GenerateName:"calico-kube-controllers-554f7dd6cb-", Namespace:"calico-system", SelfLink:"", UID:"aa8386d9-1397-4a7f-9ace-37696d683da6", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"554f7dd6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46", Pod:"calico-kube-controllers-554f7dd6cb-n9jmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16317655c3d", MAC:"22:36:9a:c7:6c:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:10.537196 containerd[1918]: 2025-02-13 20:55:10.535 [INFO][5247] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46" Namespace="calico-system" Pod="calico-kube-controllers-554f7dd6cb-n9jmw" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:10.546019 containerd[1918]: time="2025-02-13T20:55:10.545971826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:55:10.546019 containerd[1918]: time="2025-02-13T20:55:10.545999280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:55:10.546019 containerd[1918]: time="2025-02-13T20:55:10.546005969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:10.546131 containerd[1918]: time="2025-02-13T20:55:10.546050291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:10.571418 containerd[1918]: time="2025-02-13T20:55:10.571398524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784664ffb7-z5wx4,Uid:03b0b730-6f3a-4b02-bedd-65f23a457b35,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2\"" Feb 13 20:55:10.572055 containerd[1918]: time="2025-02-13T20:55:10.572041099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554f7dd6cb-n9jmw,Uid:aa8386d9-1397-4a7f-9ace-37696d683da6,Namespace:calico-system,Attempt:1,} returns sandbox id \"ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46\"" Feb 13 20:55:10.572118 containerd[1918]: time="2025-02-13T20:55:10.572108545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:55:11.364934 containerd[1918]: time="2025-02-13T20:55:11.364803597Z" level=info msg="StopPodSandbox for \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\"" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.404 [INFO][5439] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.404 [INFO][5439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" iface="eth0" netns="/var/run/netns/cni-f2698e6c-6361-ccbd-ea96-2aa4f2ac080a" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.404 [INFO][5439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" iface="eth0" netns="/var/run/netns/cni-f2698e6c-6361-ccbd-ea96-2aa4f2ac080a" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.404 [INFO][5439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" iface="eth0" netns="/var/run/netns/cni-f2698e6c-6361-ccbd-ea96-2aa4f2ac080a" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.404 [INFO][5439] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.405 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.416 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.416 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.416 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.420 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.420 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.421 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:11.423477 containerd[1918]: 2025-02-13 20:55:11.422 [INFO][5439] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:11.423829 containerd[1918]: time="2025-02-13T20:55:11.423548551Z" level=info msg="TearDown network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\" successfully" Feb 13 20:55:11.423829 containerd[1918]: time="2025-02-13T20:55:11.423568107Z" level=info msg="StopPodSandbox for \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\" returns successfully" Feb 13 20:55:11.423991 containerd[1918]: time="2025-02-13T20:55:11.423976659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqp2c,Uid:33fccad8-e90d-49bb-89c6-670419a141a0,Namespace:calico-system,Attempt:1,}" Feb 13 20:55:11.461151 systemd[1]: run-netns-cni\x2df2698e6c\x2d6361\x2dccbd\x2dea96\x2d2aa4f2ac080a.mount: Deactivated successfully. Feb 13 20:55:11.503807 systemd-networkd[1542]: cali299c7c4128f: Link UP Feb 13 20:55:11.504004 systemd-networkd[1542]: cali299c7c4128f: Gained carrier Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.454 [INFO][5469] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0 csi-node-driver- calico-system 33fccad8-e90d-49bb-89c6-670419a141a0 756 0 2025-02-13 20:54:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-a-f6aaf2d828 csi-node-driver-fqp2c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali299c7c4128f [] []}} ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Namespace="calico-system" Pod="csi-node-driver-fqp2c" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.454 [INFO][5469] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Namespace="calico-system" Pod="csi-node-driver-fqp2c" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.471 [INFO][5486] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" 
HandleID="k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.477 [INFO][5486] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" HandleID="k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361d10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-f6aaf2d828", "pod":"csi-node-driver-fqp2c", "timestamp":"2025-02-13 20:55:11.471706831 +0000 UTC"}, Hostname:"ci-4081.3.1-a-f6aaf2d828", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.477 [INFO][5486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.477 [INFO][5486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.477 [INFO][5486] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-f6aaf2d828' Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.478 [INFO][5486] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.480 [INFO][5486] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.482 [INFO][5486] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.483 [INFO][5486] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.485 [INFO][5486] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.485 [INFO][5486] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.486 [INFO][5486] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790 Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.497 [INFO][5486] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.501 [INFO][5486] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.67/26] block=192.168.31.64/26 handle="k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.501 [INFO][5486] 
ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.67/26] handle="k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.501 [INFO][5486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:11.512300 containerd[1918]: 2025-02-13 20:55:11.501 [INFO][5486] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.67/26] IPv6=[] ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" HandleID="k8s-pod-network.1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.513356 containerd[1918]: 2025-02-13 20:55:11.502 [INFO][5469] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Namespace="calico-system" Pod="csi-node-driver-fqp2c" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33fccad8-e90d-49bb-89c6-670419a141a0", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"", Pod:"csi-node-driver-fqp2c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali299c7c4128f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:11.513356 containerd[1918]: 2025-02-13 20:55:11.502 [INFO][5469] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.67/32] ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Namespace="calico-system" Pod="csi-node-driver-fqp2c" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.513356 containerd[1918]: 2025-02-13 20:55:11.502 [INFO][5469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali299c7c4128f ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Namespace="calico-system" Pod="csi-node-driver-fqp2c" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.513356 containerd[1918]: 2025-02-13 20:55:11.504 [INFO][5469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Namespace="calico-system" 
Pod="csi-node-driver-fqp2c" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.513356 containerd[1918]: 2025-02-13 20:55:11.504 [INFO][5469] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Namespace="calico-system" Pod="csi-node-driver-fqp2c" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33fccad8-e90d-49bb-89c6-670419a141a0", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790", Pod:"csi-node-driver-fqp2c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali299c7c4128f", MAC:"0a:2e:22:6e:1a:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:11.513356 containerd[1918]: 2025-02-13 20:55:11.511 [INFO][5469] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790" Namespace="calico-system" Pod="csi-node-driver-fqp2c" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:11.522889 containerd[1918]: time="2025-02-13T20:55:11.522646461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:55:11.522889 containerd[1918]: time="2025-02-13T20:55:11.522879856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:55:11.522889 containerd[1918]: time="2025-02-13T20:55:11.522888120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:11.522995 containerd[1918]: time="2025-02-13T20:55:11.522957179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:11.543059 containerd[1918]: time="2025-02-13T20:55:11.543037008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqp2c,Uid:33fccad8-e90d-49bb-89c6-670419a141a0,Namespace:calico-system,Attempt:1,} returns sandbox id \"1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790\"" Feb 13 20:55:12.032890 systemd-networkd[1542]: cali16317655c3d: Gained IPv6LL Feb 13 20:55:12.364685 containerd[1918]: time="2025-02-13T20:55:12.364553914Z" level=info msg="StopPodSandbox for \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\"" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.390 [INFO][5584] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.391 [INFO][5584] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" iface="eth0" netns="/var/run/netns/cni-d02b2a33-d85f-9c4b-f547-66cb72554752" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.391 [INFO][5584] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" iface="eth0" netns="/var/run/netns/cni-d02b2a33-d85f-9c4b-f547-66cb72554752" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.391 [INFO][5584] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" iface="eth0" netns="/var/run/netns/cni-d02b2a33-d85f-9c4b-f547-66cb72554752" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.391 [INFO][5584] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.391 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.401 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.402 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.402 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.405 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.405 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.406 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:12.408197 containerd[1918]: 2025-02-13 20:55:12.407 [INFO][5584] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:12.408929 containerd[1918]: time="2025-02-13T20:55:12.408285776Z" level=info msg="TearDown network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\" successfully" Feb 13 20:55:12.408929 containerd[1918]: time="2025-02-13T20:55:12.408309845Z" level=info msg="StopPodSandbox for \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\" returns successfully" Feb 13 20:55:12.408929 containerd[1918]: time="2025-02-13T20:55:12.408716655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784664ffb7-4nzlt,Uid:14070312-726d-4bcd-91eb-341f8e9a1a5e,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:55:12.410128 systemd[1]: run-netns-cni\x2dd02b2a33\x2dd85f\x2d9c4b\x2df547\x2d66cb72554752.mount: Deactivated successfully. 
Feb 13 20:55:12.465835 systemd-networkd[1542]: cali4a3cdd3bd47: Link UP Feb 13 20:55:12.465947 systemd-networkd[1542]: cali4a3cdd3bd47: Gained carrier Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.429 [INFO][5613] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0 calico-apiserver-784664ffb7- calico-apiserver 14070312-726d-4bcd-91eb-341f8e9a1a5e 764 0 2025-02-13 20:54:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784664ffb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-f6aaf2d828 calico-apiserver-784664ffb7-4nzlt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a3cdd3bd47 [] []}} ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-4nzlt" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.429 [INFO][5613] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-4nzlt" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.444 [INFO][5635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" HandleID="k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.450 [INFO][5635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" HandleID="k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365a00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-f6aaf2d828", "pod":"calico-apiserver-784664ffb7-4nzlt", "timestamp":"2025-02-13 20:55:12.444839408 +0000 UTC"}, Hostname:"ci-4081.3.1-a-f6aaf2d828", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.450 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.450 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
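The "Gained IPv6LL" events mark each cali interface acquiring an IPv6 link-local address. Under classic EUI-64 addressing that address is a pure function of the MAC logged in the endpoint record (invert the universal/local bit, splice ff:fe into the middle, prefix fe80::/64); a kernel configured for stable-privacy addressing derives it differently, so treat this as the textbook rule only:

    package main

    import (
        "fmt"
        "net"
        "net/netip"
    )

    // eui64LinkLocal derives the classic EUI-64 link-local address for a
    // 48-bit MAC: fe80:: with the MAC split by ff:fe and bit 1 of the
    // first octet (universal/local) inverted.
    func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
        var b [16]byte
        b[0], b[1] = 0xfe, 0x80
        copy(b[8:11], mac[0:3])
        b[11], b[12] = 0xff, 0xfe
        copy(b[13:16], mac[3:6])
        b[8] ^= 0x02 // flip the universal/local bit
        return netip.AddrFrom16(b)
    }

    func main() {
        // MAC assigned to cali4a3cdd3bd47 in the endpoint record above.
        mac, _ := net.ParseMAC("6a:64:8d:0e:66:49")
        fmt.Println(eui64LinkLocal(mac)) // fe80::6864:8dff:fe0e:6649
    }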
Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.450 [INFO][5635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-f6aaf2d828' Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.451 [INFO][5635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.453 [INFO][5635] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.455 [INFO][5635] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.456 [INFO][5635] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.457 [INFO][5635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.457 [INFO][5635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.458 [INFO][5635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61 Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.460 [INFO][5635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.464 [INFO][5635] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.68/26] block=192.168.31.64/26 handle="k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.464 [INFO][5635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.68/26] handle="k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.464 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
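One detail worth decoding in these records: the Workload= field doubles every literal dash in the embedded names — compare Hostname:"ci-4081.3.1-a-f6aaf2d828" with Workload="ci--4081.3.1--a--f6aaf2d828-k8s-..." — so single dashes can act as separators between node, orchestrator, pod and interface. A decoder inferred purely from that observed pattern (hedged: odd-length dash runs would confuse it):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitWorkload splits a Workload= value on single dashes while
    // treating doubled dashes as escaped literals, per the pattern
    // visible in the log above.
    func splitWorkload(w string) []string {
        const marker = "\x00"
        protected := strings.ReplaceAll(w, "--", marker)
        parts := strings.Split(protected, "-")
        for i, p := range parts {
            parts[i] = strings.ReplaceAll(p, marker, "-")
        }
        return parts
    }

    func main() {
        w := "ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0"
        fmt.Printf("%q\n", splitWorkload(w))
        // ["ci-4081.3.1-a-f6aaf2d828" "k8s" "csi-node-driver-fqp2c" "eth0"]
    }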
Feb 13 20:55:12.470844 containerd[1918]: 2025-02-13 20:55:12.464 [INFO][5635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.68/26] IPv6=[] ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" HandleID="k8s-pod-network.d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.471229 containerd[1918]: 2025-02-13 20:55:12.465 [INFO][5613] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-4nzlt" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0", GenerateName:"calico-apiserver-784664ffb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"14070312-726d-4bcd-91eb-341f8e9a1a5e", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784664ffb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"", Pod:"calico-apiserver-784664ffb7-4nzlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a3cdd3bd47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:12.471229 containerd[1918]: 2025-02-13 20:55:12.465 [INFO][5613] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.68/32] ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-4nzlt" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.471229 containerd[1918]: 2025-02-13 20:55:12.465 [INFO][5613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a3cdd3bd47 ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-4nzlt" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.471229 containerd[1918]: 2025-02-13 20:55:12.465 [INFO][5613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-4nzlt" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.471229 containerd[1918]: 2025-02-13 20:55:12.466 [INFO][5613] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-4nzlt" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0", GenerateName:"calico-apiserver-784664ffb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"14070312-726d-4bcd-91eb-341f8e9a1a5e", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784664ffb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61", Pod:"calico-apiserver-784664ffb7-4nzlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a3cdd3bd47", MAC:"6a:64:8d:0e:66:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:12.471229 containerd[1918]: 2025-02-13 20:55:12.470 [INFO][5613] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61" Namespace="calico-apiserver" Pod="calico-apiserver-784664ffb7-4nzlt" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:12.479876 containerd[1918]: time="2025-02-13T20:55:12.479801930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:55:12.479876 containerd[1918]: time="2025-02-13T20:55:12.479833425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:55:12.479876 containerd[1918]: time="2025-02-13T20:55:12.479840491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:12.480001 containerd[1918]: time="2025-02-13T20:55:12.479884438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:12.480514 systemd-networkd[1542]: cali920a1084f34: Gained IPv6LL Feb 13 20:55:12.527354 containerd[1918]: time="2025-02-13T20:55:12.527336228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784664ffb7-4nzlt,Uid:14070312-726d-4bcd-91eb-341f8e9a1a5e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61\"" Feb 13 20:55:13.248526 systemd-networkd[1542]: cali299c7c4128f: Gained IPv6LL Feb 13 20:55:13.363406 containerd[1918]: time="2025-02-13T20:55:13.363382530Z" level=info msg="StopPodSandbox for \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\"" Feb 13 20:55:13.363514 containerd[1918]: time="2025-02-13T20:55:13.363382530Z" level=info msg="StopPodSandbox for \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\"" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.386 [INFO][5741] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.386 [INFO][5741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" iface="eth0" netns="/var/run/netns/cni-c4ee69d0-3158-c003-98bd-d5da8b7e122e" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.387 [INFO][5741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" iface="eth0" netns="/var/run/netns/cni-c4ee69d0-3158-c003-98bd-d5da8b7e122e" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.387 [INFO][5741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" iface="eth0" netns="/var/run/netns/cni-c4ee69d0-3158-c003-98bd-d5da8b7e122e" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.387 [INFO][5741] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.387 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.397 [INFO][5773] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.397 [INFO][5773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.397 [INFO][5773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.401 [WARNING][5773] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.401 [INFO][5773] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.402 [INFO][5773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:13.403290 containerd[1918]: 2025-02-13 20:55:13.402 [INFO][5741] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:13.403572 containerd[1918]: time="2025-02-13T20:55:13.403356396Z" level=info msg="TearDown network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\" successfully" Feb 13 20:55:13.403572 containerd[1918]: time="2025-02-13T20:55:13.403377257Z" level=info msg="StopPodSandbox for \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\" returns successfully" Feb 13 20:55:13.403730 containerd[1918]: time="2025-02-13T20:55:13.403692180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lkxpm,Uid:dc173c30-2906-4734-85ec-0b16586ce47f,Namespace:kube-system,Attempt:1,}" Feb 13 20:55:13.404007 containerd[1918]: time="2025-02-13T20:55:13.403993833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:13.405080 containerd[1918]: time="2025-02-13T20:55:13.405060957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:55:13.405118 systemd[1]: run-netns-cni\x2dc4ee69d0\x2d3158\x2dc003\x2d98bd\x2dd5da8b7e122e.mount: Deactivated successfully. 
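Note how tolerant the teardown path above is: release is attempted first by handle ID, then by workload ID, and an allocation that is already gone is logged as a WARNING and ignored rather than failed, so a repeated CNI DEL for the same sandbox stays safe. A minimal sketch of that idempotent-release pattern (the `store` type and method names are invented for illustration, not Calico's API):

```go
// Idempotent release-by-handle, mirroring the WARNING above: a missing
// allocation is ignored, not surfaced as an error, so CNI DEL can be
// retried freely. Invented types; not Calico's API.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

type store struct{ byHandle map[string]string } // handle -> assigned IP

func (s *store) release(handle string) (string, error) {
	ip, ok := s.byHandle[handle]
	if !ok {
		return "", errNotFound
	}
	delete(s.byHandle, handle)
	return ip, nil
}

// releaseIdempotent tries the handle ID, falls back to the workload ID,
// and swallows "not found" at every step.
func releaseIdempotent(s *store, handleID, workloadID string) error {
	for _, h := range []string{handleID, workloadID} {
		_, err := s.release(h)
		if err == nil {
			return nil
		}
		if !errors.Is(err, errNotFound) {
			return err
		}
		fmt.Printf("release %q: address doesn't exist, ignoring\n", h)
	}
	return nil // nothing left to release still counts as success
}

func main() {
	s := &store{byHandle: map[string]string{}}
	// A second DEL for an already-released sandbox: no error.
	fmt.Println(releaseIdempotent(s, "k8s-pod-network.1699588569", "coredns-7db6d8ff4d-lkxpm"))
}
```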
Feb 13 20:55:13.405469 containerd[1918]: time="2025-02-13T20:55:13.405457682Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:13.407068 containerd[1918]: time="2025-02-13T20:55:13.407055385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:13.407402 containerd[1918]: time="2025-02-13T20:55:13.407388361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.835264468s" Feb 13 20:55:13.407439 containerd[1918]: time="2025-02-13T20:55:13.407406653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:55:13.407857 containerd[1918]: time="2025-02-13T20:55:13.407845529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.386 [INFO][5740] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.386 [INFO][5740] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" iface="eth0" netns="/var/run/netns/cni-e9561cd0-0d2f-1743-e6c0-4b842ca0d80f" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.386 [INFO][5740] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" iface="eth0" netns="/var/run/netns/cni-e9561cd0-0d2f-1743-e6c0-4b842ca0d80f" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.386 [INFO][5740] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" iface="eth0" netns="/var/run/netns/cni-e9561cd0-0d2f-1743-e6c0-4b842ca0d80f" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.386 [INFO][5740] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.386 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.397 [INFO][5772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.397 [INFO][5772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.402 [INFO][5772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.405 [WARNING][5772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.405 [INFO][5772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.406 [INFO][5772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:13.408520 containerd[1918]: 2025-02-13 20:55:13.407 [INFO][5740] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:13.408895 containerd[1918]: time="2025-02-13T20:55:13.408537573Z" level=info msg="CreateContainer within sandbox \"262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:55:13.408895 containerd[1918]: time="2025-02-13T20:55:13.408601732Z" level=info msg="TearDown network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\" successfully" Feb 13 20:55:13.408895 containerd[1918]: time="2025-02-13T20:55:13.408612171Z" level=info msg="StopPodSandbox for \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\" returns successfully" Feb 13 20:55:13.408895 containerd[1918]: time="2025-02-13T20:55:13.408824371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b6xth,Uid:ed2abc63-0eb6-4122-b8d3-cd7022d17802,Namespace:kube-system,Attempt:1,}" Feb 13 20:55:13.416035 containerd[1918]: time="2025-02-13T20:55:13.415982575Z" level=info msg="CreateContainer within sandbox \"262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"93a920cb503a3d0ae647bcfced6836ec6cd210e3209a65ae86bb1d87f772ce74\"" Feb 13 20:55:13.416295 containerd[1918]: time="2025-02-13T20:55:13.416281396Z" level=info msg="StartContainer for \"93a920cb503a3d0ae647bcfced6836ec6cd210e3209a65ae86bb1d87f772ce74\"" Feb 13 20:55:13.459596 containerd[1918]: time="2025-02-13T20:55:13.459572207Z" level=info msg="StartContainer for \"93a920cb503a3d0ae647bcfced6836ec6cd210e3209a65ae86bb1d87f772ce74\" returns successfully" Feb 13 20:55:13.460503 systemd-networkd[1542]: calia34a8d326ed: Link UP Feb 13 20:55:13.460639 systemd-networkd[1542]: calia34a8d326ed: Gained carrier Feb 13 20:55:13.461719 systemd[1]: run-netns-cni\x2de9561cd0\x2d0d2f\x2d1743\x2de6c0\x2d4b842ca0d80f.mount: Deactivated successfully. 
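A side note on the `run-netns-cni\x2d...mount` lines from systemd: mount unit names encode the mount point with `/` mapped to `-` and a literal `-` escaped as `\x2d`, so these units correspond to the `/run/netns/cni-...` namespace mounts seen in the CNI teardown entries (`/var/run` is a symlink to `/run` here), and "Deactivated successfully" confirms each torn-down sandbox's netns mount was released. A small decoder, as a sketch that handles only this escape form:

```go
// Decode a systemd mount unit name back to its mount point:
// an unescaped '-' separates path components, "\x2d" is a literal '-'.
// Sketch only; real systemd escaping covers more cases.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); {
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			if n, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n)) // "\x2d" -> literal '-'
				i += 4
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/') // unescaped '-' separates path components
		} else {
			b.WriteByte(name[i])
		}
		i++
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2de9561cd0\x2d0d2f\x2d1743\x2de6c0\x2d4b842ca0d80f.mount`
	fmt.Println(unitToPath(unit)) // /run/netns/cni-e9561cd0-0d2f-1743-e6c0-4b842ca0d80f
}
```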
Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.426 [INFO][5810] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0 coredns-7db6d8ff4d- kube-system dc173c30-2906-4734-85ec-0b16586ce47f 773 0 2025-02-13 20:54:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-f6aaf2d828 coredns-7db6d8ff4d-lkxpm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia34a8d326ed [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lkxpm" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.426 [INFO][5810] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lkxpm" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.440 [INFO][5879] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" HandleID="k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.445 [INFO][5879] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" HandleID="k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042def0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-f6aaf2d828", "pod":"coredns-7db6d8ff4d-lkxpm", "timestamp":"2025-02-13 20:55:13.440820023 +0000 UTC"}, Hostname:"ci-4081.3.1-a-f6aaf2d828", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.445 [INFO][5879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.445 [INFO][5879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.445 [INFO][5879] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-f6aaf2d828' Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.446 [INFO][5879] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.448 [INFO][5879] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.451 [INFO][5879] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.452 [INFO][5879] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.453 [INFO][5879] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.453 [INFO][5879] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.454 [INFO][5879] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.455 [INFO][5879] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.458 [INFO][5879] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.69/26] block=192.168.31.64/26 handle="k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.458 [INFO][5879] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.69/26] handle="k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.458 [INFO][5879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
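The WorkloadEndpoint dumps that follow are Go-syntax struct prints, which is why the ports appear in hex: 0x35 is 53 (dns and dns-tcp) and 0x23c1 is 9153 (the CoreDNS metrics port). Each endpoint is also dumped twice: first "Populated endpoint" with an empty ContainerID and MAC, then again at k8s.go 414 once the veth exists and both are known, just before "Wrote updated endpoint to datastore". A heavily reduced sketch of that two-phase fill (trimmed, invented type; not the real v3.WorkloadEndpoint):

```go
// Two-phase endpoint construction, as the paired dumps below show:
// identity first, then dataplane facts (MAC, container ID) once the
// veth exists, then persist. Reduced illustrative type only.
package main

import "fmt"

type workloadEndpoint struct {
	Pod, Node, InterfaceName string
	IPNetworks               []string
	ContainerID, MAC         string // empty until the veth is created
}

// populate corresponds to "Populated endpoint": no ContainerID, no MAC.
func populate(pod, node, iface string, ips []string) workloadEndpoint {
	return workloadEndpoint{Pod: pod, Node: node, InterfaceName: iface, IPNetworks: ips}
}

// attachDataplane corresponds to "Added Mac, interface name, and active
// container ID to endpoint".
func attachDataplane(ep *workloadEndpoint, containerID, mac string) {
	ep.ContainerID, ep.MAC = containerID, mac
}

func main() {
	ep := populate("coredns-7db6d8ff4d-lkxpm", "ci-4081.3.1-a-f6aaf2d828",
		"calia34a8d326ed", []string{"192.168.31.69/32"})
	attachDataplane(&ep, "2043baaa55f21219", "36:26:5b:ed:6e:65")
	fmt.Printf("%+v\n", ep) // then "Wrote updated endpoint to datastore"
}
```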
Feb 13 20:55:13.466057 containerd[1918]: 2025-02-13 20:55:13.458 [INFO][5879] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.69/26] IPv6=[] ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" HandleID="k8s-pod-network.2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.466507 containerd[1918]: 2025-02-13 20:55:13.459 [INFO][5810] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lkxpm" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dc173c30-2906-4734-85ec-0b16586ce47f", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"", Pod:"coredns-7db6d8ff4d-lkxpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia34a8d326ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:13.466507 containerd[1918]: 2025-02-13 20:55:13.459 [INFO][5810] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.69/32] ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lkxpm" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.466507 containerd[1918]: 2025-02-13 20:55:13.459 [INFO][5810] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia34a8d326ed ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lkxpm" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.466507 containerd[1918]: 2025-02-13 20:55:13.460 [INFO][5810] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lkxpm" 
WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.466507 containerd[1918]: 2025-02-13 20:55:13.460 [INFO][5810] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lkxpm" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dc173c30-2906-4734-85ec-0b16586ce47f", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae", Pod:"coredns-7db6d8ff4d-lkxpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia34a8d326ed", MAC:"36:26:5b:ed:6e:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:13.466507 containerd[1918]: 2025-02-13 20:55:13.465 [INFO][5810] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lkxpm" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:13.475885 containerd[1918]: time="2025-02-13T20:55:13.475834640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:55:13.476032 systemd-networkd[1542]: cali9c793e1c8f8: Link UP Feb 13 20:55:13.476082 containerd[1918]: time="2025-02-13T20:55:13.476058800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:55:13.476082 containerd[1918]: time="2025-02-13T20:55:13.476070408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:13.476138 containerd[1918]: time="2025-02-13T20:55:13.476125046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:13.476183 systemd-networkd[1542]: cali9c793e1c8f8: Gained carrier Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.432 [INFO][5832] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0 coredns-7db6d8ff4d- kube-system ed2abc63-0eb6-4122-b8d3-cd7022d17802 772 0 2025-02-13 20:54:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-f6aaf2d828 coredns-7db6d8ff4d-b6xth eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c793e1c8f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b6xth" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.432 [INFO][5832] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b6xth" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.447 [INFO][5899] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" HandleID="k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.451 [INFO][5899] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" HandleID="k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000549aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-f6aaf2d828", "pod":"coredns-7db6d8ff4d-b6xth", "timestamp":"2025-02-13 20:55:13.447543291 +0000 UTC"}, Hostname:"ci-4081.3.1-a-f6aaf2d828", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.451 [INFO][5899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.458 [INFO][5899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.458 [INFO][5899] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-f6aaf2d828' Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.460 [INFO][5899] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.462 [INFO][5899] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.465 [INFO][5899] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.466 [INFO][5899] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.467 [INFO][5899] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.467 [INFO][5899] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.468 [INFO][5899] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.471 [INFO][5899] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.474 [INFO][5899] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.70/26] block=192.168.31.64/26 handle="k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.474 [INFO][5899] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.70/26] handle="k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" host="ci-4081.3.1-a-f6aaf2d828" Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.474 [INFO][5899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
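Every assignment in this section lands in the same node-affine block: 192.168.31.68 for the apiserver pod, then .69 and .70 for the two coredns pods, which is exactly the fast path that "Trying affinity for 192.168.31.64/26" refers to. A /26 gives the node 64 addresses before a further block would have to be claimed; a quick standard-library check of that arithmetic:

```go
// Capacity of the node's affine block: a /26 holds 2^(32-26) = 64
// addresses, so the .68/.69/.70 assignments above leave ample headroom.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.31.64/26")
	ones, bits := cidr.Mask.Size()
	fmt.Printf("block %v: %d addresses\n", cidr, 1<<(bits-ones)) // 64
}
```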
Feb 13 20:55:13.482222 containerd[1918]: 2025-02-13 20:55:13.474 [INFO][5899] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.70/26] IPv6=[] ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" HandleID="k8s-pod-network.ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.482628 containerd[1918]: 2025-02-13 20:55:13.475 [INFO][5832] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b6xth" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ed2abc63-0eb6-4122-b8d3-cd7022d17802", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"", Pod:"coredns-7db6d8ff4d-b6xth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c793e1c8f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:13.482628 containerd[1918]: 2025-02-13 20:55:13.475 [INFO][5832] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.70/32] ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b6xth" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.482628 containerd[1918]: 2025-02-13 20:55:13.475 [INFO][5832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c793e1c8f8 ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b6xth" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.482628 containerd[1918]: 2025-02-13 20:55:13.476 [INFO][5832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b6xth" 
WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.482628 containerd[1918]: 2025-02-13 20:55:13.476 [INFO][5832] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b6xth" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ed2abc63-0eb6-4122-b8d3-cd7022d17802", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e", Pod:"coredns-7db6d8ff4d-b6xth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c793e1c8f8", MAC:"16:14:89:19:51:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:13.482628 containerd[1918]: 2025-02-13 20:55:13.481 [INFO][5832] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b6xth" WorkloadEndpoint="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:13.491387 containerd[1918]: time="2025-02-13T20:55:13.491346219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:55:13.491387 containerd[1918]: time="2025-02-13T20:55:13.491376669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:55:13.491387 containerd[1918]: time="2025-02-13T20:55:13.491383653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:13.491516 containerd[1918]: time="2025-02-13T20:55:13.491430896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:55:13.516509 containerd[1918]: time="2025-02-13T20:55:13.516458687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lkxpm,Uid:dc173c30-2906-4734-85ec-0b16586ce47f,Namespace:kube-system,Attempt:1,} returns sandbox id \"2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae\"" Feb 13 20:55:13.517704 containerd[1918]: time="2025-02-13T20:55:13.517689400Z" level=info msg="CreateContainer within sandbox \"2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:55:13.518110 containerd[1918]: time="2025-02-13T20:55:13.518093970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b6xth,Uid:ed2abc63-0eb6-4122-b8d3-cd7022d17802,Namespace:kube-system,Attempt:1,} returns sandbox id \"ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e\"" Feb 13 20:55:13.519124 containerd[1918]: time="2025-02-13T20:55:13.519112305Z" level=info msg="CreateContainer within sandbox \"ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:55:13.523140 containerd[1918]: time="2025-02-13T20:55:13.523100551Z" level=info msg="CreateContainer within sandbox \"2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5273d9e48cd60cbfd2cd7b49269653eb2cd16f22947b03e72e6daab43ce73140\"" Feb 13 20:55:13.523271 containerd[1918]: time="2025-02-13T20:55:13.523260039Z" level=info msg="StartContainer for \"5273d9e48cd60cbfd2cd7b49269653eb2cd16f22947b03e72e6daab43ce73140\"" Feb 13 20:55:13.523802 containerd[1918]: time="2025-02-13T20:55:13.523759258Z" level=info msg="CreateContainer within sandbox \"ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92bc854dfe780f0a74fe64a2f7f616c4a731a49de966f73c0fb76d06605c4c5b\"" Feb 13 20:55:13.523948 containerd[1918]: time="2025-02-13T20:55:13.523924050Z" level=info msg="StartContainer for \"92bc854dfe780f0a74fe64a2f7f616c4a731a49de966f73c0fb76d06605c4c5b\"" Feb 13 20:55:13.526041 kubelet[3429]: I0213 20:55:13.526010 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-784664ffb7-z5wx4" podStartSLOduration=24.690191775 podStartE2EDuration="27.525996623s" podCreationTimestamp="2025-02-13 20:54:46 +0000 UTC" firstStartedPulling="2025-02-13 20:55:10.571969944 +0000 UTC m=+44.250698607" lastFinishedPulling="2025-02-13 20:55:13.407774792 +0000 UTC m=+47.086503455" observedRunningTime="2025-02-13 20:55:13.52573637 +0000 UTC m=+47.204465039" watchObservedRunningTime="2025-02-13 20:55:13.525996623 +0000 UTC m=+47.204725281" Feb 13 20:55:13.571384 containerd[1918]: time="2025-02-13T20:55:13.571360826Z" level=info msg="StartContainer for \"5273d9e48cd60cbfd2cd7b49269653eb2cd16f22947b03e72e6daab43ce73140\" returns successfully" Feb 13 20:55:13.571476 containerd[1918]: time="2025-02-13T20:55:13.571360838Z" level=info msg="StartContainer for \"92bc854dfe780f0a74fe64a2f7f616c4a731a49de966f73c0fb76d06605c4c5b\" returns successfully" Feb 13 20:55:13.824682 systemd-networkd[1542]: cali4a3cdd3bd47: Gained IPv6LL Feb 13 20:55:14.349499 kubelet[3429]: I0213 20:55:14.349383 3429 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:55:14.516068 kubelet[3429]: I0213 
20:55:14.515981 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lkxpm" podStartSLOduration=33.515950642 podStartE2EDuration="33.515950642s" podCreationTimestamp="2025-02-13 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:55:14.515665662 +0000 UTC m=+48.194394354" watchObservedRunningTime="2025-02-13 20:55:14.515950642 +0000 UTC m=+48.194679322" Feb 13 20:55:14.527539 kubelet[3429]: I0213 20:55:14.527474 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b6xth" podStartSLOduration=33.527452323 podStartE2EDuration="33.527452323s" podCreationTimestamp="2025-02-13 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:55:14.527057301 +0000 UTC m=+48.205785993" watchObservedRunningTime="2025-02-13 20:55:14.527452323 +0000 UTC m=+48.206181002" Feb 13 20:55:14.529363 systemd-networkd[1542]: cali9c793e1c8f8: Gained IPv6LL Feb 13 20:55:14.848729 systemd-networkd[1542]: calia34a8d326ed: Gained IPv6LL Feb 13 20:55:16.196910 containerd[1918]: time="2025-02-13T20:55:16.196860172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:16.197138 containerd[1918]: time="2025-02-13T20:55:16.197071798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:55:16.197352 containerd[1918]: time="2025-02-13T20:55:16.197316108Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:16.198378 containerd[1918]: time="2025-02-13T20:55:16.198338478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:16.198802 containerd[1918]: time="2025-02-13T20:55:16.198761205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.79089697s" Feb 13 20:55:16.198802 containerd[1918]: time="2025-02-13T20:55:16.198776977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:55:16.199344 containerd[1918]: time="2025-02-13T20:55:16.199305948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:55:16.202378 containerd[1918]: time="2025-02-13T20:55:16.202360920Z" level=info msg="CreateContainer within sandbox \"ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:55:16.205838 containerd[1918]: time="2025-02-13T20:55:16.205822274Z" level=info msg="CreateContainer within sandbox 
\"ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6f1fae501ab79da303b453e04af3a28bad162de93842c776f4151b801ff195fe\"" Feb 13 20:55:16.206041 containerd[1918]: time="2025-02-13T20:55:16.206027713Z" level=info msg="StartContainer for \"6f1fae501ab79da303b453e04af3a28bad162de93842c776f4151b801ff195fe\"" Feb 13 20:55:16.254321 containerd[1918]: time="2025-02-13T20:55:16.254297932Z" level=info msg="StartContainer for \"6f1fae501ab79da303b453e04af3a28bad162de93842c776f4151b801ff195fe\" returns successfully" Feb 13 20:55:16.535014 kubelet[3429]: I0213 20:55:16.534745 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-554f7dd6cb-n9jmw" podStartSLOduration=24.90800446 podStartE2EDuration="30.534707988s" podCreationTimestamp="2025-02-13 20:54:46 +0000 UTC" firstStartedPulling="2025-02-13 20:55:10.572455831 +0000 UTC m=+44.251184493" lastFinishedPulling="2025-02-13 20:55:16.199159358 +0000 UTC m=+49.877888021" observedRunningTime="2025-02-13 20:55:16.533618792 +0000 UTC m=+50.212347523" watchObservedRunningTime="2025-02-13 20:55:16.534707988 +0000 UTC m=+50.213436702" Feb 13 20:55:18.382835 containerd[1918]: time="2025-02-13T20:55:18.382771379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:18.383544 containerd[1918]: time="2025-02-13T20:55:18.383528051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:55:18.384038 containerd[1918]: time="2025-02-13T20:55:18.384025532Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:18.385566 containerd[1918]: time="2025-02-13T20:55:18.385528141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:18.385835 containerd[1918]: time="2025-02-13T20:55:18.385813396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.186492587s" Feb 13 20:55:18.385835 containerd[1918]: time="2025-02-13T20:55:18.385829182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:55:18.386517 containerd[1918]: time="2025-02-13T20:55:18.386503318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:55:18.387078 containerd[1918]: time="2025-02-13T20:55:18.387063412Z" level=info msg="CreateContainer within sandbox \"1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:55:18.393097 containerd[1918]: time="2025-02-13T20:55:18.393074228Z" level=info msg="CreateContainer within sandbox \"1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"377b57fc8488b24ed29dc916d7e7817378efe793fe1affc96a104cfb8e1fc625\"" Feb 13 20:55:18.393450 containerd[1918]: time="2025-02-13T20:55:18.393438318Z" level=info msg="StartContainer for \"377b57fc8488b24ed29dc916d7e7817378efe793fe1affc96a104cfb8e1fc625\"" Feb 13 20:55:18.422191 containerd[1918]: time="2025-02-13T20:55:18.422144427Z" level=info msg="StartContainer for \"377b57fc8488b24ed29dc916d7e7817378efe793fe1affc96a104cfb8e1fc625\" returns successfully" Feb 13 20:55:18.883574 containerd[1918]: time="2025-02-13T20:55:18.883550439Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:18.883864 containerd[1918]: time="2025-02-13T20:55:18.883820103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:55:18.885327 containerd[1918]: time="2025-02-13T20:55:18.885305654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 498.765991ms" Feb 13 20:55:18.885362 containerd[1918]: time="2025-02-13T20:55:18.885331903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:55:18.886002 containerd[1918]: time="2025-02-13T20:55:18.885980823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:55:18.886772 containerd[1918]: time="2025-02-13T20:55:18.886712725Z" level=info msg="CreateContainer within sandbox \"d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:55:18.890527 containerd[1918]: time="2025-02-13T20:55:18.890488854Z" level=info msg="CreateContainer within sandbox \"d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff88a93022304cefba728fbc2b79822ca9e4d4ebbdd1246267d91dc31cb6f7a1\"" Feb 13 20:55:18.890753 containerd[1918]: time="2025-02-13T20:55:18.890707500Z" level=info msg="StartContainer for \"ff88a93022304cefba728fbc2b79822ca9e4d4ebbdd1246267d91dc31cb6f7a1\"" Feb 13 20:55:18.940321 containerd[1918]: time="2025-02-13T20:55:18.940302827Z" level=info msg="StartContainer for \"ff88a93022304cefba728fbc2b79822ca9e4d4ebbdd1246267d91dc31cb6f7a1\" returns successfully" Feb 13 20:55:19.531568 kubelet[3429]: I0213 20:55:19.531533 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-784664ffb7-4nzlt" podStartSLOduration=27.173517456 podStartE2EDuration="33.531520294s" podCreationTimestamp="2025-02-13 20:54:46 +0000 UTC" firstStartedPulling="2025-02-13 20:55:12.527869946 +0000 UTC m=+46.206598608" lastFinishedPulling="2025-02-13 20:55:18.88587278 +0000 UTC m=+52.564601446" observedRunningTime="2025-02-13 20:55:19.531154297 +0000 UTC m=+53.209882965" watchObservedRunningTime="2025-02-13 20:55:19.531520294 +0000 UTC m=+53.210248956" Feb 13 20:55:20.767935 containerd[1918]: time="2025-02-13T20:55:20.767883430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:20.768172 containerd[1918]: time="2025-02-13T20:55:20.768080725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:55:20.768468 containerd[1918]: time="2025-02-13T20:55:20.768454910Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:20.769375 containerd[1918]: time="2025-02-13T20:55:20.769361136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:55:20.769782 containerd[1918]: time="2025-02-13T20:55:20.769767597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.883769453s" Feb 13 20:55:20.769831 containerd[1918]: time="2025-02-13T20:55:20.769785673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:55:20.770864 containerd[1918]: time="2025-02-13T20:55:20.770826661Z" level=info msg="CreateContainer within sandbox \"1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:55:20.775349 containerd[1918]: time="2025-02-13T20:55:20.775308018Z" level=info msg="CreateContainer within sandbox \"1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bfce6fff1f97f44268b555dfc618edf8221d2ea329800cc09a78d24403a2bf89\"" Feb 13 20:55:20.775571 containerd[1918]: time="2025-02-13T20:55:20.775529017Z" level=info msg="StartContainer for \"bfce6fff1f97f44268b555dfc618edf8221d2ea329800cc09a78d24403a2bf89\"" Feb 13 20:55:20.824250 containerd[1918]: time="2025-02-13T20:55:20.824221516Z" level=info msg="StartContainer for \"bfce6fff1f97f44268b555dfc618edf8221d2ea329800cc09a78d24403a2bf89\" returns successfully" Feb 13 20:55:21.416213 kubelet[3429]: I0213 20:55:21.415994 3429 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:55:21.416213 kubelet[3429]: I0213 20:55:21.416083 3429 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:55:21.567184 kubelet[3429]: I0213 20:55:21.567062 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fqp2c" podStartSLOduration=26.34047955 podStartE2EDuration="35.567022051s" podCreationTimestamp="2025-02-13 20:54:46 +0000 UTC" firstStartedPulling="2025-02-13 20:55:11.543618205 +0000 UTC m=+45.222346867" lastFinishedPulling="2025-02-13 20:55:20.770160705 +0000 UTC m=+54.448889368" observedRunningTime="2025-02-13 20:55:21.565510787 +0000 UTC 
m=+55.244239528" watchObservedRunningTime="2025-02-13 20:55:21.567022051 +0000 UTC m=+55.245750774" Feb 13 20:55:26.362350 containerd[1918]: time="2025-02-13T20:55:26.362233631Z" level=info msg="StopPodSandbox for \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\"" Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.431 [WARNING][6447] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dc173c30-2906-4734-85ec-0b16586ce47f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae", Pod:"coredns-7db6d8ff4d-lkxpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia34a8d326ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.432 [INFO][6447] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.432 [INFO][6447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" iface="eth0" netns="" Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.432 [INFO][6447] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.432 [INFO][6447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.461 [INFO][6464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.461 [INFO][6464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.461 [INFO][6464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.469 [WARNING][6464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.469 [INFO][6464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.471 [INFO][6464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.474604 containerd[1918]: 2025-02-13 20:55:26.473 [INFO][6447] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:26.475344 containerd[1918]: time="2025-02-13T20:55:26.474644190Z" level=info msg="TearDown network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\" successfully" Feb 13 20:55:26.475344 containerd[1918]: time="2025-02-13T20:55:26.474675187Z" level=info msg="StopPodSandbox for \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\" returns successfully" Feb 13 20:55:26.475344 containerd[1918]: time="2025-02-13T20:55:26.475295217Z" level=info msg="RemovePodSandbox for \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\"" Feb 13 20:55:26.475512 containerd[1918]: time="2025-02-13T20:55:26.475350471Z" level=info msg="Forcibly stopping sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\"" Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.520 [WARNING][6495] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dc173c30-2906-4734-85ec-0b16586ce47f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"2043baaa55f21219f845f8794a11bf7fbe661b109c877f0d59f22c43584413ae", Pod:"coredns-7db6d8ff4d-lkxpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia34a8d326ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.520 [INFO][6495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.520 [INFO][6495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" iface="eth0" netns="" Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.520 [INFO][6495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.520 [INFO][6495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.539 [INFO][6510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.539 [INFO][6510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.539 [INFO][6510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.546 [WARNING][6510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.546 [INFO][6510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" HandleID="k8s-pod-network.1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--lkxpm-eth0" Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.548 [INFO][6510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.550613 containerd[1918]: 2025-02-13 20:55:26.549 [INFO][6495] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829" Feb 13 20:55:26.551100 containerd[1918]: time="2025-02-13T20:55:26.550649620Z" level=info msg="TearDown network for sandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\" successfully" Feb 13 20:55:26.552556 containerd[1918]: time="2025-02-13T20:55:26.552503157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:55:26.552556 containerd[1918]: time="2025-02-13T20:55:26.552531156Z" level=info msg="RemovePodSandbox \"1699588569619b3db96783aa1e356eb41c3b2aa24a2b4d7234f15632e88b7829\" returns successfully" Feb 13 20:55:26.552842 containerd[1918]: time="2025-02-13T20:55:26.552830505Z" level=info msg="StopPodSandbox for \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\"" Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.571 [WARNING][6542] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0", GenerateName:"calico-kube-controllers-554f7dd6cb-", Namespace:"calico-system", SelfLink:"", UID:"aa8386d9-1397-4a7f-9ace-37696d683da6", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"554f7dd6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46", Pod:"calico-kube-controllers-554f7dd6cb-n9jmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16317655c3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.571 [INFO][6542] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.571 [INFO][6542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" iface="eth0" netns="" Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.571 [INFO][6542] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.571 [INFO][6542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.581 [INFO][6557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.581 [INFO][6557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.581 [INFO][6557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.584 [WARNING][6557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.584 [INFO][6557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.585 [INFO][6557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.586810 containerd[1918]: 2025-02-13 20:55:26.586 [INFO][6542] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:26.587086 containerd[1918]: time="2025-02-13T20:55:26.586831430Z" level=info msg="TearDown network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\" successfully" Feb 13 20:55:26.587086 containerd[1918]: time="2025-02-13T20:55:26.586846199Z" level=info msg="StopPodSandbox for \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\" returns successfully" Feb 13 20:55:26.587124 containerd[1918]: time="2025-02-13T20:55:26.587087517Z" level=info msg="RemovePodSandbox for \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\"" Feb 13 20:55:26.587124 containerd[1918]: time="2025-02-13T20:55:26.587103345Z" level=info msg="Forcibly stopping sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\"" Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.605 [WARNING][6583] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0", GenerateName:"calico-kube-controllers-554f7dd6cb-", Namespace:"calico-system", SelfLink:"", UID:"aa8386d9-1397-4a7f-9ace-37696d683da6", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"554f7dd6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"ce27ef43757bba3f3e57caf8af4de1b9fdf5b1f28ddbee851f2981a29fffdb46", Pod:"calico-kube-controllers-554f7dd6cb-n9jmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16317655c3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.606 [INFO][6583] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.606 [INFO][6583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" iface="eth0" netns="" Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.606 [INFO][6583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.606 [INFO][6583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.616 [INFO][6595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.616 [INFO][6595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.616 [INFO][6595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.620 [WARNING][6595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.620 [INFO][6595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" HandleID="k8s-pod-network.dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--kube--controllers--554f7dd6cb--n9jmw-eth0" Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.621 [INFO][6595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.622986 containerd[1918]: 2025-02-13 20:55:26.622 [INFO][6583] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe" Feb 13 20:55:26.622986 containerd[1918]: time="2025-02-13T20:55:26.622945257Z" level=info msg="TearDown network for sandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\" successfully" Feb 13 20:55:26.624298 containerd[1918]: time="2025-02-13T20:55:26.624259070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:55:26.624298 containerd[1918]: time="2025-02-13T20:55:26.624287325Z" level=info msg="RemovePodSandbox \"dd791de27a6fc911fac2454cdaed32c7be1d456c73e780ca09699e6c110bcbbe\" returns successfully" Feb 13 20:55:26.624542 containerd[1918]: time="2025-02-13T20:55:26.624527988Z" level=info msg="StopPodSandbox for \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\"" Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.645 [WARNING][6623] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ed2abc63-0eb6-4122-b8d3-cd7022d17802", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e", Pod:"coredns-7db6d8ff4d-b6xth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c793e1c8f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.645 [INFO][6623] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.645 [INFO][6623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" iface="eth0" netns="" Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.645 [INFO][6623] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.645 [INFO][6623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.655 [INFO][6637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.655 [INFO][6637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.655 [INFO][6637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.659 [WARNING][6637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.659 [INFO][6637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.660 [INFO][6637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.661928 containerd[1918]: 2025-02-13 20:55:26.661 [INFO][6623] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:26.662242 containerd[1918]: time="2025-02-13T20:55:26.661951485Z" level=info msg="TearDown network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\" successfully" Feb 13 20:55:26.662242 containerd[1918]: time="2025-02-13T20:55:26.661970396Z" level=info msg="StopPodSandbox for \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\" returns successfully" Feb 13 20:55:26.662283 containerd[1918]: time="2025-02-13T20:55:26.662252782Z" level=info msg="RemovePodSandbox for \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\"" Feb 13 20:55:26.662283 containerd[1918]: time="2025-02-13T20:55:26.662272098Z" level=info msg="Forcibly stopping sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\"" Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.680 [WARNING][6665] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ed2abc63-0eb6-4122-b8d3-cd7022d17802", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"ad5fea81af6349f31df076994d8b3df667090ef84535d45384fccf514322f03e", Pod:"coredns-7db6d8ff4d-b6xth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c793e1c8f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.680 [INFO][6665] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.680 [INFO][6665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" iface="eth0" netns="" Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.680 [INFO][6665] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.680 [INFO][6665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.691 [INFO][6677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.691 [INFO][6677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.691 [INFO][6677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.694 [WARNING][6677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.694 [INFO][6677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" HandleID="k8s-pod-network.20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-coredns--7db6d8ff4d--b6xth-eth0" Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.695 [INFO][6677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.696783 containerd[1918]: 2025-02-13 20:55:26.696 [INFO][6665] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1" Feb 13 20:55:26.696783 containerd[1918]: time="2025-02-13T20:55:26.696779801Z" level=info msg="TearDown network for sandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\" successfully" Feb 13 20:55:26.698174 containerd[1918]: time="2025-02-13T20:55:26.698134481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:55:26.698174 containerd[1918]: time="2025-02-13T20:55:26.698165183Z" level=info msg="RemovePodSandbox \"20d8033f7a9a30cdcc0ed9f4a32dbe88307d5edbb5a868fbd9b5aab31219f1f1\" returns successfully" Feb 13 20:55:26.698430 containerd[1918]: time="2025-02-13T20:55:26.698386942Z" level=info msg="StopPodSandbox for \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\"" Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.717 [WARNING][6704] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33fccad8-e90d-49bb-89c6-670419a141a0", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790", Pod:"csi-node-driver-fqp2c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali299c7c4128f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.717 [INFO][6704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.717 [INFO][6704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" iface="eth0" netns="" Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.717 [INFO][6704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.717 [INFO][6704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.727 [INFO][6720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.727 [INFO][6720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.727 [INFO][6720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.731 [WARNING][6720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.731 [INFO][6720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.732 [INFO][6720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.733584 containerd[1918]: 2025-02-13 20:55:26.732 [INFO][6704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:26.733880 containerd[1918]: time="2025-02-13T20:55:26.733609411Z" level=info msg="TearDown network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\" successfully" Feb 13 20:55:26.733880 containerd[1918]: time="2025-02-13T20:55:26.733624388Z" level=info msg="StopPodSandbox for \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\" returns successfully" Feb 13 20:55:26.733937 containerd[1918]: time="2025-02-13T20:55:26.733921412Z" level=info msg="RemovePodSandbox for \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\"" Feb 13 20:55:26.733959 containerd[1918]: time="2025-02-13T20:55:26.733944660Z" level=info msg="Forcibly stopping sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\"" Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.753 [WARNING][6747] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33fccad8-e90d-49bb-89c6-670419a141a0", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"1308f0c89cf44e5767561d745a990222d82e338378168de20d00fab134bce790", Pod:"csi-node-driver-fqp2c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali299c7c4128f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.753 [INFO][6747] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.753 [INFO][6747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" iface="eth0" netns="" Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.753 [INFO][6747] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.753 [INFO][6747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.764 [INFO][6759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.764 [INFO][6759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.764 [INFO][6759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.768 [WARNING][6759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.768 [INFO][6759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" HandleID="k8s-pod-network.3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-csi--node--driver--fqp2c-eth0" Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.769 [INFO][6759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.770802 containerd[1918]: 2025-02-13 20:55:26.770 [INFO][6747] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1" Feb 13 20:55:26.771099 containerd[1918]: time="2025-02-13T20:55:26.770816693Z" level=info msg="TearDown network for sandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\" successfully" Feb 13 20:55:26.772235 containerd[1918]: time="2025-02-13T20:55:26.772223135Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:55:26.772259 containerd[1918]: time="2025-02-13T20:55:26.772251988Z" level=info msg="RemovePodSandbox \"3f29a0d44f56fea504a6321d257ffe7b3e1a11f9b4b8f8ee22ba456869c3b6f1\" returns successfully" Feb 13 20:55:26.772506 containerd[1918]: time="2025-02-13T20:55:26.772494736Z" level=info msg="StopPodSandbox for \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\"" Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.790 [WARNING][6788] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0", GenerateName:"calico-apiserver-784664ffb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"14070312-726d-4bcd-91eb-341f8e9a1a5e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784664ffb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61", Pod:"calico-apiserver-784664ffb7-4nzlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a3cdd3bd47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.790 [INFO][6788] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.790 [INFO][6788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" iface="eth0" netns="" Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.790 [INFO][6788] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.790 [INFO][6788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.801 [INFO][6802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.801 [INFO][6802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.801 [INFO][6802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.805 [WARNING][6802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.805 [INFO][6802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.806 [INFO][6802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.807491 containerd[1918]: 2025-02-13 20:55:26.806 [INFO][6788] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:26.807491 containerd[1918]: time="2025-02-13T20:55:26.807482218Z" level=info msg="TearDown network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\" successfully" Feb 13 20:55:26.807792 containerd[1918]: time="2025-02-13T20:55:26.807501195Z" level=info msg="StopPodSandbox for \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\" returns successfully" Feb 13 20:55:26.807792 containerd[1918]: time="2025-02-13T20:55:26.807732300Z" level=info msg="RemovePodSandbox for \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\"" Feb 13 20:55:26.807792 containerd[1918]: time="2025-02-13T20:55:26.807751167Z" level=info msg="Forcibly stopping sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\"" Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.827 [WARNING][6829] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0", GenerateName:"calico-apiserver-784664ffb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"14070312-726d-4bcd-91eb-341f8e9a1a5e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784664ffb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"d7832ff19b31f5a191484dafe8801665ad87a04e376a718c4ef40fd08a4f0c61", Pod:"calico-apiserver-784664ffb7-4nzlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a3cdd3bd47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.827 [INFO][6829] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.827 [INFO][6829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" iface="eth0" netns="" Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.827 [INFO][6829] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.827 [INFO][6829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.843 [INFO][6844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.844 [INFO][6844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.844 [INFO][6844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.848 [WARNING][6844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.848 [INFO][6844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" HandleID="k8s-pod-network.11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--4nzlt-eth0" Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.849 [INFO][6844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.851614 containerd[1918]: 2025-02-13 20:55:26.850 [INFO][6829] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c" Feb 13 20:55:26.852076 containerd[1918]: time="2025-02-13T20:55:26.851654395Z" level=info msg="TearDown network for sandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\" successfully" Feb 13 20:55:26.858533 containerd[1918]: time="2025-02-13T20:55:26.858506159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:55:26.858604 containerd[1918]: time="2025-02-13T20:55:26.858560398Z" level=info msg="RemovePodSandbox \"11bd862c6efc92f4633b356435d221e7af2a4d135f930aea5588f9c5b70c758c\" returns successfully" Feb 13 20:55:26.858828 containerd[1918]: time="2025-02-13T20:55:26.858816299Z" level=info msg="StopPodSandbox for \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\"" Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.884 [WARNING][6875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0", GenerateName:"calico-apiserver-784664ffb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"03b0b730-6f3a-4b02-bedd-65f23a457b35", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784664ffb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2", Pod:"calico-apiserver-784664ffb7-z5wx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali920a1084f34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.884 [INFO][6875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.884 [INFO][6875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" iface="eth0" netns="" Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.884 [INFO][6875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.884 [INFO][6875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.895 [INFO][6889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.895 [INFO][6889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.895 [INFO][6889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.899 [WARNING][6889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.899 [INFO][6889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.900 [INFO][6889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.901691 containerd[1918]: 2025-02-13 20:55:26.900 [INFO][6875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:26.901691 containerd[1918]: time="2025-02-13T20:55:26.901645524Z" level=info msg="TearDown network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\" successfully" Feb 13 20:55:26.901691 containerd[1918]: time="2025-02-13T20:55:26.901660713Z" level=info msg="StopPodSandbox for \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\" returns successfully" Feb 13 20:55:26.902040 containerd[1918]: time="2025-02-13T20:55:26.901925790Z" level=info msg="RemovePodSandbox for \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\"" Feb 13 20:55:26.902040 containerd[1918]: time="2025-02-13T20:55:26.901941723Z" level=info msg="Forcibly stopping sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\"" Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.921 [WARNING][6913] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0", GenerateName:"calico-apiserver-784664ffb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"03b0b730-6f3a-4b02-bedd-65f23a457b35", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784664ffb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-f6aaf2d828", ContainerID:"262150e550ef21330d285506bda379bce450ddd9721671f5df782deba45b48a2", Pod:"calico-apiserver-784664ffb7-z5wx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali920a1084f34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.921 [INFO][6913] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.921 [INFO][6913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" iface="eth0" netns="" Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.921 [INFO][6913] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.921 [INFO][6913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.931 [INFO][6926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.931 [INFO][6926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.931 [INFO][6926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.935 [WARNING][6926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.935 [INFO][6926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" HandleID="k8s-pod-network.eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Workload="ci--4081.3.1--a--f6aaf2d828-k8s-calico--apiserver--784664ffb7--z5wx4-eth0" Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.936 [INFO][6926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:55:26.937886 containerd[1918]: 2025-02-13 20:55:26.937 [INFO][6913] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b" Feb 13 20:55:26.937886 containerd[1918]: time="2025-02-13T20:55:26.937880014Z" level=info msg="TearDown network for sandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\" successfully" Feb 13 20:55:26.939259 containerd[1918]: time="2025-02-13T20:55:26.939246313Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:55:26.939291 containerd[1918]: time="2025-02-13T20:55:26.939271423Z" level=info msg="RemovePodSandbox \"eba8c9a8883cacb26cc9f5f64c9afccce956a2bba4a6b15af0774e559541158b\" returns successfully" Feb 13 20:57:19.709609 systemd[1]: Started sshd@9-147.28.180.203:22-92.255.57.132:43444.service - OpenSSH per-connection server daemon (92.255.57.132:43444). Feb 13 20:57:20.794481 sshd[7210]: Invalid user 1234 from 92.255.57.132 port 43444 Feb 13 20:57:20.969492 sshd[7210]: Connection closed by invalid user 1234 92.255.57.132 port 43444 [preauth] Feb 13 20:57:20.972715 systemd[1]: sshd@9-147.28.180.203:22-92.255.57.132:43444.service: Deactivated successfully. Feb 13 21:00:31.806160 systemd[1]: Started sshd@10-147.28.180.203:22-139.178.89.65:55562.service - OpenSSH per-connection server daemon (139.178.89.65:55562). Feb 13 21:00:31.879057 sshd[7659]: Accepted publickey for core from 139.178.89.65 port 55562 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 21:00:31.880539 sshd[7659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:00:31.885872 systemd-logind[1897]: New session 12 of user core. Feb 13 21:00:31.903807 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 21:00:31.997477 sshd[7659]: pam_unix(sshd:session): session closed for user core Feb 13 21:00:31.999032 systemd[1]: sshd@10-147.28.180.203:22-139.178.89.65:55562.service: Deactivated successfully. Feb 13 21:00:32.000413 systemd-logind[1897]: Session 12 logged out. Waiting for processes to exit. Feb 13 21:00:32.000541 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 21:00:32.001148 systemd-logind[1897]: Removed session 12. Feb 13 21:00:37.016111 systemd[1]: Started sshd@11-147.28.180.203:22-139.178.89.65:51804.service - OpenSSH per-connection server daemon (139.178.89.65:51804). 
Feb 13 21:00:37.093751 sshd[7690]: Accepted publickey for core from 139.178.89.65 port 51804 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:00:37.094734 sshd[7690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:37.098125 systemd-logind[1897]: New session 13 of user core.
Feb 13 21:00:37.117588 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 21:00:37.205088 sshd[7690]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:37.206667 systemd[1]: sshd@11-147.28.180.203:22-139.178.89.65:51804.service: Deactivated successfully.
Feb 13 21:00:37.208030 systemd-logind[1897]: Session 13 logged out. Waiting for processes to exit.
Feb 13 21:00:37.208130 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 21:00:37.208717 systemd-logind[1897]: Removed session 13.
Feb 13 21:00:42.220596 systemd[1]: Started sshd@12-147.28.180.203:22-139.178.89.65:51820.service - OpenSSH per-connection server daemon (139.178.89.65:51820).
Feb 13 21:00:42.250899 sshd[7720]: Accepted publickey for core from 139.178.89.65 port 51820 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:00:42.251554 sshd[7720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:42.254167 systemd-logind[1897]: New session 14 of user core.
Feb 13 21:00:42.264742 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 21:00:42.348772 sshd[7720]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:42.350292 systemd[1]: sshd@12-147.28.180.203:22-139.178.89.65:51820.service: Deactivated successfully.
Feb 13 21:00:42.351772 systemd-logind[1897]: Session 14 logged out. Waiting for processes to exit.
Feb 13 21:00:42.351810 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 21:00:42.352372 systemd-logind[1897]: Removed session 14.
Feb 13 21:00:47.366131 systemd[1]: Started sshd@13-147.28.180.203:22-139.178.89.65:54174.service - OpenSSH per-connection server daemon (139.178.89.65:54174).
Feb 13 21:00:47.448719 sshd[7776]: Accepted publickey for core from 139.178.89.65 port 54174 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:00:47.449745 sshd[7776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:47.453121 systemd-logind[1897]: New session 15 of user core.
Feb 13 21:00:47.467741 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 21:00:47.553869 sshd[7776]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:47.571163 systemd[1]: Started sshd@14-147.28.180.203:22-139.178.89.65:54182.service - OpenSSH per-connection server daemon (139.178.89.65:54182).
Feb 13 21:00:47.572812 systemd[1]: sshd@13-147.28.180.203:22-139.178.89.65:54174.service: Deactivated successfully.
Feb 13 21:00:47.576329 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 21:00:47.577167 systemd-logind[1897]: Session 15 logged out. Waiting for processes to exit.
Feb 13 21:00:47.578005 systemd-logind[1897]: Removed session 15.
Feb 13 21:00:47.600414 sshd[7801]: Accepted publickey for core from 139.178.89.65 port 54182 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:00:47.601052 sshd[7801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:47.603682 systemd-logind[1897]: New session 16 of user core.
Feb 13 21:00:47.620625 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 21:00:47.763044 sshd[7801]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:47.779723 systemd[1]: Started sshd@15-147.28.180.203:22-139.178.89.65:54190.service - OpenSSH per-connection server daemon (139.178.89.65:54190).
Feb 13 21:00:47.780120 systemd[1]: sshd@14-147.28.180.203:22-139.178.89.65:54182.service: Deactivated successfully.
Feb 13 21:00:47.781036 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 21:00:47.781794 systemd-logind[1897]: Session 16 logged out. Waiting for processes to exit.
Feb 13 21:00:47.782415 systemd-logind[1897]: Removed session 16.
Feb 13 21:00:47.804508 sshd[7826]: Accepted publickey for core from 139.178.89.65 port 54190 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:00:47.805194 sshd[7826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:47.807771 systemd-logind[1897]: New session 17 of user core.
Feb 13 21:00:47.820612 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 21:00:47.931959 sshd[7826]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:47.933664 systemd[1]: sshd@15-147.28.180.203:22-139.178.89.65:54190.service: Deactivated successfully.
Feb 13 21:00:47.935209 systemd-logind[1897]: Session 17 logged out. Waiting for processes to exit.
Feb 13 21:00:47.935273 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 21:00:47.935965 systemd-logind[1897]: Removed session 17.
Feb 13 21:00:52.951616 systemd[1]: Started sshd@16-147.28.180.203:22-139.178.89.65:54206.service - OpenSSH per-connection server daemon (139.178.89.65:54206).
Feb 13 21:00:52.988224 sshd[7861]: Accepted publickey for core from 139.178.89.65 port 54206 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:00:52.989098 sshd[7861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:52.992280 systemd-logind[1897]: New session 18 of user core.
Feb 13 21:00:53.014757 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 21:00:53.102657 sshd[7861]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:53.104249 systemd[1]: sshd@16-147.28.180.203:22-139.178.89.65:54206.service: Deactivated successfully.
Feb 13 21:00:53.105720 systemd-logind[1897]: Session 18 logged out. Waiting for processes to exit.
Feb 13 21:00:53.105801 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 21:00:53.106363 systemd-logind[1897]: Removed session 18.
Feb 13 21:00:58.122714 systemd[1]: Started sshd@17-147.28.180.203:22-139.178.89.65:34226.service - OpenSSH per-connection server daemon (139.178.89.65:34226).
Feb 13 21:00:58.152903 sshd[7909]: Accepted publickey for core from 139.178.89.65 port 34226 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:00:58.153638 sshd[7909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:58.156321 systemd-logind[1897]: New session 19 of user core.
Feb 13 21:00:58.174724 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 21:00:58.260419 sshd[7909]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:58.262119 systemd[1]: sshd@17-147.28.180.203:22-139.178.89.65:34226.service: Deactivated successfully.
Feb 13 21:00:58.263618 systemd-logind[1897]: Session 19 logged out. Waiting for processes to exit.
Feb 13 21:00:58.263673 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 21:00:58.264281 systemd-logind[1897]: Removed session 19.
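The sshd@… units that systemd starts and stops in these entries use per-connection instance names: a connection counter, the listening address and port, and the peer address and port, joined with dashes (for example sshd@13-147.28.180.203:22-139.178.89.65:54174.service). A small Go sketch that unpacks such a name follows; the regex is an assumption covering only the IPv4 form seen here, and the parser is not part of systemd.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // unitRe matches per-connection instance names of the shape above:
    // a counter, then local addr:port, then remote addr:port. IPv4 only;
    // a fuller parser would also handle bracketed IPv6 addresses.
    var unitRe = regexp.MustCompile(`^sshd@(\d+)-([0-9.]+):(\d+)-([0-9.]+):(\d+)\.service$`)

    func main() {
    	unit := "sshd@13-147.28.180.203:22-139.178.89.65:54174.service"
    	m := unitRe.FindStringSubmatch(unit)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	// Prints: conn 13: local 147.28.180.203:22, remote 139.178.89.65:54174
    	fmt.Printf("conn %s: local %s:%s, remote %s:%s\n", m[1], m[2], m[3], m[4], m[5])
    }

The counter in the instance name is why the "Started sshd@N-…" lines climb monotonically even as each connection's unit is deactivated moments later.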
Feb 13 21:01:03.282719 systemd[1]: Started sshd@18-147.28.180.203:22-139.178.89.65:34234.service - OpenSSH per-connection server daemon (139.178.89.65:34234).
Feb 13 21:01:03.313347 sshd[7956]: Accepted publickey for core from 139.178.89.65 port 34234 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:03.314031 sshd[7956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:03.316714 systemd-logind[1897]: New session 20 of user core.
Feb 13 21:01:03.326616 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 21:01:03.412586 sshd[7956]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:03.414275 systemd[1]: sshd@18-147.28.180.203:22-139.178.89.65:34234.service: Deactivated successfully.
Feb 13 21:01:03.415739 systemd-logind[1897]: Session 20 logged out. Waiting for processes to exit.
Feb 13 21:01:03.415794 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 21:01:03.416344 systemd-logind[1897]: Removed session 20.
Feb 13 21:01:08.435840 systemd[1]: Started sshd@19-147.28.180.203:22-139.178.89.65:35840.service - OpenSSH per-connection server daemon (139.178.89.65:35840).
Feb 13 21:01:08.511761 sshd[7984]: Accepted publickey for core from 139.178.89.65 port 35840 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:08.514382 sshd[7984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:08.522573 systemd-logind[1897]: New session 21 of user core.
Feb 13 21:01:08.534137 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 21:01:08.623545 sshd[7984]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:08.633746 systemd[1]: Started sshd@20-147.28.180.203:22-139.178.89.65:35846.service - OpenSSH per-connection server daemon (139.178.89.65:35846).
Feb 13 21:01:08.634061 systemd[1]: sshd@19-147.28.180.203:22-139.178.89.65:35840.service: Deactivated successfully.
Feb 13 21:01:08.634989 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 21:01:08.635796 systemd-logind[1897]: Session 21 logged out. Waiting for processes to exit.
Feb 13 21:01:08.636382 systemd-logind[1897]: Removed session 21.
Feb 13 21:01:08.664058 sshd[8007]: Accepted publickey for core from 139.178.89.65 port 35846 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:08.664726 sshd[8007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:08.667437 systemd-logind[1897]: New session 22 of user core.
Feb 13 21:01:08.675782 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 21:01:08.811919 sshd[8007]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:08.831739 systemd[1]: Started sshd@21-147.28.180.203:22-139.178.89.65:35860.service - OpenSSH per-connection server daemon (139.178.89.65:35860).
Feb 13 21:01:08.832103 systemd[1]: sshd@20-147.28.180.203:22-139.178.89.65:35846.service: Deactivated successfully.
Feb 13 21:01:08.833085 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 21:01:08.833912 systemd-logind[1897]: Session 22 logged out. Waiting for processes to exit.
Feb 13 21:01:08.834676 systemd-logind[1897]: Removed session 22.
Feb 13 21:01:08.870643 sshd[8032]: Accepted publickey for core from 139.178.89.65 port 35860 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:08.871634 sshd[8032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:08.875357 systemd-logind[1897]: New session 23 of user core.
Feb 13 21:01:08.885675 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 21:01:10.153668 sshd[8032]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:10.176791 systemd[1]: Started sshd@22-147.28.180.203:22-139.178.89.65:35862.service - OpenSSH per-connection server daemon (139.178.89.65:35862).
Feb 13 21:01:10.177369 systemd[1]: sshd@21-147.28.180.203:22-139.178.89.65:35860.service: Deactivated successfully.
Feb 13 21:01:10.178946 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 21:01:10.180195 systemd-logind[1897]: Session 23 logged out. Waiting for processes to exit.
Feb 13 21:01:10.181242 systemd-logind[1897]: Removed session 23.
Feb 13 21:01:10.221299 sshd[8063]: Accepted publickey for core from 139.178.89.65 port 35862 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:10.222247 sshd[8063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:10.225446 systemd-logind[1897]: New session 24 of user core.
Feb 13 21:01:10.241669 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 21:01:10.441774 sshd[8063]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:10.463120 systemd[1]: Started sshd@23-147.28.180.203:22-139.178.89.65:35866.service - OpenSSH per-connection server daemon (139.178.89.65:35866).
Feb 13 21:01:10.464606 systemd[1]: sshd@22-147.28.180.203:22-139.178.89.65:35862.service: Deactivated successfully.
Feb 13 21:01:10.468213 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 21:01:10.471396 systemd-logind[1897]: Session 24 logged out. Waiting for processes to exit.
Feb 13 21:01:10.474366 systemd-logind[1897]: Removed session 24.
Feb 13 21:01:10.548828 sshd[8093]: Accepted publickey for core from 139.178.89.65 port 35866 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:10.549881 sshd[8093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:10.553083 systemd-logind[1897]: New session 25 of user core.
Feb 13 21:01:10.565685 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 21:01:10.701992 sshd[8093]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:10.703703 systemd[1]: sshd@23-147.28.180.203:22-139.178.89.65:35866.service: Deactivated successfully.
Feb 13 21:01:10.705117 systemd-logind[1897]: Session 25 logged out. Waiting for processes to exit.
Feb 13 21:01:10.705198 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 21:01:10.705831 systemd-logind[1897]: Removed session 25.
Feb 13 21:01:15.718775 systemd[1]: Started sshd@24-147.28.180.203:22-139.178.89.65:37982.service - OpenSSH per-connection server daemon (139.178.89.65:37982).
Feb 13 21:01:15.748374 sshd[8153]: Accepted publickey for core from 139.178.89.65 port 37982 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:15.749122 sshd[8153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:15.751864 systemd-logind[1897]: New session 26 of user core.
Feb 13 21:01:15.769763 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 21:01:15.861998 sshd[8153]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:15.868393 systemd[1]: sshd@24-147.28.180.203:22-139.178.89.65:37982.service: Deactivated successfully.
Feb 13 21:01:15.874808 systemd-logind[1897]: Session 26 logged out. Waiting for processes to exit.
Feb 13 21:01:15.875340 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 21:01:15.878373 systemd-logind[1897]: Removed session 26.
Feb 13 21:01:20.882736 systemd[1]: Started sshd@25-147.28.180.203:22-139.178.89.65:37998.service - OpenSSH per-connection server daemon (139.178.89.65:37998).
Feb 13 21:01:20.913070 sshd[8180]: Accepted publickey for core from 139.178.89.65 port 37998 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:20.913859 sshd[8180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:20.916891 systemd-logind[1897]: New session 27 of user core.
Feb 13 21:01:20.933729 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 21:01:21.020220 sshd[8180]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:21.021878 systemd[1]: sshd@25-147.28.180.203:22-139.178.89.65:37998.service: Deactivated successfully.
Feb 13 21:01:21.023330 systemd-logind[1897]: Session 27 logged out. Waiting for processes to exit.
Feb 13 21:01:21.023451 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 21:01:21.024089 systemd-logind[1897]: Removed session 27.
Feb 13 21:01:26.038142 systemd[1]: Started sshd@26-147.28.180.203:22-139.178.89.65:54036.service - OpenSSH per-connection server daemon (139.178.89.65:54036).
Feb 13 21:01:26.092411 sshd[8227]: Accepted publickey for core from 139.178.89.65 port 54036 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:01:26.093075 sshd[8227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:01:26.095724 systemd-logind[1897]: New session 28 of user core.
Feb 13 21:01:26.106551 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 21:01:26.221116 sshd[8227]: pam_unix(sshd:session): session closed for user core
Feb 13 21:01:26.222825 systemd[1]: sshd@26-147.28.180.203:22-139.178.89.65:54036.service: Deactivated successfully.
Feb 13 21:01:26.224293 systemd-logind[1897]: Session 28 logged out. Waiting for processes to exit.
Feb 13 21:01:26.224338 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 21:01:26.225065 systemd-logind[1897]: Removed session 28.
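Every accepted connection above produces the same pam_unix pair: "session opened for user core" and, shortly after, "session closed for user core". A quick way to sanity-check a capture like this is to track the open count line by line; below is a minimal Go sketch over hard-coded sample lines copied from the entries above (a real tool would stream journalctl output instead).

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Counts concurrently open sshd sessions from pam_unix lines in the
    // journal format above. The sample input is hypothetical but copies
    // the exact message text these entries use.
    func main() {
    	lines := []string{
    		"Feb 13 21:01:20.913859 sshd[8180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
    		"Feb 13 21:01:21.020220 sshd[8180]: pam_unix(sshd:session): session closed for user core",
    	}
    	open := 0
    	for _, l := range lines {
    		if strings.Contains(l, "session opened for user") {
    			open++ // a new session began
    		} else if strings.Contains(l, "session closed for user") {
    			open-- // its matching close
    		}
    		fmt.Printf("open=%d | %s\n", open, l)
    	}
    }

A count that returns to zero after each burst, as it would for sessions 12 through 28 here, indicates every connection was paired with a clean logout rather than a lingering session.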