Jan 13 20:51:30.457948 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05
Jan 13 20:51:30.457963 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:51:30.457969 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:51:30.457975 kernel: BIOS-provided physical RAM map:
Jan 13 20:51:30.457979 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 13 20:51:30.457983 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 13 20:51:30.457988 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 13 20:51:30.457992 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 13 20:51:30.457996 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 13 20:51:30.458001 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b19fff] usable
Jan 13 20:51:30.458005 kernel: BIOS-e820: [mem 0x0000000081b1a000-0x0000000081b1afff] ACPI NVS
Jan 13 20:51:30.458009 kernel: BIOS-e820: [mem 0x0000000081b1b000-0x0000000081b1bfff] reserved
Jan 13 20:51:30.458014 kernel: BIOS-e820: [mem 0x0000000081b1c000-0x000000008afccfff] usable
Jan 13 20:51:30.458018 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 13 20:51:30.458023 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 13 20:51:30.458028 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 13 20:51:30.458034 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 13 20:51:30.458038 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 13 20:51:30.458043 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 13 20:51:30.458048 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 20:51:30.458052 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 13 20:51:30.458057 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 13 20:51:30.458062 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 20:51:30.458066 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 13 20:51:30.458071 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 13 20:51:30.458076 kernel: NX (Execute Disable) protection: active
Jan 13 20:51:30.458080 kernel: APIC: Static calls initialized
Jan 13 20:51:30.458085 kernel: SMBIOS 3.2.1 present.
Jan 13 20:51:30.458091 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Jan 13 20:51:30.458096 kernel: tsc: Detected 3400.000 MHz processor
Jan 13 20:51:30.458100 kernel: tsc: Detected 3399.906 MHz TSC
Jan 13 20:51:30.458105 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:51:30.458110 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:51:30.458115 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 13 20:51:30.458120 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 13 20:51:30.458125 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:51:30.458130 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 13 20:51:30.458134 kernel: Using GB pages for direct mapping
Jan 13 20:51:30.458140 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:51:30.458145 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 13 20:51:30.458152 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 13 20:51:30.458157 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 13 20:51:30.458162 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 13 20:51:30.458167 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 13 20:51:30.458173 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 13 20:51:30.458178 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 13 20:51:30.458183 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 13 20:51:30.458188 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 13 20:51:30.458194 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 13 20:51:30.458199 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 13 20:51:30.458204 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 13 20:51:30.458209 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 13 20:51:30.458215 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458220 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 13 20:51:30.458225 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 13 20:51:30.458230 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458235 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458240 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 13 20:51:30.458245 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 13 20:51:30.458253 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458259 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458264 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 13 20:51:30.458269 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:51:30.458274 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 13 20:51:30.458280 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 13 20:51:30.458285 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 13 20:51:30.458290 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 13 20:51:30.458295 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 13 20:51:30.458300 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 13 20:51:30.458306 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 13 20:51:30.458311 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 13 20:51:30.458316 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 13 20:51:30.458321 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 13 20:51:30.458326 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 13 20:51:30.458331 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 13 20:51:30.458336 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 13 20:51:30.458341 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 13 20:51:30.458347 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 13 20:51:30.458352 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 13 20:51:30.458357 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 13 20:51:30.458362 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 13 20:51:30.458367 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 13 20:51:30.458372 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 13 20:51:30.458377 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 13 20:51:30.458382 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 13 20:51:30.458387 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 13 20:51:30.458392 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 13 20:51:30.458398 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 13 20:51:30.458403 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 13 20:51:30.458408 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 13 20:51:30.458413 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 13 20:51:30.458418 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 13 20:51:30.458423 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 13 20:51:30.458428 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 13 20:51:30.458433 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 13 20:51:30.458438 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 13 20:51:30.458444 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 13 20:51:30.458449 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 13 20:51:30.458454 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 13 20:51:30.458459 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 13 20:51:30.458464 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 13 20:51:30.458469 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 13 20:51:30.458474 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 13 20:51:30.458479 kernel: No NUMA configuration found
Jan 13 20:51:30.458484 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 13 20:51:30.458490 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 13 20:51:30.458496 kernel: Zone ranges:
Jan 13 20:51:30.458501 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:51:30.458506 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 20:51:30.458511 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jan 13 20:51:30.458516 kernel: Movable zone start for each node
Jan 13 20:51:30.458521 kernel: Early memory node ranges
Jan 13 20:51:30.458526 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 13 20:51:30.458531 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 13 20:51:30.458536 kernel: node 0: [mem 0x0000000040400000-0x0000000081b19fff]
Jan 13 20:51:30.458542 kernel: node 0: [mem 0x0000000081b1c000-0x000000008afccfff]
Jan 13 20:51:30.458547 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 13 20:51:30.458552 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 13 20:51:30.458561 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jan 13 20:51:30.458566 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 13 20:51:30.458572 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:51:30.458577 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 13 20:51:30.458583 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 13 20:51:30.458589 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 13 20:51:30.458594 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 13 20:51:30.458600 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 13 20:51:30.458605 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 13 20:51:30.458610 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 13 20:51:30.458616 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 13 20:51:30.458621 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 20:51:30.458627 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 20:51:30.458633 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 20:51:30.458639 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 20:51:30.458644 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 20:51:30.458649 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 20:51:30.458655 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 20:51:30.458660 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 20:51:30.458665 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 20:51:30.458671 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 20:51:30.458676 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 20:51:30.458681 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 20:51:30.458687 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 20:51:30.458693 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 20:51:30.458698 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 20:51:30.458703 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 13 20:51:30.458709 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 13 20:51:30.458714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:51:30.458719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:51:30.458725 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:51:30.458730 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:51:30.458737 kernel: TSC deadline timer available
Jan 13 20:51:30.458742 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 13 20:51:30.458748 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 13 20:51:30.458753 kernel: Booting paravirtualized kernel on bare hardware
Jan 13 20:51:30.458758 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:51:30.458764 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 20:51:30.458769 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 20:51:30.458775 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 20:51:30.458780 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 20:51:30.458787 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:51:30.458793 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:51:30.458798 kernel: random: crng init done
Jan 13 20:51:30.458803 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 13 20:51:30.458809 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 13 20:51:30.458814 kernel: Fallback order for Node 0: 0
Jan 13 20:51:30.458820 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 13 20:51:30.458825 kernel: Policy zone: Normal
Jan 13 20:51:30.458832 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:51:30.458837 kernel: software IO TLB: area num 16.
Jan 13 20:51:30.458843 kernel: Memory: 32718264K/33452980K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 734456K reserved, 0K cma-reserved)
Jan 13 20:51:30.458848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 20:51:30.458854 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:51:30.458859 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:51:30.458864 kernel: Dynamic Preempt: voluntary
Jan 13 20:51:30.458870 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:51:30.458876 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:51:30.458882 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 20:51:30.458888 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:51:30.458893 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:51:30.458898 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:51:30.458904 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:51:30.458909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 20:51:30.458915 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 13 20:51:30.458920 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:51:30.458925 kernel: Console: colour VGA+ 80x25
Jan 13 20:51:30.458932 kernel: printk: console [tty0] enabled
Jan 13 20:51:30.458937 kernel: printk: console [ttyS1] enabled
Jan 13 20:51:30.458943 kernel: ACPI: Core revision 20230628
Jan 13 20:51:30.458948 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jan 13 20:51:30.458953 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:51:30.458959 kernel: DMAR: Host address width 39
Jan 13 20:51:30.458964 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 13 20:51:30.458970 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 13 20:51:30.458975 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 13 20:51:30.458981 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 13 20:51:30.458987 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 13 20:51:30.458993 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 13 20:51:30.458998 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 13 20:51:30.459004 kernel: x2apic enabled
Jan 13 20:51:30.459009 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 13 20:51:30.459014 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 13 20:51:30.459020 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 13 20:51:30.459025 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 13 20:51:30.459032 kernel: process: using mwait in idle threads
Jan 13 20:51:30.459037 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:51:30.459042 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:51:30.459048 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:51:30.459053 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 20:51:30.459058 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 20:51:30.459064 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 13 20:51:30.459069 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:51:30.459074 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 13 20:51:30.459080 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 13 20:51:30.459085 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:51:30.459091 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:51:30.459096 kernel: TAA: Mitigation: TSX disabled
Jan 13 20:51:30.459102 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 13 20:51:30.459107 kernel: SRBDS: Mitigation: Microcode
Jan 13 20:51:30.459112 kernel: GDS: Mitigation: Microcode
Jan 13 20:51:30.459118 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:51:30.459123 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:51:30.459128 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:51:30.459134 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 20:51:30.459139 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 20:51:30.459144 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:51:30.459150 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 20:51:30.459156 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 20:51:30.459161 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 13 20:51:30.459166 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:51:30.459172 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:51:30.459177 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:51:30.459182 kernel: landlock: Up and running.
Jan 13 20:51:30.459187 kernel: SELinux: Initializing.
Jan 13 20:51:30.459193 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:51:30.459198 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:51:30.459204 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 13 20:51:30.459209 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:51:30.459215 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:51:30.459221 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:51:30.459226 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 13 20:51:30.459232 kernel: ... version: 4
Jan 13 20:51:30.459237 kernel: ... bit width: 48
Jan 13 20:51:30.459243 kernel: ... generic registers: 4
Jan 13 20:51:30.459250 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:51:30.459255 kernel: ... max period: 00007fffffffffff
Jan 13 20:51:30.459261 kernel: ... fixed-purpose events: 3
Jan 13 20:51:30.459267 kernel: ... event mask: 000000070000000f
Jan 13 20:51:30.459272 kernel: signal: max sigframe size: 2032
Jan 13 20:51:30.459278 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 13 20:51:30.459283 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:51:30.459289 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:51:30.459294 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 13 20:51:30.459299 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:51:30.459305 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:51:30.459310 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 13 20:51:30.459317 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:51:30.459322 kernel: smp: Brought up 1 node, 16 CPUs
Jan 13 20:51:30.459328 kernel: smpboot: Max logical packages: 1
Jan 13 20:51:30.459333 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 13 20:51:30.459338 kernel: devtmpfs: initialized
Jan 13 20:51:30.459344 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:51:30.459349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b1a000-0x81b1afff] (4096 bytes)
Jan 13 20:51:30.459355 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jan 13 20:51:30.459361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:51:30.459367 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 20:51:30.459372 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:51:30.459377 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:51:30.459383 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:51:30.459388 kernel: audit: type=2000 audit(1736801485.042:1): state=initialized audit_enabled=0 res=1
Jan 13 20:51:30.459393 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:51:30.459399 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:51:30.459404 kernel: cpuidle: using governor menu
Jan 13 20:51:30.459411 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:51:30.459416 kernel: dca service started, version 1.12.1
Jan 13 20:51:30.459422 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 13 20:51:30.459427 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:51:30.459432 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 13 20:51:30.459438 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:51:30.459443 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:51:30.459448 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:51:30.459454 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:51:30.459460 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:51:30.459466 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:51:30.459471 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:51:30.459476 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:51:30.459482 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:51:30.459487 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 13 20:51:30.459492 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 20:51:30.459498 kernel: ACPI: SSDT 0xFFFF9B07C1EC0000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 13 20:51:30.459503 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 20:51:30.459510 kernel: ACPI: SSDT 0xFFFF9B07C1EBD000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 13 20:51:30.459515 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 20:51:30.459520 kernel: ACPI: SSDT 0xFFFF9B07C1569B00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 13 20:51:30.459525 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 20:51:30.459531 kernel: ACPI: SSDT 0xFFFF9B07C1EBF800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 13 20:51:30.459536 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 20:51:30.459541 kernel: ACPI: SSDT 0xFFFF9B07C1ECF000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 13 20:51:30.459547 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 20:51:30.459552 kernel: ACPI: SSDT 0xFFFF9B07C0E3E000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 13 20:51:30.459557 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 13 20:51:30.459564 kernel: ACPI: Interpreter enabled
Jan 13 20:51:30.459569 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:51:30.459574 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:51:30.459580 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 13 20:51:30.459585 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 13 20:51:30.459590 kernel: HEST: Table parsing has been initialized.
Jan 13 20:51:30.459596 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jan 13 20:51:30.459601 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:51:30.459607 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:51:30.459613 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 13 20:51:30.459619 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 13 20:51:30.459624 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 13 20:51:30.459630 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 13 20:51:30.459635 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 13 20:51:30.459640 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 13 20:51:30.459646 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 13 20:51:30.459651 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 13 20:51:30.459657 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 13 20:51:30.459663 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 13 20:51:30.459668 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 13 20:51:30.459674 kernel: ACPI: \PIN_: New power resource
Jan 13 20:51:30.459679 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 13 20:51:30.459753 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:51:30.459806 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 13 20:51:30.459854 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 13 20:51:30.459864 kernel: PCI host bridge to bus 0000:00
Jan 13 20:51:30.459914 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:51:30.459958 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:51:30.460000 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:51:30.460042 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jan 13 20:51:30.460083 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 13 20:51:30.460125 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 13 20:51:30.460186 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 13 20:51:30.460243 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 13 20:51:30.460297 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 13 20:51:30.460350 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 13 20:51:30.460399 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jan 13 20:51:30.460452 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 13 20:51:30.460504 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jan 13 20:51:30.460556 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 13 20:51:30.460605 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jan 13 20:51:30.460654 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 13 20:51:30.460705 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 13 20:51:30.460754 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jan 13 20:51:30.460804 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jan 13 20:51:30.460856 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 13 20:51:30.460904 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 20:51:30.460959 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 13 20:51:30.461006 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 20:51:30.461058 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 13 20:51:30.461107 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jan 13 20:51:30.461158 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 13 20:51:30.461215 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 13 20:51:30.461270 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jan 13 20:51:30.461318 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 13 20:51:30.461370 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 13 20:51:30.461419 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jan 13 20:51:30.461469 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 13 20:51:30.461521 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 13 20:51:30.461568 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jan 13 20:51:30.461617 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jan 13 20:51:30.461664 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jan 13 20:51:30.461712 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jan 13 20:51:30.461759 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jan 13 20:51:30.461811 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jan 13 20:51:30.461858 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 13 20:51:30.461911 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 13 20:51:30.461959 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 13 20:51:30.462016 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 13 20:51:30.462067 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 13 20:51:30.462119 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 13 20:51:30.462169 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 13 20:51:30.462222 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 13 20:51:30.462277 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 13 20:51:30.462333 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jan 13 20:51:30.462383 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jan 13 20:51:30.462434 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 13 20:51:30.462483 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 20:51:30.462534 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 13 20:51:30.462586 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 13 20:51:30.462636 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jan 13 20:51:30.462684 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jan 13 20:51:30.462738 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 13 20:51:30.462789 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 13 20:51:30.462843 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jan 13 20:51:30.462894 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 13 20:51:30.462946 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jan 13 20:51:30.462995 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jan 13 20:51:30.463045 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 13 20:51:30.463095 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 13 20:51:30.463150 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jan 13 20:51:30.463200 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 13 20:51:30.463252 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jan 13 20:51:30.463305 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jan 13 20:51:30.463354 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 13 20:51:30.463404 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 13 20:51:30.463452 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 13 20:51:30.463501 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 13 20:51:30.463550 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 13 20:51:30.463599
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 20:51:30.463653 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 13 20:51:30.463707 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 13 20:51:30.463756 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 13 20:51:30.463805 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 13 20:51:30.463855 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 13 20:51:30.463903 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.463953 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 20:51:30.464000 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 20:51:30.464051 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 13 20:51:30.464104 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 13 20:51:30.464155 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 13 20:51:30.464204 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 13 20:51:30.464258 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 13 20:51:30.464308 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 13 20:51:30.464358 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.464408 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 20:51:30.464459 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 20:51:30.464509 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 20:51:30.464557 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 13 20:51:30.464613 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 13 20:51:30.464663 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 13 20:51:30.464714 kernel: pci 0000:06:00.0: supports D1 D2 Jan 13 20:51:30.464763 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:51:30.464815 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Jan 13 20:51:30.464864 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 20:51:30.464911 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 20:51:30.464966 kernel: pci_bus 0000:07: extended config space not accessible Jan 13 20:51:30.465022 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 13 20:51:30.465075 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 13 20:51:30.465127 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 13 20:51:30.465181 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 13 20:51:30.465233 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:51:30.465287 kernel: pci 0000:07:00.0: supports D1 D2 Jan 13 20:51:30.465340 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:51:30.465389 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 13 20:51:30.465439 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 20:51:30.465489 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 20:51:30.465497 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 13 20:51:30.465505 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 13 20:51:30.465511 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 13 20:51:30.465517 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 13 20:51:30.465523 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 13 20:51:30.465529 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 13 20:51:30.465534 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 13 20:51:30.465540 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 13 20:51:30.465546 kernel: iommu: Default domain type: Translated Jan 13 20:51:30.465552 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:51:30.465558 kernel: PCI: Using ACPI for IRQ 
routing Jan 13 20:51:30.465564 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:51:30.465569 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 13 20:51:30.465576 kernel: e820: reserve RAM buffer [mem 0x81b1a000-0x83ffffff] Jan 13 20:51:30.465582 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 13 20:51:30.465587 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 13 20:51:30.465593 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 13 20:51:30.465598 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 13 20:51:30.465649 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 13 20:51:30.465704 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 13 20:51:30.465755 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:51:30.465764 kernel: vgaarb: loaded Jan 13 20:51:30.465770 kernel: clocksource: Switched to clocksource tsc-early Jan 13 20:51:30.465776 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:51:30.465781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:51:30.465787 kernel: pnp: PnP ACPI init Jan 13 20:51:30.465838 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 13 20:51:30.465889 kernel: pnp 00:02: [dma 0 disabled] Jan 13 20:51:30.465938 kernel: pnp 00:03: [dma 0 disabled] Jan 13 20:51:30.465988 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 13 20:51:30.466032 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 13 20:51:30.466079 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 13 20:51:30.466127 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 13 20:51:30.466174 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 13 20:51:30.466217 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 13 20:51:30.466265 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Jan 13 20:51:30.466312 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 13 20:51:30.466357 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 13 20:51:30.466400 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 13 20:51:30.466445 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 13 20:51:30.466495 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 13 20:51:30.466540 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 13 20:51:30.466584 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 13 20:51:30.466627 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 13 20:51:30.466670 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 13 20:51:30.466715 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 13 20:51:30.466759 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 13 20:51:30.466810 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 13 20:51:30.466819 kernel: pnp: PnP ACPI: found 10 devices Jan 13 20:51:30.466825 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:51:30.466830 kernel: NET: Registered PF_INET protocol family Jan 13 20:51:30.466836 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:51:30.466842 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 13 20:51:30.466848 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:51:30.466854 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:51:30.466861 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 20:51:30.466867 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 13 20:51:30.466873 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 20:51:30.466879 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 20:51:30.466885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:51:30.466890 kernel: NET: Registered PF_XDP protocol family Jan 13 20:51:30.466939 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 13 20:51:30.466989 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 13 20:51:30.467039 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 13 20:51:30.467092 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 20:51:30.467143 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 20:51:30.467192 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 20:51:30.467242 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 20:51:30.467294 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 20:51:30.467344 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 13 20:51:30.467391 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 20:51:30.467440 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 20:51:30.467491 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 20:51:30.467541 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 20:51:30.467589 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 13 20:51:30.467636 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 20:51:30.467687 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 20:51:30.467735 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 20:51:30.467783 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 13 20:51:30.467831 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Jan 13 20:51:30.467884 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 20:51:30.467933 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 20:51:30.467982 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 13 20:51:30.468030 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 20:51:30.468079 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 20:51:30.468125 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 13 20:51:30.468169 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:51:30.468212 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:51:30.468257 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:51:30.468299 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 13 20:51:30.468342 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 13 20:51:30.468391 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 13 20:51:30.468437 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 20:51:30.468490 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 13 20:51:30.468534 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 13 20:51:30.468583 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 13 20:51:30.468627 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 13 20:51:30.468676 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 13 20:51:30.468720 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 13 20:51:30.468770 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 13 20:51:30.468815 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 13 20:51:30.468823 kernel: PCI: CLS 64 bytes, default 64 Jan 13 20:51:30.468829 kernel: DMAR: No ATSR found Jan 13 20:51:30.468835 kernel: DMAR: No SATC 
found Jan 13 20:51:30.468841 kernel: DMAR: dmar0: Using Queued invalidation Jan 13 20:51:30.468889 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 13 20:51:30.468937 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 13 20:51:30.468988 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 13 20:51:30.469036 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 13 20:51:30.469084 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 13 20:51:30.469131 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 13 20:51:30.469179 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 13 20:51:30.469226 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 13 20:51:30.469277 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 13 20:51:30.469325 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 13 20:51:30.469375 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 13 20:51:30.469423 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 13 20:51:30.469471 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 13 20:51:30.469520 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 13 20:51:30.469567 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 13 20:51:30.469615 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 13 20:51:30.469662 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 13 20:51:30.469711 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 13 20:51:30.469761 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 13 20:51:30.469809 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 13 20:51:30.469856 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 13 20:51:30.469905 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 13 20:51:30.469956 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 13 20:51:30.470004 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 13 20:51:30.470054 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 13 20:51:30.470102 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 13 
20:51:30.470156 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 13 20:51:30.470165 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 13 20:51:30.470171 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 20:51:30.470177 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 13 20:51:30.470183 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 13 20:51:30.470189 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 13 20:51:30.470194 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 13 20:51:30.470200 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 13 20:51:30.470287 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 13 20:51:30.470299 kernel: Initialise system trusted keyrings Jan 13 20:51:30.470305 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 13 20:51:30.470311 kernel: Key type asymmetric registered Jan 13 20:51:30.470317 kernel: Asymmetric key parser 'x509' registered Jan 13 20:51:30.470322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:51:30.470328 kernel: io scheduler mq-deadline registered Jan 13 20:51:30.470334 kernel: io scheduler kyber registered Jan 13 20:51:30.470339 kernel: io scheduler bfq registered Jan 13 20:51:30.470390 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 13 20:51:30.470439 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 13 20:51:30.470488 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 13 20:51:30.470536 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 13 20:51:30.470583 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 13 20:51:30.470632 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 13 20:51:30.470685 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 13 20:51:30.470696 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Jan 13 20:51:30.470702 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 13 20:51:30.470708 kernel: pstore: Using crash dump compression: deflate Jan 13 20:51:30.470714 kernel: pstore: Registered erst as persistent store backend Jan 13 20:51:30.470720 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:51:30.470725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:51:30.470731 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:51:30.470737 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 20:51:30.470743 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 13 20:51:30.470794 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 13 20:51:30.470803 kernel: i8042: PNP: No PS/2 controller found. Jan 13 20:51:30.470847 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 13 20:51:30.470891 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 13 20:51:30.470935 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-13T20:51:29 UTC (1736801489) Jan 13 20:51:30.470979 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 13 20:51:30.470988 kernel: intel_pstate: Intel P-state driver initializing Jan 13 20:51:30.470993 kernel: intel_pstate: Disabling energy efficiency optimization Jan 13 20:51:30.471001 kernel: intel_pstate: HWP enabled Jan 13 20:51:30.471007 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:51:30.471012 kernel: Segment Routing with IPv6 Jan 13 20:51:30.471018 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:51:30.471024 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:51:30.471030 kernel: Key type dns_resolver registered Jan 13 20:51:30.471035 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 13 20:51:30.471041 kernel: IPI shorthand broadcast: enabled Jan 13 20:51:30.471047 kernel: sched_clock: Marking stable (2491351186, 1448867226)->(4503222445, -563004033) Jan 13 20:51:30.471054 kernel: registered taskstats version 1 Jan 13 20:51:30.471059 kernel: Loading compiled-in X.509 certificates Jan 13 20:51:30.471066 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 13 20:51:30.471071 kernel: Key type .fscrypt registered Jan 13 20:51:30.471077 kernel: Key type fscrypt-provisioning registered Jan 13 20:51:30.471083 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:51:30.471089 kernel: ima: No architecture policies found Jan 13 20:51:30.471094 kernel: clk: Disabling unused clocks Jan 13 20:51:30.471100 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 13 20:51:30.471107 kernel: Write protecting the kernel read-only data: 38912k Jan 13 20:51:30.471113 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 13 20:51:30.471118 kernel: Run /init as init process Jan 13 20:51:30.471124 kernel: with arguments: Jan 13 20:51:30.471130 kernel: /init Jan 13 20:51:30.471135 kernel: with environment: Jan 13 20:51:30.471141 kernel: HOME=/ Jan 13 20:51:30.471147 kernel: TERM=linux Jan 13 20:51:30.471152 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:51:30.471160 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:51:30.471168 systemd[1]: Detected architecture x86-64. Jan 13 20:51:30.471174 systemd[1]: Running in initrd. Jan 13 20:51:30.471180 systemd[1]: No hostname configured, using default hostname. Jan 13 20:51:30.471186 systemd[1]: Hostname set to . 
Jan 13 20:51:30.471192 systemd[1]: Initializing machine ID from random generator. Jan 13 20:51:30.471198 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:51:30.471205 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:51:30.471211 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:51:30.471218 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:51:30.471224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:51:30.471230 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:51:30.471236 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:51:30.471242 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:51:30.471252 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:51:30.471258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:51:30.471264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:51:30.471271 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:51:30.471277 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:51:30.471283 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:51:30.471289 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:51:30.471295 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:51:30.471302 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:51:30.471308 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 13 20:51:30.471314 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:51:30.471320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:51:30.471326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:51:30.471332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:51:30.471338 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:51:30.471344 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:51:30.471350 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:51:30.471357 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jan 13 20:51:30.471363 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jan 13 20:51:30.471369 kernel: clocksource: Switched to clocksource tsc Jan 13 20:51:30.471375 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:51:30.471381 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:51:30.471387 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:51:30.471403 systemd-journald[270]: Collecting audit messages is disabled. Jan 13 20:51:30.471419 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:51:30.471426 systemd-journald[270]: Journal started Jan 13 20:51:30.471440 systemd-journald[270]: Runtime Journal (/run/log/journal/b2d6e2dd8f8246fda77f8b39e3bfe7ab) is 8.0M, max 639.9M, 631.9M free. Jan 13 20:51:30.474744 systemd-modules-load[272]: Inserted module 'overlay' Jan 13 20:51:30.491366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:51:30.514294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 13 20:51:30.514312 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:51:30.521428 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:51:30.521521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:51:30.521607 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:51:30.522500 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:51:30.526831 systemd-modules-load[272]: Inserted module 'br_netfilter' Jan 13 20:51:30.527252 kernel: Bridge firewalling registered Jan 13 20:51:30.527557 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:51:30.545938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:51:30.626928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:51:30.644720 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:51:30.675601 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:51:30.713654 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:51:30.726319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:51:30.728163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:51:30.735734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:51:30.735889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:51:30.736951 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:51:30.739557 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:51:30.753696 systemd-resolved[306]: Positive Trust Anchors: Jan 13 20:51:30.753701 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:51:30.753724 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:51:30.755220 systemd-resolved[306]: Defaulting to hostname 'linux'. Jan 13 20:51:30.779497 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:51:30.796539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:51:30.830524 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:51:30.930982 dracut-cmdline[308]: dracut-dracut-053 Jan 13 20:51:30.938359 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:51:31.016287 kernel: SCSI subsystem initialized Jan 13 20:51:31.031278 kernel: Loading iSCSI transport class v2.0-870. 
Jan 13 20:51:31.044296 kernel: iscsi: registered transport (tcp) Jan 13 20:51:31.064560 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:51:31.064577 kernel: QLogic iSCSI HBA Driver Jan 13 20:51:31.087033 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:51:31.108515 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:51:31.172213 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:51:31.172241 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:51:31.181028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:51:31.217282 kernel: raid6: avx2x4 gen() 46750 MB/s Jan 13 20:51:31.238283 kernel: raid6: avx2x2 gen() 53676 MB/s Jan 13 20:51:31.264384 kernel: raid6: avx2x1 gen() 44954 MB/s Jan 13 20:51:31.264401 kernel: raid6: using algorithm avx2x2 gen() 53676 MB/s Jan 13 20:51:31.291410 kernel: raid6: .... xor() 32279 MB/s, rmw enabled Jan 13 20:51:31.291428 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:51:31.312253 kernel: xor: automatically using best checksumming function avx Jan 13 20:51:31.410283 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:51:31.415734 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:51:31.445562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:51:31.452307 systemd-udevd[494]: Using default interface naming scheme 'v255'. Jan 13 20:51:31.454728 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:51:31.492532 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:51:31.554170 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jan 13 20:51:31.625720 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 13 20:51:31.650646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:51:31.751527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:51:31.775296 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 13 20:51:31.775317 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 13 20:51:31.782447 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:51:31.790393 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:51:31.812416 kernel: PTP clock support registered Jan 13 20:51:31.812429 kernel: ACPI: bus type USB registered Jan 13 20:51:31.812437 kernel: usbcore: registered new interface driver usbfs Jan 13 20:51:31.812444 kernel: usbcore: registered new interface driver hub Jan 13 20:51:31.812451 kernel: usbcore: registered new device driver usb Jan 13 20:51:31.813253 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:51:31.814368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:51:31.896223 kernel: libata version 3.00 loaded. 
Jan 13 20:51:31.896265 kernel: AES CTR mode by8 optimization enabled Jan 13 20:51:31.896280 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 20:51:31.925434 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 13 20:51:31.925536 kernel: ahci 0000:00:17.0: version 3.0 Jan 13 20:51:32.026155 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 13 20:51:32.026230 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 13 20:51:32.026306 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 20:51:32.026371 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 13 20:51:32.026432 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 13 20:51:32.026492 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 13 20:51:32.026552 kernel: hub 1-0:1.0: USB hub found Jan 13 20:51:32.026625 kernel: scsi host0: ahci Jan 13 20:51:32.026687 kernel: hub 1-0:1.0: 16 ports detected Jan 13 20:51:32.026755 kernel: scsi host1: ahci Jan 13 20:51:32.026815 kernel: hub 2-0:1.0: USB hub found Jan 13 20:51:32.026884 kernel: scsi host2: ahci Jan 13 20:51:32.026943 kernel: hub 2-0:1.0: 10 ports detected Jan 13 20:51:32.027009 kernel: scsi host3: ahci Jan 13 20:51:32.027069 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 13 20:51:32.027078 kernel: scsi host4: ahci Jan 13 20:51:32.027137 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jan 13 20:51:32.027145 kernel: scsi host5: ahci Jan 13 20:51:32.027204 kernel: scsi host6: ahci Jan 13 20:51:32.027264 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 13 20:51:32.027272 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 13 20:51:32.027279 kernel: pps pps0: new PPS source ptp0 Jan 13 20:51:32.027343 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 13 20:51:32.027351 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 13 20:51:32.027359 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 13 20:51:32.042959 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 13 20:51:32.042968 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 20:51:32.043033 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 13 20:51:32.043041 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d3:7e Jan 13 20:51:32.043104 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 13 20:51:32.043112 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 13 20:51:32.043175 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 13 20:51:31.814436 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:51:32.069926 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Jan 13 20:51:32.583647 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 20:51:32.583731 kernel: pps pps1: new PPS source ptp1 Jan 13 20:51:32.583799 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 13 20:51:32.583867 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 20:51:32.583932 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d3:7f Jan 13 20:51:32.583995 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 13 20:51:32.584056 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 13 20:51:32.584118 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 13 20:51:32.680173 kernel: hub 1-14:1.0: USB hub found Jan 13 20:51:32.680285 kernel: hub 1-14:1.0: 4 ports detected Jan 13 20:51:32.680360 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680370 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680378 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 13 20:51:32.680447 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680456 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 13 20:51:32.680522 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680533 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 20:51:32.680541 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680548 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 20:51:32.680555 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 20:51:32.680562 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 20:51:32.680569 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 13 20:51:32.680577 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 20:51:32.680584 kernel: ata2.00: Features: NCQ-prio Jan 13 20:51:32.680592 kernel: ata1.00: Features: NCQ-prio Jan 13 20:51:32.680600 kernel: ata2.00: configured for UDMA/133 Jan 13 20:51:32.680607 kernel: ata1.00: configured for UDMA/133 Jan 13 20:51:32.680614 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 20:51:32.680685 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 20:51:32.680749 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 13 20:51:32.680832 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 20:51:32.680841 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 20:51:32.680849 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 20:51:32.680914 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 13 20:51:32.681008 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 20:51:32.681079 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 13 20:51:32.681143 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 13 20:51:32.681207 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 13 20:51:32.681277 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 13 20:51:32.681339 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 13 20:51:32.681402 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:51:32.681464 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 13 20:51:32.681522 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 13 20:51:32.681582 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:51:32.681643 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 20:51:32.681652 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 13 20:51:32.681755 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Jan 13 20:51:32.681822 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:51:32.681831 kernel: GPT:9289727 != 937703087 Jan 13 20:51:32.681839 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:51:32.681846 kernel: GPT:9289727 != 937703087 Jan 13 20:51:32.681853 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:51:32.681860 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 20:51:32.681867 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 13 20:51:32.681931 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 20:51:32.681998 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Jan 13 20:51:33.185396 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 20:51:33.185463 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 20:51:33.185933 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 13 20:51:33.186336 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:51:33.186381 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by (udev-worker) (566) Jan 13 20:51:33.186419 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sdb3 scanned by (udev-worker) (578) Jan 13 20:51:33.186456 kernel: usbcore: registered new interface driver usbhid Jan 13 20:51:33.186510 kernel: usbhid: USB HID core driver Jan 13 20:51:33.186562 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 13 20:51:33.186629 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 13 20:51:33.187234 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jan 13 20:51:33.187333 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 13 20:51:33.187808 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 20:51:33.187854 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 20:51:33.187892 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 13 20:51:33.188427 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 13 20:51:33.188960 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 20:51:32.084110 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:51:33.215516 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 13 20:51:33.215599 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 13 20:51:32.123329 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:51:32.123425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:51:32.134327 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:51:32.154401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:51:33.255336 disk-uuid[711]: Primary Header is updated. Jan 13 20:51:33.255336 disk-uuid[711]: Secondary Entries is updated. Jan 13 20:51:33.255336 disk-uuid[711]: Secondary Header is updated. Jan 13 20:51:32.164553 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:51:32.165825 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:51:32.165849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:51:32.165875 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:51:32.166385 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:51:32.214404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:32.225476 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:51:32.251472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:51:32.260494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:51:32.691377 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 13 20:51:32.753282 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 13 20:51:32.768085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 13 20:51:32.782566 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 13 20:51:32.804221 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 13 20:51:32.848398 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:51:33.873381 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 20:51:33.882092 disk-uuid[712]: The operation has completed successfully. Jan 13 20:51:33.890388 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 20:51:33.920274 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:51:33.920373 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:51:33.959449 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:51:33.986386 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 20:51:33.986451 sh[739]: Success Jan 13 20:51:34.024406 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:51:34.046288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 13 20:51:34.047764 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:51:34.103645 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 13 20:51:34.103660 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:51:34.103668 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:51:34.110661 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:51:34.116520 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:51:34.130251 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:51:34.133159 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:51:34.141674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:51:34.155390 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:51:34.197318 kernel: BTRFS info (device sdb6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:51:34.197334 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:51:34.166947 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:51:34.248474 kernel: BTRFS info (device sdb6): using free space tree Jan 13 20:51:34.248489 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 20:51:34.248499 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 20:51:34.248507 kernel: BTRFS info (device sdb6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:51:34.248579 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:51:34.259305 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 20:51:34.317952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:51:34.339424 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:51:34.352527 systemd-networkd[925]: lo: Link UP Jan 13 20:51:34.352530 systemd-networkd[925]: lo: Gained carrier Jan 13 20:51:34.357929 ignition[825]: Ignition 2.20.0 Jan 13 20:51:34.354972 systemd-networkd[925]: Enumeration completed Jan 13 20:51:34.357933 ignition[825]: Stage: fetch-offline Jan 13 20:51:34.355745 systemd-networkd[925]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:51:34.357951 ignition[825]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:51:34.357485 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:51:34.357956 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 20:51:34.360153 unknown[825]: fetched base config from "system" Jan 13 20:51:34.358011 ignition[825]: parsed url from cmdline: "" Jan 13 20:51:34.360156 unknown[825]: fetched user config from "system" Jan 13 20:51:34.358013 ignition[825]: no config URL provided Jan 13 20:51:34.365697 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:51:34.358016 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:51:34.385309 systemd-networkd[925]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:51:34.358038 ignition[825]: parsing config with SHA512: 90b2e4d612ad633961858ebf20c04fcec323837b00430002082e81cd1047f1660ea07fd13108b9668e99f112b8d5db5a7a7208564b8dc39d10274f011bf8ad73 Jan 13 20:51:34.391641 systemd[1]: Reached target network.target - Network. Jan 13 20:51:34.360355 ignition[825]: fetch-offline: fetch-offline passed Jan 13 20:51:34.397518 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jan 13 20:51:34.360357 ignition[825]: POST message to Packet Timeline Jan 13 20:51:34.412464 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:51:34.360360 ignition[825]: POST Status error: resource requires networking Jan 13 20:51:34.413565 systemd-networkd[925]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:51:34.360397 ignition[825]: Ignition finished successfully Jan 13 20:51:34.425988 ignition[939]: Ignition 2.20.0 Jan 13 20:51:34.425998 ignition[939]: Stage: kargs Jan 13 20:51:34.426212 ignition[939]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:51:34.426226 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 20:51:34.427406 ignition[939]: kargs: kargs passed Jan 13 20:51:34.640393 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 13 20:51:34.637298 systemd-networkd[925]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:51:34.427412 ignition[939]: POST message to Packet Timeline Jan 13 20:51:34.427436 ignition[939]: GET https://metadata.packet.net/metadata: attempt #1 Jan 13 20:51:34.428210 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60862->[::1]:53: read: connection refused Jan 13 20:51:34.628542 ignition[939]: GET https://metadata.packet.net/metadata: attempt #2 Jan 13 20:51:34.629086 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35983->[::1]:53: read: connection refused Jan 13 20:51:34.885293 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 13 20:51:34.886061 systemd-networkd[925]: eno1: Link UP Jan 13 20:51:34.886336 systemd-networkd[925]: eno2: Link UP Jan 13 20:51:34.886514 systemd-networkd[925]: enp1s0f0np0: Link UP Jan 13 20:51:34.886731 systemd-networkd[925]: enp1s0f0np0: Gained carrier Jan 13 20:51:34.900671 
systemd-networkd[925]: enp1s0f1np1: Link UP Jan 13 20:51:34.935494 systemd-networkd[925]: enp1s0f0np0: DHCPv4 address 147.28.180.137/31, gateway 147.28.180.136 acquired from 145.40.83.140 Jan 13 20:51:35.029381 ignition[939]: GET https://metadata.packet.net/metadata: attempt #3 Jan 13 20:51:35.030487 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33501->[::1]:53: read: connection refused Jan 13 20:51:35.674025 systemd-networkd[925]: enp1s0f1np1: Gained carrier Jan 13 20:51:35.831030 ignition[939]: GET https://metadata.packet.net/metadata: attempt #4 Jan 13 20:51:35.832096 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53907->[::1]:53: read: connection refused Jan 13 20:51:35.929861 systemd-networkd[925]: enp1s0f0np0: Gained IPv6LL Jan 13 20:51:37.432437 ignition[939]: GET https://metadata.packet.net/metadata: attempt #5 Jan 13 20:51:37.433584 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42770->[::1]:53: read: connection refused Jan 13 20:51:37.721850 systemd-networkd[925]: enp1s0f1np1: Gained IPv6LL Jan 13 20:51:40.636295 ignition[939]: GET https://metadata.packet.net/metadata: attempt #6 Jan 13 20:51:41.413649 ignition[939]: GET result: OK Jan 13 20:51:41.755988 ignition[939]: Ignition finished successfully Jan 13 20:51:41.757731 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:51:41.791521 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 20:51:41.797679 ignition[958]: Ignition 2.20.0 Jan 13 20:51:41.797684 ignition[958]: Stage: disks Jan 13 20:51:41.797800 ignition[958]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:51:41.797808 ignition[958]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 20:51:41.798379 ignition[958]: disks: disks passed Jan 13 20:51:41.798383 ignition[958]: POST message to Packet Timeline Jan 13 20:51:41.798396 ignition[958]: GET https://metadata.packet.net/metadata: attempt #1 Jan 13 20:51:42.318237 ignition[958]: GET result: OK Jan 13 20:51:42.799824 ignition[958]: Ignition finished successfully Jan 13 20:51:42.803140 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:51:42.819634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:51:42.837534 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:51:42.859568 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:51:42.880561 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:51:42.900558 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:51:42.929570 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:51:42.961329 systemd-fsck[974]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:51:42.973124 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:51:42.991417 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:51:43.090253 kernel: EXT4-fs (sdb9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 13 20:51:43.090540 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:51:43.104695 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:51:43.119495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 20:51:43.153072 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sdb6 scanned by mount (983) Jan 13 20:51:43.153088 kernel: BTRFS info (device sdb6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:51:43.124269 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:51:43.187525 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:51:43.187542 kernel: BTRFS info (device sdb6): using free space tree Jan 13 20:51:43.187556 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 20:51:43.187569 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 20:51:43.193406 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 20:51:43.230380 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 13 20:51:43.253374 coreos-metadata[1001]: Jan 13 20:51:43.249 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 20:51:43.277563 coreos-metadata[1000]: Jan 13 20:51:43.249 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 20:51:43.241367 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:51:43.241387 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:51:43.265639 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:51:43.285565 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:51:43.323764 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 20:51:43.375477 initrd-setup-root[1015]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:51:43.385372 initrd-setup-root[1022]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:51:43.396356 initrd-setup-root[1029]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:51:43.407377 initrd-setup-root[1036]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:51:43.435242 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:51:43.448449 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:51:43.454125 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:51:43.487527 kernel: BTRFS info (device sdb6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:51:43.488239 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:51:43.512519 ignition[1104]: INFO : Ignition 2.20.0 Jan 13 20:51:43.512519 ignition[1104]: INFO : Stage: mount Jan 13 20:51:43.526383 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:51:43.526383 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 20:51:43.526383 ignition[1104]: INFO : mount: mount passed Jan 13 20:51:43.526383 ignition[1104]: INFO : POST message to Packet Timeline Jan 13 20:51:43.526383 ignition[1104]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 20:51:43.522598 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:51:43.749959 coreos-metadata[1000]: Jan 13 20:51:43.749 INFO Fetch successful Jan 13 20:51:43.785738 coreos-metadata[1000]: Jan 13 20:51:43.785 INFO wrote hostname ci-4186.1.0-a-3c6cffff8a to /sysroot/etc/hostname Jan 13 20:51:43.786965 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 13 20:51:44.051219 coreos-metadata[1001]: Jan 13 20:51:44.051 INFO Fetch successful Jan 13 20:51:44.093479 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 13 20:51:44.093532 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 13 20:51:44.553523 ignition[1104]: INFO : GET result: OK Jan 13 20:51:45.016137 ignition[1104]: INFO : Ignition finished successfully Jan 13 20:51:45.019147 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:51:45.055504 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:51:45.067788 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:51:45.113388 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by mount (1128) Jan 13 20:51:45.113410 kernel: BTRFS info (device sdb6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:51:45.121539 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:51:45.127415 kernel: BTRFS info (device sdb6): using free space tree Jan 13 20:51:45.142313 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 20:51:45.142329 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 20:51:45.144173 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:51:45.169432 ignition[1145]: INFO : Ignition 2.20.0 Jan 13 20:51:45.169432 ignition[1145]: INFO : Stage: files Jan 13 20:51:45.184477 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:51:45.184477 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 20:51:45.184477 ignition[1145]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:51:45.184477 ignition[1145]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:51:45.184477 ignition[1145]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:51:45.184477 ignition[1145]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:51:45.184477 ignition[1145]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:51:45.184477 ignition[1145]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:51:45.184477 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:51:45.184477 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:51:45.173208 unknown[1145]: wrote ssh authorized keys file for user: core Jan 13 20:51:45.317344 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:51:45.330787 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 20:51:45.871810 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 20:51:46.040105 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:51:46.040105 ignition[1145]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:51:46.071497 ignition[1145]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:51:46.071497 ignition[1145]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:51:46.071497 ignition[1145]: INFO : files: files passed Jan 13 20:51:46.071497 ignition[1145]: INFO : POST message to Packet Timeline Jan 13 20:51:46.071497 ignition[1145]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 20:51:46.673912 ignition[1145]: INFO : GET result: OK Jan 13 20:51:47.038403 ignition[1145]: INFO : Ignition finished successfully Jan 13 20:51:47.040724 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:51:47.076513 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:51:47.086895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:51:47.096669 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:51:47.096728 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:51:47.156707 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:47.156707 initrd-setup-root-after-ignition[1184]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:47.195535 initrd-setup-root-after-ignition[1188]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:47.161434 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:51:47.172544 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:51:47.218442 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:51:47.273844 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:51:47.273895 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:51:47.293659 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:51:47.314451 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:51:47.335722 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:51:47.345621 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:51:47.423633 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:51:47.456802 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:51:47.475677 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:51:47.479447 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:51:47.511583 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:51:47.529591 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:51:47.529741 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:51:47.558966 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:51:47.580857 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:51:47.598878 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:51:47.617962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:51:47.638855 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:51:47.659866 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:51:47.679860 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:51:47.700902 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:51:47.722881 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:51:47.742858 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:51:47.761867 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:51:47.762286 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:51:47.787967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:51:47.807882 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:51:47.829738 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:51:47.830135 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:51:47.853763 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:51:47.854155 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:51:47.885842 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:51:47.886306 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:51:47.906051 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:51:47.923730 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:51:47.924130 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:51:47.944876 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:51:47.963855 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:51:47.982940 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:51:47.983240 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:51:48.002889 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:51:48.003181 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:51:48.026073 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:51:48.026496 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:51:48.046963 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:51:48.157560 ignition[1208]: INFO : Ignition 2.20.0
Jan 13 20:51:48.157560 ignition[1208]: INFO : Stage: umount
Jan 13 20:51:48.157560 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:48.157560 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 20:51:48.157560 ignition[1208]: INFO : umount: umount passed
Jan 13 20:51:48.157560 ignition[1208]: INFO : POST message to Packet Timeline
Jan 13 20:51:48.157560 ignition[1208]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 20:51:48.047363 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:51:48.064950 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 20:51:48.065354 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:51:48.094497 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:51:48.115327 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:51:48.115450 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:51:48.145540 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:51:48.149513 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:51:48.149693 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:51:48.176592 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:51:48.176737 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:51:48.216187 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:51:48.221317 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:51:48.221565 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:51:48.294023 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:51:48.294098 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:51:48.570098 ignition[1208]: INFO : GET result: OK
Jan 13 20:51:49.662884 ignition[1208]: INFO : Ignition finished successfully
Jan 13 20:51:49.666742 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:51:49.667019 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:51:49.681962 systemd[1]: Stopped target network.target - Network.
Jan 13 20:51:49.697522 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:51:49.697704 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:51:49.715679 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:51:49.715843 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:51:49.734668 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:51:49.734827 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:51:49.753660 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:51:49.753823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:51:49.772648 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:51:49.772815 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:51:49.792006 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:51:49.806452 systemd-networkd[925]: enp1s0f1np1: DHCPv6 lease lost
Jan 13 20:51:49.809712 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:51:49.815408 systemd-networkd[925]: enp1s0f0np0: DHCPv6 lease lost
Jan 13 20:51:49.828327 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:51:49.828591 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:51:49.847501 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:51:49.847828 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:51:49.867809 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:51:49.867930 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:51:49.897539 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:51:49.897557 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:51:49.897582 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:51:49.914579 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:51:49.914625 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:51:49.946633 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:51:49.946717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:51:49.964646 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:51:49.964810 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:51:49.985800 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:51:50.005531 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:51:50.005912 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:51:50.035318 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:51:50.035472 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:51:50.042756 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:51:50.042858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:51:50.073572 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:51:50.073728 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:51:50.101829 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:51:50.101995 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:51:50.143510 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:51:50.143703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:51:50.201349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:51:50.236292 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:51:50.236348 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:51:50.255460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:51:50.255548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:50.474398 systemd-journald[270]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:51:50.278581 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:51:50.278812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:51:50.342268 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:51:50.342545 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:51:50.357458 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:51:50.390692 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:51:50.418480 systemd[1]: Switching root.
Jan 13 20:51:50.537467 systemd-journald[270]: Journal stopped
Jan 13 20:51:30.457948 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05
Jan 13 20:51:30.457963 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:51:30.457969 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:51:30.457975 kernel: BIOS-provided physical RAM map:
Jan 13 20:51:30.457979 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 13 20:51:30.457983 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 13 20:51:30.457988 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 13 20:51:30.457992 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 13 20:51:30.457996 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 13 20:51:30.458001 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b19fff] usable
Jan 13 20:51:30.458005 kernel: BIOS-e820: [mem 0x0000000081b1a000-0x0000000081b1afff] ACPI NVS
Jan 13 20:51:30.458009 kernel: BIOS-e820: [mem 0x0000000081b1b000-0x0000000081b1bfff] reserved
Jan 13 20:51:30.458014 kernel: BIOS-e820: [mem 0x0000000081b1c000-0x000000008afccfff] usable
Jan 13 20:51:30.458018 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 13 20:51:30.458023 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 13 20:51:30.458028 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 13 20:51:30.458034 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 13 20:51:30.458038 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 13 20:51:30.458043 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 13 20:51:30.458048 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 20:51:30.458052 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 13 20:51:30.458057 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 13 20:51:30.458062 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 20:51:30.458066 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 13 20:51:30.458071 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 13 20:51:30.458076 kernel: NX (Execute Disable) protection: active
Jan 13 20:51:30.458080 kernel: APIC: Static calls initialized
Jan 13 20:51:30.458085 kernel: SMBIOS 3.2.1 present.
Jan 13 20:51:30.458091 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Jan 13 20:51:30.458096 kernel: tsc: Detected 3400.000 MHz processor
Jan 13 20:51:30.458100 kernel: tsc: Detected 3399.906 MHz TSC
Jan 13 20:51:30.458105 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:51:30.458110 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:51:30.458115 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 13 20:51:30.458120 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 13 20:51:30.458125 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:51:30.458130 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 13 20:51:30.458134 kernel: Using GB pages for direct mapping
Jan 13 20:51:30.458140 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:51:30.458145 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 13 20:51:30.458152 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 13 20:51:30.458157 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 13 20:51:30.458162 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 13 20:51:30.458167 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 13 20:51:30.458173 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 13 20:51:30.458178 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 13 20:51:30.458183 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 13 20:51:30.458188 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 13 20:51:30.458194 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 13 20:51:30.458199 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 13 20:51:30.458204 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 13 20:51:30.458209 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 13 20:51:30.458215 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458220 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 13 20:51:30.458225 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 13 20:51:30.458230 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458235 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458240 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 13 20:51:30.458245 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 13 20:51:30.458253 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458259 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 13 20:51:30.458264 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 13 20:51:30.458269 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:51:30.458274 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 13 20:51:30.458280 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 13 20:51:30.458285 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 13 20:51:30.458290 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 13 20:51:30.458295 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 13 20:51:30.458300 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 13 20:51:30.458306 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 13 20:51:30.458311 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 13 20:51:30.458316 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 13 20:51:30.458321 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 13 20:51:30.458326 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 13 20:51:30.458331 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 13 20:51:30.458336 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 13 20:51:30.458341 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 13 20:51:30.458347 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 13 20:51:30.458352 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 13 20:51:30.458357 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 13 20:51:30.458362 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 13 20:51:30.458367 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 13 20:51:30.458372 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 13 20:51:30.458377 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 13 20:51:30.458382 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 13 20:51:30.458387 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 13 20:51:30.458392 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 13 20:51:30.458398 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 13 20:51:30.458403 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 13 20:51:30.458408 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 13 20:51:30.458413 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 13 20:51:30.458418 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 13 20:51:30.458423 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 13 20:51:30.458428 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 13 20:51:30.458433 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 13 20:51:30.458438 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 13 20:51:30.458444 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 13 20:51:30.458449 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 13 20:51:30.458454 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 13 20:51:30.458459 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 13 20:51:30.458464 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 13 20:51:30.458469 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 13 20:51:30.458474 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 13 20:51:30.458479 kernel: No NUMA configuration found
Jan 13 20:51:30.458484 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 13 20:51:30.458490 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 13 20:51:30.458496 kernel: Zone ranges:
Jan 13 20:51:30.458501 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:51:30.458506 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 20:51:30.458511 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jan 13 20:51:30.458516 kernel: Movable zone start for each node
Jan 13 20:51:30.458521 kernel: Early memory node ranges
Jan 13 20:51:30.458526 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 13 20:51:30.458531 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 13 20:51:30.458536 kernel: node 0: [mem 0x0000000040400000-0x0000000081b19fff]
Jan 13 20:51:30.458542 kernel: node 0: [mem 0x0000000081b1c000-0x000000008afccfff]
Jan 13 20:51:30.458547 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 13 20:51:30.458552 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 13 20:51:30.458561 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jan 13 20:51:30.458566 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 13 20:51:30.458572 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:51:30.458577 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 13 20:51:30.458583 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 13 20:51:30.458589 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 13 20:51:30.458594 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 13 20:51:30.458600 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 13 20:51:30.458605 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 13 20:51:30.458610 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 13 20:51:30.458616 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 13 20:51:30.458621 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 20:51:30.458627 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 20:51:30.458633 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 20:51:30.458639 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 20:51:30.458644 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 20:51:30.458649 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 20:51:30.458655 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 20:51:30.458660 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 20:51:30.458665 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 20:51:30.458671 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 20:51:30.458676 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 20:51:30.458681 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 20:51:30.458687 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 20:51:30.458693 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 20:51:30.458698 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 20:51:30.458703 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 13 20:51:30.458709 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 13 20:51:30.458714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:51:30.458719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:51:30.458725 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:51:30.458730 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:51:30.458737 kernel: TSC deadline timer available
Jan 13 20:51:30.458742 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 13 20:51:30.458748 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 13 20:51:30.458753 kernel: Booting paravirtualized kernel on bare hardware
Jan 13 20:51:30.458758 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:51:30.458764 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 20:51:30.458769 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 20:51:30.458775 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 20:51:30.458780 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 20:51:30.458787 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:51:30.458793 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:51:30.458798 kernel: random: crng init done
Jan 13 20:51:30.458803 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 13 20:51:30.458809 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 13 20:51:30.458814 kernel: Fallback order for Node 0: 0
Jan 13 20:51:30.458820 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 13 20:51:30.458825 kernel: Policy zone: Normal
Jan 13 20:51:30.458832 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:51:30.458837 kernel: software IO TLB: area num 16.
Jan 13 20:51:30.458843 kernel: Memory: 32718264K/33452980K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 734456K reserved, 0K cma-reserved)
Jan 13 20:51:30.458848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 20:51:30.458854 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:51:30.458859 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:51:30.458864 kernel: Dynamic Preempt: voluntary
Jan 13 20:51:30.458870 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:51:30.458876 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:51:30.458882 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 20:51:30.458888 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:51:30.458893 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:51:30.458898 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:51:30.458904 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:51:30.458909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 20:51:30.458915 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 13 20:51:30.458920 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:51:30.458925 kernel: Console: colour VGA+ 80x25
Jan 13 20:51:30.458932 kernel: printk: console [tty0] enabled
Jan 13 20:51:30.458937 kernel: printk: console [ttyS1] enabled
Jan 13 20:51:30.458943 kernel: ACPI: Core revision 20230628
Jan 13 20:51:30.458948 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jan 13 20:51:30.458953 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:51:30.458959 kernel: DMAR: Host address width 39
Jan 13 20:51:30.458964 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 13 20:51:30.458970 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 13 20:51:30.458975 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 13 20:51:30.458981 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 13 20:51:30.458987 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 13 20:51:30.458993 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 13 20:51:30.458998 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 13 20:51:30.459004 kernel: x2apic enabled Jan 13 20:51:30.459009 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 13 20:51:30.459014 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 13 20:51:30.459020 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 13 20:51:30.459025 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 13 20:51:30.459032 kernel: process: using mwait in idle threads Jan 13 20:51:30.459037 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 13 20:51:30.459042 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 13 20:51:30.459048 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:51:30.459053 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 20:51:30.459058 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 20:51:30.459064 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 13 20:51:30.459069 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:51:30.459074 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 13 20:51:30.459080 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 13 20:51:30.459085 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 20:51:30.459091 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 20:51:30.459096 kernel: TAA: Mitigation: TSX disabled Jan 13 20:51:30.459102 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 13 20:51:30.459107 kernel: SRBDS: Mitigation: Microcode Jan 13 20:51:30.459112 kernel: GDS: Mitigation: Microcode Jan 13 20:51:30.459118 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers' Jan 13 20:51:30.459123 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 20:51:30.459128 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 20:51:30.459134 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 13 20:51:30.459139 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 13 20:51:30.459144 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 20:51:30.459150 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 13 20:51:30.459156 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 13 20:51:30.459161 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 13 20:51:30.459166 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:51:30.459172 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:51:30.459177 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:51:30.459182 kernel: landlock: Up and running. Jan 13 20:51:30.459187 kernel: SELinux: Initializing. Jan 13 20:51:30.459193 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:51:30.459198 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:51:30.459204 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 13 20:51:30.459209 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 20:51:30.459215 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 20:51:30.459221 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 20:51:30.459226 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 13 20:51:30.459232 kernel: ... 
version: 4 Jan 13 20:51:30.459237 kernel: ... bit width: 48 Jan 13 20:51:30.459243 kernel: ... generic registers: 4 Jan 13 20:51:30.459250 kernel: ... value mask: 0000ffffffffffff Jan 13 20:51:30.459255 kernel: ... max period: 00007fffffffffff Jan 13 20:51:30.459261 kernel: ... fixed-purpose events: 3 Jan 13 20:51:30.459267 kernel: ... event mask: 000000070000000f Jan 13 20:51:30.459272 kernel: signal: max sigframe size: 2032 Jan 13 20:51:30.459278 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 13 20:51:30.459283 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:51:30.459289 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:51:30.459294 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 13 20:51:30.459299 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:51:30.459305 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:51:30.459310 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 13 20:51:30.459317 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 13 20:51:30.459322 kernel: smp: Brought up 1 node, 16 CPUs Jan 13 20:51:30.459328 kernel: smpboot: Max logical packages: 1 Jan 13 20:51:30.459333 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 13 20:51:30.459338 kernel: devtmpfs: initialized Jan 13 20:51:30.459344 kernel: x86/mm: Memory block size: 128MB Jan 13 20:51:30.459349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b1a000-0x81b1afff] (4096 bytes) Jan 13 20:51:30.459355 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 13 20:51:30.459361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:51:30.459367 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 13 20:51:30.459372 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:51:30.459377 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:51:30.459383 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:51:30.459388 kernel: audit: type=2000 audit(1736801485.042:1): state=initialized audit_enabled=0 res=1 Jan 13 20:51:30.459393 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:51:30.459399 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:51:30.459404 kernel: cpuidle: using governor menu Jan 13 20:51:30.459411 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:51:30.459416 kernel: dca service started, version 1.12.1 Jan 13 20:51:30.459422 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 13 20:51:30.459427 kernel: PCI: Using configuration type 1 for base access Jan 13 20:51:30.459432 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 13 20:51:30.459438 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 20:51:30.459443 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:51:30.459448 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:51:30.459454 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:51:30.459460 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:51:30.459466 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:51:30.459471 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:51:30.459476 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:51:30.459482 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:51:30.459487 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 13 20:51:30.459492 kernel: ACPI: Dynamic OEM Table Load: Jan 13 20:51:30.459498 kernel: ACPI: SSDT 0xFFFF9B07C1EC0000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 13 20:51:30.459503 kernel: ACPI: Dynamic OEM Table Load: Jan 13 20:51:30.459510 kernel: ACPI: SSDT 0xFFFF9B07C1EBD000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 13 20:51:30.459515 kernel: ACPI: Dynamic OEM Table Load: Jan 13 20:51:30.459520 kernel: ACPI: SSDT 0xFFFF9B07C1569B00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 13 20:51:30.459525 kernel: ACPI: Dynamic OEM Table Load: Jan 13 20:51:30.459531 kernel: ACPI: SSDT 0xFFFF9B07C1EBF800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 13 20:51:30.459536 kernel: ACPI: Dynamic OEM Table Load: Jan 13 20:51:30.459541 kernel: ACPI: SSDT 0xFFFF9B07C1ECF000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 13 20:51:30.459547 kernel: ACPI: Dynamic OEM Table Load: Jan 13 20:51:30.459552 kernel: ACPI: SSDT 0xFFFF9B07C0E3E000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 13 20:51:30.459557 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 13 20:51:30.459564 kernel: ACPI: Interpreter enabled Jan 13 20:51:30.459569 kernel: ACPI: PM: (supports S0 S5) Jan 13 20:51:30.459574 kernel: ACPI: Using IOAPIC 
for interrupt routing Jan 13 20:51:30.459580 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 13 20:51:30.459585 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 13 20:51:30.459590 kernel: HEST: Table parsing has been initialized. Jan 13 20:51:30.459596 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jan 13 20:51:30.459601 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:51:30.459607 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:51:30.459613 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 13 20:51:30.459619 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 13 20:51:30.459624 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 13 20:51:30.459630 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 13 20:51:30.459635 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 13 20:51:30.459640 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 13 20:51:30.459646 kernel: ACPI: \_TZ_.FN00: New power resource Jan 13 20:51:30.459651 kernel: ACPI: \_TZ_.FN01: New power resource Jan 13 20:51:30.459657 kernel: ACPI: \_TZ_.FN02: New power resource Jan 13 20:51:30.459663 kernel: ACPI: \_TZ_.FN03: New power resource Jan 13 20:51:30.459668 kernel: ACPI: \_TZ_.FN04: New power resource Jan 13 20:51:30.459674 kernel: ACPI: \PIN_: New power resource Jan 13 20:51:30.459679 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 13 20:51:30.459753 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:51:30.459806 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 13 20:51:30.459854 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 13 20:51:30.459864 kernel: PCI host bridge to bus 0000:00 Jan 13 20:51:30.459914 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jan 13 20:51:30.459958 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:51:30.460000 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:51:30.460042 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 13 20:51:30.460083 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 13 20:51:30.460125 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 13 20:51:30.460186 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 13 20:51:30.460243 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 13 20:51:30.460297 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.460350 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 13 20:51:30.460399 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 13 20:51:30.460452 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 13 20:51:30.460504 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 13 20:51:30.460556 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 13 20:51:30.460605 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 13 20:51:30.460654 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 13 20:51:30.460705 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 13 20:51:30.460754 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 13 20:51:30.460804 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 13 20:51:30.460856 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 13 20:51:30.460904 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 20:51:30.460959 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 13 20:51:30.461006 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 
20:51:30.461058 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 13 20:51:30.461107 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 13 20:51:30.461158 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 13 20:51:30.461215 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 13 20:51:30.461270 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 13 20:51:30.461318 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 13 20:51:30.461370 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 13 20:51:30.461419 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 13 20:51:30.461469 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 13 20:51:30.461521 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 13 20:51:30.461568 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 13 20:51:30.461617 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 13 20:51:30.461664 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 13 20:51:30.461712 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 13 20:51:30.461759 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 13 20:51:30.461811 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 13 20:51:30.461858 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 13 20:51:30.461911 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 13 20:51:30.461959 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.462016 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 13 20:51:30.462067 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.462119 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 13 20:51:30.462169 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.462222 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 
13 20:51:30.462277 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.462333 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 13 20:51:30.462383 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.462434 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 13 20:51:30.462483 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 20:51:30.462534 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 13 20:51:30.462586 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 13 20:51:30.462636 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 13 20:51:30.462684 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 13 20:51:30.462738 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 13 20:51:30.462789 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 13 20:51:30.462843 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 13 20:51:30.462894 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 13 20:51:30.462946 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 13 20:51:30.462995 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 13 20:51:30.463045 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 13 20:51:30.463095 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 13 20:51:30.463150 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 13 20:51:30.463200 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 13 20:51:30.463252 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 13 20:51:30.463305 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 13 20:51:30.463354 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 13 20:51:30.463404 kernel: 
pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 13 20:51:30.463452 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 20:51:30.463501 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 13 20:51:30.463550 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 20:51:30.463599 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 20:51:30.463653 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 13 20:51:30.463707 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 13 20:51:30.463756 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 13 20:51:30.463805 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 13 20:51:30.463855 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 13 20:51:30.463903 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.463953 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 20:51:30.464000 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 20:51:30.464051 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 13 20:51:30.464104 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 13 20:51:30.464155 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 13 20:51:30.464204 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 13 20:51:30.464258 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 13 20:51:30.464308 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 13 20:51:30.464358 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 13 20:51:30.464408 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 20:51:30.464459 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 20:51:30.464509 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 20:51:30.464557 kernel: pci 0000:00:1c.0: PCI 
bridge to [bus 05] Jan 13 20:51:30.464613 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 13 20:51:30.464663 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 13 20:51:30.464714 kernel: pci 0000:06:00.0: supports D1 D2 Jan 13 20:51:30.464763 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:51:30.464815 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 13 20:51:30.464864 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 20:51:30.464911 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 20:51:30.464966 kernel: pci_bus 0000:07: extended config space not accessible Jan 13 20:51:30.465022 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 13 20:51:30.465075 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 13 20:51:30.465127 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 13 20:51:30.465181 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 13 20:51:30.465233 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:51:30.465287 kernel: pci 0000:07:00.0: supports D1 D2 Jan 13 20:51:30.465340 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:51:30.465389 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 13 20:51:30.465439 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 20:51:30.465489 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 20:51:30.465497 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 13 20:51:30.465505 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 13 20:51:30.465511 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 13 20:51:30.465517 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 13 20:51:30.465523 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 13 20:51:30.465529 kernel: ACPI: PCI: Interrupt link LNKF configured 
for IRQ 0 Jan 13 20:51:30.465534 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 13 20:51:30.465540 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 13 20:51:30.465546 kernel: iommu: Default domain type: Translated Jan 13 20:51:30.465552 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:51:30.465558 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:51:30.465564 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:51:30.465569 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 13 20:51:30.465576 kernel: e820: reserve RAM buffer [mem 0x81b1a000-0x83ffffff] Jan 13 20:51:30.465582 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 13 20:51:30.465587 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 13 20:51:30.465593 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 13 20:51:30.465598 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 13 20:51:30.465649 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 13 20:51:30.465704 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 13 20:51:30.465755 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:51:30.465764 kernel: vgaarb: loaded Jan 13 20:51:30.465770 kernel: clocksource: Switched to clocksource tsc-early Jan 13 20:51:30.465776 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:51:30.465781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:51:30.465787 kernel: pnp: PnP ACPI init Jan 13 20:51:30.465838 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 13 20:51:30.465889 kernel: pnp 00:02: [dma 0 disabled] Jan 13 20:51:30.465938 kernel: pnp 00:03: [dma 0 disabled] Jan 13 20:51:30.465988 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 13 20:51:30.466032 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 13 20:51:30.466079 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Jan 13 20:51:30.466127 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 13 20:51:30.466174 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 13 20:51:30.466217 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 13 20:51:30.466265 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 13 20:51:30.466312 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 13 20:51:30.466357 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 13 20:51:30.466400 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 13 20:51:30.466445 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 13 20:51:30.466495 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 13 20:51:30.466540 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 13 20:51:30.466584 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 13 20:51:30.466627 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 13 20:51:30.466670 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 13 20:51:30.466715 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 13 20:51:30.466759 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 13 20:51:30.466810 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 13 20:51:30.466819 kernel: pnp: PnP ACPI: found 10 devices Jan 13 20:51:30.466825 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:51:30.466830 kernel: NET: Registered PF_INET protocol family Jan 13 20:51:30.466836 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:51:30.466842 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 13 20:51:30.466848 kernel: Table-perturb 
hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:51:30.466854 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:51:30.466861 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 20:51:30.466867 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 13 20:51:30.466873 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 20:51:30.466879 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 20:51:30.466885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:51:30.466890 kernel: NET: Registered PF_XDP protocol family Jan 13 20:51:30.466939 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 13 20:51:30.466989 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 13 20:51:30.467039 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 13 20:51:30.467092 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 20:51:30.467143 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 20:51:30.467192 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 20:51:30.467242 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 20:51:30.467294 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 20:51:30.467344 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 13 20:51:30.467391 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 20:51:30.467440 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 20:51:30.467491 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 20:51:30.467541 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 20:51:30.467589 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 
13 20:51:30.467636 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 20:51:30.467687 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 20:51:30.467735 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 20:51:30.467783 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 13 20:51:30.467831 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 13 20:51:30.467884 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 20:51:30.467933 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 20:51:30.467982 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 13 20:51:30.468030 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 20:51:30.468079 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 20:51:30.468125 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 13 20:51:30.468169 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:51:30.468212 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:51:30.468257 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:51:30.468299 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 13 20:51:30.468342 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 13 20:51:30.468391 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 13 20:51:30.468437 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 20:51:30.468490 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 13 20:51:30.468534 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 13 20:51:30.468583 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 13 20:51:30.468627 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 13 20:51:30.468676 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 13 20:51:30.468720 
kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 13 20:51:30.468770 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 13 20:51:30.468815 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 13 20:51:30.468823 kernel: PCI: CLS 64 bytes, default 64 Jan 13 20:51:30.468829 kernel: DMAR: No ATSR found Jan 13 20:51:30.468835 kernel: DMAR: No SATC found Jan 13 20:51:30.468841 kernel: DMAR: dmar0: Using Queued invalidation Jan 13 20:51:30.468889 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 13 20:51:30.468937 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 13 20:51:30.468988 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 13 20:51:30.469036 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 13 20:51:30.469084 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 13 20:51:30.469131 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 13 20:51:30.469179 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 13 20:51:30.469226 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 13 20:51:30.469277 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 13 20:51:30.469325 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 13 20:51:30.469375 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 13 20:51:30.469423 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 13 20:51:30.469471 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 13 20:51:30.469520 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 13 20:51:30.469567 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 13 20:51:30.469615 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 13 20:51:30.469662 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 13 20:51:30.469711 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 13 20:51:30.469761 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 13 20:51:30.469809 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 13 20:51:30.469856 kernel: pci 0000:00:1f.5: Adding to iommu group 
14 Jan 13 20:51:30.469905 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 13 20:51:30.469956 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 13 20:51:30.470004 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 13 20:51:30.470054 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 13 20:51:30.470102 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 13 20:51:30.470156 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 13 20:51:30.470165 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 13 20:51:30.470171 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 20:51:30.470177 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 13 20:51:30.470183 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 13 20:51:30.470189 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 13 20:51:30.470194 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 13 20:51:30.470200 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 13 20:51:30.470287 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 13 20:51:30.470299 kernel: Initialise system trusted keyrings Jan 13 20:51:30.470305 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 13 20:51:30.470311 kernel: Key type asymmetric registered Jan 13 20:51:30.470317 kernel: Asymmetric key parser 'x509' registered Jan 13 20:51:30.470322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:51:30.470328 kernel: io scheduler mq-deadline registered Jan 13 20:51:30.470334 kernel: io scheduler kyber registered Jan 13 20:51:30.470339 kernel: io scheduler bfq registered Jan 13 20:51:30.470390 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 13 20:51:30.470439 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 13 20:51:30.470488 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 
Jan 13 20:51:30.470536 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 13 20:51:30.470583 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 13 20:51:30.470632 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 13 20:51:30.470685 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 13 20:51:30.470696 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 13 20:51:30.470702 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 13 20:51:30.470708 kernel: pstore: Using crash dump compression: deflate Jan 13 20:51:30.470714 kernel: pstore: Registered erst as persistent store backend Jan 13 20:51:30.470720 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:51:30.470725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:51:30.470731 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:51:30.470737 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 20:51:30.470743 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 13 20:51:30.470794 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 13 20:51:30.470803 kernel: i8042: PNP: No PS/2 controller found. 
Jan 13 20:51:30.470847 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 13 20:51:30.470891 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 13 20:51:30.470935 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-13T20:51:29 UTC (1736801489) Jan 13 20:51:30.470979 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 13 20:51:30.470988 kernel: intel_pstate: Intel P-state driver initializing Jan 13 20:51:30.470993 kernel: intel_pstate: Disabling energy efficiency optimization Jan 13 20:51:30.471001 kernel: intel_pstate: HWP enabled Jan 13 20:51:30.471007 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:51:30.471012 kernel: Segment Routing with IPv6 Jan 13 20:51:30.471018 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:51:30.471024 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:51:30.471030 kernel: Key type dns_resolver registered Jan 13 20:51:30.471035 kernel: microcode: Microcode Update Driver: v2.2. Jan 13 20:51:30.471041 kernel: IPI shorthand broadcast: enabled Jan 13 20:51:30.471047 kernel: sched_clock: Marking stable (2491351186, 1448867226)->(4503222445, -563004033) Jan 13 20:51:30.471054 kernel: registered taskstats version 1 Jan 13 20:51:30.471059 kernel: Loading compiled-in X.509 certificates Jan 13 20:51:30.471066 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 13 20:51:30.471071 kernel: Key type .fscrypt registered Jan 13 20:51:30.471077 kernel: Key type fscrypt-provisioning registered Jan 13 20:51:30.471083 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:51:30.471089 kernel: ima: No architecture policies found Jan 13 20:51:30.471094 kernel: clk: Disabling unused clocks Jan 13 20:51:30.471100 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 13 20:51:30.471107 kernel: Write protecting the kernel read-only data: 38912k Jan 13 20:51:30.471113 kernel: Freeing unused kernel image (rodata/data gap) memory: 
1776K Jan 13 20:51:30.471118 kernel: Run /init as init process Jan 13 20:51:30.471124 kernel: with arguments: Jan 13 20:51:30.471130 kernel: /init Jan 13 20:51:30.471135 kernel: with environment: Jan 13 20:51:30.471141 kernel: HOME=/ Jan 13 20:51:30.471147 kernel: TERM=linux Jan 13 20:51:30.471152 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:51:30.471160 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:51:30.471168 systemd[1]: Detected architecture x86-64. Jan 13 20:51:30.471174 systemd[1]: Running in initrd. Jan 13 20:51:30.471180 systemd[1]: No hostname configured, using default hostname. Jan 13 20:51:30.471186 systemd[1]: Hostname set to . Jan 13 20:51:30.471192 systemd[1]: Initializing machine ID from random generator. Jan 13 20:51:30.471198 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:51:30.471205 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:51:30.471211 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:51:30.471218 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:51:30.471224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:51:30.471230 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:51:30.471236 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 13 20:51:30.471242 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:51:30.471252 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:51:30.471258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:51:30.471264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:51:30.471271 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:51:30.471277 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:51:30.471283 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:51:30.471289 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:51:30.471295 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:51:30.471302 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:51:30.471308 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:51:30.471314 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:51:30.471320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:51:30.471326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:51:30.471332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:51:30.471338 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:51:30.471344 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:51:30.471350 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 13 20:51:30.471357 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jan 13 20:51:30.471363 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jan 13 20:51:30.471369 kernel: clocksource: Switched to clocksource tsc Jan 13 20:51:30.471375 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:51:30.471381 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:51:30.471387 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:51:30.471403 systemd-journald[270]: Collecting audit messages is disabled. Jan 13 20:51:30.471419 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:51:30.471426 systemd-journald[270]: Journal started Jan 13 20:51:30.471440 systemd-journald[270]: Runtime Journal (/run/log/journal/b2d6e2dd8f8246fda77f8b39e3bfe7ab) is 8.0M, max 639.9M, 631.9M free. Jan 13 20:51:30.474744 systemd-modules-load[272]: Inserted module 'overlay' Jan 13 20:51:30.491366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:51:30.514294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:51:30.514312 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:51:30.521428 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:51:30.521521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:51:30.521607 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:51:30.522500 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 13 20:51:30.526831 systemd-modules-load[272]: Inserted module 'br_netfilter' Jan 13 20:51:30.527252 kernel: Bridge firewalling registered Jan 13 20:51:30.527557 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:51:30.545938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:51:30.626928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:51:30.644720 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:51:30.675601 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:51:30.713654 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:51:30.726319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:51:30.728163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:51:30.735734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:51:30.735889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:51:30.736951 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:51:30.739557 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:51:30.753696 systemd-resolved[306]: Positive Trust Anchors: Jan 13 20:51:30.753701 systemd-resolved[306]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:51:30.753724 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:51:30.755220 systemd-resolved[306]: Defaulting to hostname 'linux'. Jan 13 20:51:30.779497 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:51:30.796539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:51:30.830524 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:51:30.930982 dracut-cmdline[308]: dracut-dracut-053 Jan 13 20:51:30.938359 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:51:31.016287 kernel: SCSI subsystem initialized Jan 13 20:51:31.031278 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:51:31.044296 kernel: iscsi: registered transport (tcp) Jan 13 20:51:31.064560 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:51:31.064577 kernel: QLogic iSCSI HBA Driver Jan 13 20:51:31.087033 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 13 20:51:31.108515 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:51:31.172213 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:51:31.172241 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:51:31.181028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:51:31.217282 kernel: raid6: avx2x4 gen() 46750 MB/s Jan 13 20:51:31.238283 kernel: raid6: avx2x2 gen() 53676 MB/s Jan 13 20:51:31.264384 kernel: raid6: avx2x1 gen() 44954 MB/s Jan 13 20:51:31.264401 kernel: raid6: using algorithm avx2x2 gen() 53676 MB/s Jan 13 20:51:31.291410 kernel: raid6: .... xor() 32279 MB/s, rmw enabled Jan 13 20:51:31.291428 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:51:31.312253 kernel: xor: automatically using best checksumming function avx Jan 13 20:51:31.410283 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:51:31.415734 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:51:31.445562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:51:31.452307 systemd-udevd[494]: Using default interface naming scheme 'v255'. Jan 13 20:51:31.454728 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:51:31.492532 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:51:31.554170 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jan 13 20:51:31.625720 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:51:31.650646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:51:31.751527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:51:31.775296 kernel: pps_core: LinuxPPS API ver. 
1 registered Jan 13 20:51:31.775317 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 13 20:51:31.782447 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:51:31.790393 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:51:31.812416 kernel: PTP clock support registered Jan 13 20:51:31.812429 kernel: ACPI: bus type USB registered Jan 13 20:51:31.812437 kernel: usbcore: registered new interface driver usbfs Jan 13 20:51:31.812444 kernel: usbcore: registered new interface driver hub Jan 13 20:51:31.812451 kernel: usbcore: registered new device driver usb Jan 13 20:51:31.813253 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:51:31.814368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:51:31.896223 kernel: libata version 3.00 loaded. Jan 13 20:51:31.896265 kernel: AES CTR mode by8 optimization enabled Jan 13 20:51:31.896280 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 20:51:31.925434 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 13 20:51:31.925536 kernel: ahci 0000:00:17.0: version 3.0 Jan 13 20:51:32.026155 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 13 20:51:32.026230 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 13 20:51:32.026306 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 20:51:32.026371 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 13 20:51:32.026432 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 13 20:51:32.026492 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 13 20:51:32.026552 kernel: hub 1-0:1.0: USB hub found Jan 13 20:51:32.026625 kernel: scsi host0: ahci Jan 13 20:51:32.026687 kernel: hub 1-0:1.0: 16 ports detected Jan 13 20:51:32.026755 kernel: scsi host1: ahci Jan 13 20:51:32.026815 
kernel: hub 2-0:1.0: USB hub found Jan 13 20:51:32.026884 kernel: scsi host2: ahci Jan 13 20:51:32.026943 kernel: hub 2-0:1.0: 10 ports detected Jan 13 20:51:32.027009 kernel: scsi host3: ahci Jan 13 20:51:32.027069 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 13 20:51:32.027078 kernel: scsi host4: ahci Jan 13 20:51:32.027137 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 13 20:51:32.027145 kernel: scsi host5: ahci Jan 13 20:51:32.027204 kernel: scsi host6: ahci Jan 13 20:51:32.027264 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 13 20:51:32.027272 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 13 20:51:32.027279 kernel: pps pps0: new PPS source ptp0 Jan 13 20:51:32.027343 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 13 20:51:32.027351 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 13 20:51:32.027359 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 13 20:51:32.042959 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 13 20:51:32.042968 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 20:51:32.043033 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 13 20:51:32.043041 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d3:7e Jan 13 20:51:32.043104 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 13 20:51:32.043112 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 13 20:51:32.043175 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 13 20:51:31.814436 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:51:32.069926 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Jan 13 20:51:32.583647 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 20:51:32.583731 kernel: pps pps1: new PPS source ptp1 Jan 13 20:51:32.583799 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 13 20:51:32.583867 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 20:51:32.583932 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d3:7f Jan 13 20:51:32.583995 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 13 20:51:32.584056 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 13 20:51:32.584118 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 13 20:51:32.680173 kernel: hub 1-14:1.0: USB hub found Jan 13 20:51:32.680285 kernel: hub 1-14:1.0: 4 ports detected Jan 13 20:51:32.680360 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680370 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680378 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 13 20:51:32.680447 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680456 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 13 20:51:32.680522 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680533 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 20:51:32.680541 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 13 20:51:32.680548 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 20:51:32.680555 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 20:51:32.680562 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 20:51:32.680569 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 20:51:32.680577 kernel: 
ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 20:51:32.680584 kernel: ata2.00: Features: NCQ-prio Jan 13 20:51:32.680592 kernel: ata1.00: Features: NCQ-prio Jan 13 20:51:32.680600 kernel: ata2.00: configured for UDMA/133 Jan 13 20:51:32.680607 kernel: ata1.00: configured for UDMA/133 Jan 13 20:51:32.680614 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 20:51:32.680685 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 20:51:32.680749 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 13 20:51:32.680832 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 20:51:32.680841 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 20:51:32.680849 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 20:51:32.680914 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 13 20:51:32.681008 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 20:51:32.681079 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 13 20:51:32.681143 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 13 20:51:32.681207 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 13 20:51:32.681277 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 13 20:51:32.681339 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 13 20:51:32.681402 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:51:32.681464 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 13 20:51:32.681522 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 13 20:51:32.681582 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:51:32.681643 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 20:51:32.681652 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 13 20:51:32.681755 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes 
Jan 13 20:51:32.681822 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:51:32.681831 kernel: GPT:9289727 != 937703087 Jan 13 20:51:32.681839 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:51:32.681846 kernel: GPT:9289727 != 937703087 Jan 13 20:51:32.681853 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:51:32.681860 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 20:51:32.681867 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 13 20:51:32.681931 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 20:51:32.681998 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Jan 13 20:51:33.185396 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 20:51:33.185463 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 20:51:33.185933 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 13 20:51:33.186336 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:51:33.186381 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by (udev-worker) (566) Jan 13 20:51:33.186419 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sdb3 scanned by (udev-worker) (578) Jan 13 20:51:33.186456 kernel: usbcore: registered new interface driver usbhid Jan 13 20:51:33.186510 kernel: usbhid: USB HID core driver Jan 13 20:51:33.186562 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 13 20:51:33.186629 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 13 20:51:33.187234 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 13 20:51:33.187333 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] 
on usb-0000:00:14.0-14.1/input1 Jan 13 20:51:33.187808 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 20:51:33.187854 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 20:51:33.187892 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 13 20:51:33.188427 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 13 20:51:33.188960 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 20:51:32.084110 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:51:33.215516 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 13 20:51:33.215599 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 13 20:51:32.123329 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:51:32.123425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:51:32.134327 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:51:32.154401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:51:33.255336 disk-uuid[711]: Primary Header is updated. Jan 13 20:51:33.255336 disk-uuid[711]: Secondary Entries is updated. Jan 13 20:51:33.255336 disk-uuid[711]: Secondary Header is updated. Jan 13 20:51:32.164553 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:51:32.165825 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:51:32.165849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:51:32.165875 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:51:32.166385 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:51:32.214404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 20:51:32.225476 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:51:32.251472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:51:32.260494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:51:32.691377 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 13 20:51:32.753282 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 13 20:51:32.768085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 13 20:51:32.782566 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 13 20:51:32.804221 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 13 20:51:32.848398 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:51:33.873381 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 20:51:33.882092 disk-uuid[712]: The operation has completed successfully. Jan 13 20:51:33.890388 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 20:51:33.920274 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:51:33.920373 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:51:33.959449 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:51:33.986386 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 20:51:33.986451 sh[739]: Success Jan 13 20:51:34.024406 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:51:34.046288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 13 20:51:34.047764 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:51:34.103645 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 13 20:51:34.103660 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:51:34.103668 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:51:34.110661 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:51:34.116520 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:51:34.130251 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:51:34.133159 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:51:34.141674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:51:34.155390 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:51:34.197318 kernel: BTRFS info (device sdb6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:51:34.197334 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:51:34.166947 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:51:34.248474 kernel: BTRFS info (device sdb6): using free space tree Jan 13 20:51:34.248489 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 20:51:34.248499 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 20:51:34.248507 kernel: BTRFS info (device sdb6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:51:34.248579 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:51:34.259305 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 20:51:34.317952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:51:34.339424 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:51:34.352527 systemd-networkd[925]: lo: Link UP Jan 13 20:51:34.352530 systemd-networkd[925]: lo: Gained carrier Jan 13 20:51:34.357929 ignition[825]: Ignition 2.20.0 Jan 13 20:51:34.354972 systemd-networkd[925]: Enumeration completed Jan 13 20:51:34.357933 ignition[825]: Stage: fetch-offline Jan 13 20:51:34.355745 systemd-networkd[925]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:51:34.357951 ignition[825]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:51:34.357485 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:51:34.357956 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 20:51:34.360153 unknown[825]: fetched base config from "system" Jan 13 20:51:34.358011 ignition[825]: parsed url from cmdline: "" Jan 13 20:51:34.360156 unknown[825]: fetched user config from "system" Jan 13 20:51:34.358013 ignition[825]: no config URL provided Jan 13 20:51:34.365697 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:51:34.358016 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:51:34.385309 systemd-networkd[925]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:51:34.358038 ignition[825]: parsing config with SHA512: 90b2e4d612ad633961858ebf20c04fcec323837b00430002082e81cd1047f1660ea07fd13108b9668e99f112b8d5db5a7a7208564b8dc39d10274f011bf8ad73 Jan 13 20:51:34.391641 systemd[1]: Reached target network.target - Network. Jan 13 20:51:34.360355 ignition[825]: fetch-offline: fetch-offline passed Jan 13 20:51:34.397518 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jan 13 20:51:34.360357 ignition[825]: POST message to Packet Timeline
Jan 13 20:51:34.412464 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:51:34.360360 ignition[825]: POST Status error: resource requires networking
Jan 13 20:51:34.413565 systemd-networkd[925]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:51:34.360397 ignition[825]: Ignition finished successfully
Jan 13 20:51:34.425988 ignition[939]: Ignition 2.20.0
Jan 13 20:51:34.425998 ignition[939]: Stage: kargs
Jan 13 20:51:34.426212 ignition[939]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:34.426226 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 20:51:34.427406 ignition[939]: kargs: kargs passed
Jan 13 20:51:34.640393 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 13 20:51:34.637298 systemd-networkd[925]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:51:34.427412 ignition[939]: POST message to Packet Timeline
Jan 13 20:51:34.427436 ignition[939]: GET https://metadata.packet.net/metadata: attempt #1
Jan 13 20:51:34.428210 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60862->[::1]:53: read: connection refused
Jan 13 20:51:34.628542 ignition[939]: GET https://metadata.packet.net/metadata: attempt #2
Jan 13 20:51:34.629086 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35983->[::1]:53: read: connection refused
Jan 13 20:51:34.885293 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 13 20:51:34.886061 systemd-networkd[925]: eno1: Link UP
Jan 13 20:51:34.886336 systemd-networkd[925]: eno2: Link UP
Jan 13 20:51:34.886514 systemd-networkd[925]: enp1s0f0np0: Link UP
Jan 13 20:51:34.886731 systemd-networkd[925]: enp1s0f0np0: Gained carrier
Jan 13 20:51:34.900671 systemd-networkd[925]: enp1s0f1np1: Link UP
Jan 13 20:51:34.935494 systemd-networkd[925]: enp1s0f0np0: DHCPv4 address 147.28.180.137/31, gateway 147.28.180.136 acquired from 145.40.83.140
Jan 13 20:51:35.029381 ignition[939]: GET https://metadata.packet.net/metadata: attempt #3
Jan 13 20:51:35.030487 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33501->[::1]:53: read: connection refused
Jan 13 20:51:35.674025 systemd-networkd[925]: enp1s0f1np1: Gained carrier
Jan 13 20:51:35.831030 ignition[939]: GET https://metadata.packet.net/metadata: attempt #4
Jan 13 20:51:35.832096 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53907->[::1]:53: read: connection refused
Jan 13 20:51:35.929861 systemd-networkd[925]: enp1s0f0np0: Gained IPv6LL
Jan 13 20:51:37.432437 ignition[939]: GET https://metadata.packet.net/metadata: attempt #5
Jan 13 20:51:37.433584 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42770->[::1]:53: read: connection refused
Jan 13 20:51:37.721850 systemd-networkd[925]: enp1s0f1np1: Gained IPv6LL
Jan 13 20:51:40.636295 ignition[939]: GET https://metadata.packet.net/metadata: attempt #6
Jan 13 20:51:41.413649 ignition[939]: GET result: OK
Jan 13 20:51:41.755988 ignition[939]: Ignition finished successfully
Jan 13 20:51:41.757731 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:51:41.791521 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:51:41.797679 ignition[958]: Ignition 2.20.0
Jan 13 20:51:41.797684 ignition[958]: Stage: disks
Jan 13 20:51:41.797800 ignition[958]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:41.797808 ignition[958]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 20:51:41.798379 ignition[958]: disks: disks passed
Jan 13 20:51:41.798383 ignition[958]: POST message to Packet Timeline
Jan 13 20:51:41.798396 ignition[958]: GET https://metadata.packet.net/metadata: attempt #1
Jan 13 20:51:42.318237 ignition[958]: GET result: OK
Jan 13 20:51:42.799824 ignition[958]: Ignition finished successfully
Jan 13 20:51:42.803140 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:51:42.819634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:51:42.837534 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:51:42.859568 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:51:42.880561 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:51:42.900558 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:51:42.929570 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:51:42.961329 systemd-fsck[974]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:51:42.973124 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:51:42.991417 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:51:43.090253 kernel: EXT4-fs (sdb9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:51:43.090540 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:51:43.104695 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:51:43.119495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:51:43.153072 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sdb6 scanned by mount (983)
Jan 13 20:51:43.153088 kernel: BTRFS info (device sdb6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:51:43.124269 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:51:43.187525 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:51:43.187542 kernel: BTRFS info (device sdb6): using free space tree
Jan 13 20:51:43.187556 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 13 20:51:43.187569 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 13 20:51:43.193406 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 20:51:43.230380 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 13 20:51:43.253374 coreos-metadata[1001]: Jan 13 20:51:43.249 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 20:51:43.277563 coreos-metadata[1000]: Jan 13 20:51:43.249 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 20:51:43.241367 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:51:43.241387 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:51:43.265639 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:51:43.285565 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:51:43.323764 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:51:43.375477 initrd-setup-root[1015]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:51:43.385372 initrd-setup-root[1022]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:51:43.396356 initrd-setup-root[1029]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:51:43.407377 initrd-setup-root[1036]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:51:43.435242 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:51:43.448449 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:51:43.454125 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:51:43.487527 kernel: BTRFS info (device sdb6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:51:43.488239 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:51:43.512519 ignition[1104]: INFO : Ignition 2.20.0
Jan 13 20:51:43.512519 ignition[1104]: INFO : Stage: mount
Jan 13 20:51:43.526383 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:43.526383 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 20:51:43.526383 ignition[1104]: INFO : mount: mount passed
Jan 13 20:51:43.526383 ignition[1104]: INFO : POST message to Packet Timeline
Jan 13 20:51:43.526383 ignition[1104]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 20:51:43.522598 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:51:43.749959 coreos-metadata[1000]: Jan 13 20:51:43.749 INFO Fetch successful
Jan 13 20:51:43.785738 coreos-metadata[1000]: Jan 13 20:51:43.785 INFO wrote hostname ci-4186.1.0-a-3c6cffff8a to /sysroot/etc/hostname
Jan 13 20:51:43.786965 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:51:44.051219 coreos-metadata[1001]: Jan 13 20:51:44.051 INFO Fetch successful
Jan 13 20:51:44.093479 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 13 20:51:44.093532 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 13 20:51:44.553523 ignition[1104]: INFO : GET result: OK
Jan 13 20:51:45.016137 ignition[1104]: INFO : Ignition finished successfully
Jan 13 20:51:45.019147 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:51:45.055504 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:51:45.067788 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:51:45.113388 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by mount (1128)
Jan 13 20:51:45.113410 kernel: BTRFS info (device sdb6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:51:45.121539 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:51:45.127415 kernel: BTRFS info (device sdb6): using free space tree
Jan 13 20:51:45.142313 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 13 20:51:45.142329 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 13 20:51:45.144173 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:51:45.169432 ignition[1145]: INFO : Ignition 2.20.0
Jan 13 20:51:45.169432 ignition[1145]: INFO : Stage: files
Jan 13 20:51:45.184477 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:45.184477 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 20:51:45.184477 ignition[1145]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:51:45.184477 ignition[1145]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:51:45.184477 ignition[1145]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:51:45.184477 ignition[1145]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:51:45.184477 ignition[1145]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:51:45.184477 ignition[1145]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:51:45.184477 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:51:45.184477 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:51:45.173208 unknown[1145]: wrote ssh authorized keys file for user: core
Jan 13 20:51:45.317344 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:51:45.330787 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:51:45.347488 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 20:51:45.871810 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:51:46.040105 ignition[1145]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:51:46.040105 ignition[1145]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:51:46.071497 ignition[1145]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:51:46.071497 ignition[1145]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:51:46.071497 ignition[1145]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:51:46.071497 ignition[1145]: INFO : files: files passed
Jan 13 20:51:46.071497 ignition[1145]: INFO : POST message to Packet Timeline
Jan 13 20:51:46.071497 ignition[1145]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 20:51:46.673912 ignition[1145]: INFO : GET result: OK
Jan 13 20:51:47.038403 ignition[1145]: INFO : Ignition finished successfully
Jan 13 20:51:47.040724 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:51:47.076513 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:51:47.086895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:51:47.096669 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:51:47.096728 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:51:47.156707 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:47.156707 initrd-setup-root-after-ignition[1184]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:47.195535 initrd-setup-root-after-ignition[1188]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:47.161434 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:51:47.172544 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:51:47.218442 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:51:47.273844 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:51:47.273895 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:51:47.293659 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:51:47.314451 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:51:47.335722 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:51:47.345621 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:51:47.423633 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:51:47.456802 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:51:47.475677 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:51:47.479447 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:51:47.511583 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:51:47.529591 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:51:47.529741 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:51:47.558966 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:51:47.580857 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:51:47.598878 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:51:47.617962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:51:47.638855 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:51:47.659866 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:51:47.679860 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:51:47.700902 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:51:47.722881 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:51:47.742858 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:51:47.761867 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:51:47.762286 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:51:47.787967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:51:47.807882 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:51:47.829738 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:51:47.830135 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:51:47.853763 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:51:47.854155 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:51:47.885842 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:51:47.886306 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:51:47.906051 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:51:47.923730 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:51:47.924130 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:51:47.944876 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:51:47.963855 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:51:47.982940 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:51:47.983240 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:51:48.002889 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:51:48.003181 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:51:48.026073 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:51:48.026496 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:51:48.046963 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:51:48.157560 ignition[1208]: INFO : Ignition 2.20.0
Jan 13 20:51:48.157560 ignition[1208]: INFO : Stage: umount
Jan 13 20:51:48.157560 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:48.157560 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 20:51:48.157560 ignition[1208]: INFO : umount: umount passed
Jan 13 20:51:48.157560 ignition[1208]: INFO : POST message to Packet Timeline
Jan 13 20:51:48.157560 ignition[1208]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 20:51:48.047363 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:51:48.064950 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 20:51:48.065354 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:51:48.094497 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:51:48.115327 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:51:48.115450 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:51:48.145540 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:51:48.149513 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:51:48.149693 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:51:48.176592 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:51:48.176737 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:51:48.216187 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:51:48.221317 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:51:48.221565 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:51:48.294023 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:51:48.294098 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:51:48.570098 ignition[1208]: INFO : GET result: OK
Jan 13 20:51:49.662884 ignition[1208]: INFO : Ignition finished successfully
Jan 13 20:51:49.666742 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:51:49.667019 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:51:49.681962 systemd[1]: Stopped target network.target - Network.
Jan 13 20:51:49.697522 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:51:49.697704 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:51:49.715679 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:51:49.715843 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:51:49.734668 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:51:49.734827 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:51:49.753660 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:51:49.753823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:51:49.772648 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:51:49.772815 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:51:49.792006 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:51:49.806452 systemd-networkd[925]: enp1s0f1np1: DHCPv6 lease lost
Jan 13 20:51:49.809712 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:51:49.815408 systemd-networkd[925]: enp1s0f0np0: DHCPv6 lease lost
Jan 13 20:51:49.828327 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:51:49.828591 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:51:49.847501 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:51:49.847828 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:51:49.867809 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:51:49.867930 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:51:49.897539 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:51:49.897557 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:51:49.897582 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:51:49.914579 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:51:49.914625 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:51:49.946633 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:51:49.946717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:51:49.964646 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:51:49.964810 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:51:49.985800 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:51:50.005531 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:51:50.005912 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:51:50.035318 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:51:50.035472 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:51:50.042756 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:51:50.042858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:51:50.073572 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:51:50.073728 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:51:50.101829 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:51:50.101995 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:51:50.143510 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:51:50.143703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:51:50.201349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:51:50.236292 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:51:50.236348 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:51:50.255460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:51:50.255548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:50.474398 systemd-journald[270]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:51:50.278581 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:51:50.278812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:51:50.342268 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:51:50.342545 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:51:50.357458 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:51:50.390692 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:51:50.418480 systemd[1]: Switching root.
Jan 13 20:51:50.537467 systemd-journald[270]: Journal stopped
Jan 13 20:51:52.178512 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:51:52.178530 kernel: SELinux: policy capability open_perms=1
Jan 13 20:51:52.178538 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:51:52.178545 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:51:52.178553 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:51:52.178559 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:51:52.178567 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:51:52.178573 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:51:52.178580 kernel: audit: type=1403 audit(1736801510.694:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:51:52.178588 systemd[1]: Successfully loaded SELinux policy in 73.607ms.
Jan 13 20:51:52.178598 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.693ms.
Jan 13 20:51:52.178606 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:51:52.178614 systemd[1]: Detected architecture x86-64.
Jan 13 20:51:52.178621 systemd[1]: Detected first boot.
Jan 13 20:51:52.178629 systemd[1]: Hostname set to .
Jan 13 20:51:52.178638 systemd[1]: Initializing machine ID from random generator.
Jan 13 20:51:52.178646 zram_generator::config[1260]: No configuration found.
Jan 13 20:51:52.178654 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:51:52.178662 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:51:52.178669 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:51:52.178677 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:51:52.178685 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:51:52.178694 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:51:52.178702 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:51:52.178710 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:51:52.178718 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:51:52.178726 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:51:52.178733 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:51:52.178741 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:51:52.178750 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:51:52.178758 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:51:52.178766 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:51:52.178774 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:51:52.178782 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:51:52.178791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:51:52.178799 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Jan 13 20:51:52.178807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:51:52.178816 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:51:52.178824 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:51:52.178833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:51:52.178842 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:51:52.178850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:51:52.178859 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:51:52.178867 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:51:52.178875 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:51:52.178884 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:51:52.178892 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:51:52.178900 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:51:52.178908 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:51:52.178916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:51:52.178926 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:51:52.178934 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:51:52.178942 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:51:52.178950 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:51:52.178958 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:51:52.178967 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:51:52.178975 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:51:52.178983 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:51:52.178992 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:51:52.179001 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:51:52.179009 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:51:52.179017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:51:52.179025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:51:52.179033 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:51:52.179041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:51:52.179050 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:51:52.179059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:51:52.179067 kernel: ACPI: bus type drm_connector registered
Jan 13 20:51:52.179075 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:51:52.179083 kernel: fuse: init (API version 7.39)
Jan 13 20:51:52.179090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:51:52.179098 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:51:52.179107 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:51:52.179115 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:51:52.179124 kernel: loop: module loaded
Jan 13 20:51:52.179132 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:51:52.179143 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:51:52.179151 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:51:52.179168 systemd-journald[1364]: Collecting audit messages is disabled.
Jan 13 20:51:52.179187 systemd-journald[1364]: Journal started
Jan 13 20:51:52.179204 systemd-journald[1364]: Runtime Journal (/run/log/journal/bfdf9e4dea284cd8bea620c90b41f2c2) is 8.0M, max 639.9M, 631.9M free.
Jan 13 20:51:51.084328 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:51:51.099369 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6.
Jan 13 20:51:51.099645 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:51:52.192296 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:51:52.214299 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:51:52.226382 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:51:52.255287 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:51:52.272286 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:51:52.272314 systemd[1]: Stopped verity-setup.service.
Jan 13 20:51:52.301294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:51:52.301315 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:51:52.318713 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:51:52.328385 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:51:52.338541 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:51:52.348515 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:51:52.358522 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:51:52.368530 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:51:52.378591 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:51:52.389580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:51:52.400618 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:51:52.400738 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:51:52.411688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:51:52.411838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:51:52.422952 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:51:52.423181 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:51:52.434205 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:51:52.434607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:51:52.446186 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:51:52.446604 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:51:52.457147 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:51:52.457541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:51:52.468179 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:51:52.479159 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:51:52.491147 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:51:52.504203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:51:52.541423 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:51:52.572617 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:51:52.585468 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:51:52.595466 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:51:52.595488 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:51:52.606136 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:51:52.626457 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:51:52.638594 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:51:52.648639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:51:52.650744 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:51:52.661058 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:51:52.672389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:51:52.673144 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:51:52.676846 systemd-journald[1364]: Time spent on flushing to /var/log/journal/bfdf9e4dea284cd8bea620c90b41f2c2 is 17.009ms for 1359 entries.
Jan 13 20:51:52.676846 systemd-journald[1364]: System Journal (/var/log/journal/bfdf9e4dea284cd8bea620c90b41f2c2) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:51:52.710255 systemd-journald[1364]: Received client request to flush runtime journal.
Jan 13 20:51:52.690393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:51:52.691072 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:51:52.701217 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:51:52.713111 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:51:52.725254 kernel: loop0: detected capacity change from 0 to 141000
Jan 13 20:51:52.728975 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:51:52.747569 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:51:52.752327 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:51:52.762446 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:51:52.773467 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:51:52.784490 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:51:52.801114 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:51:52.801288 kernel: loop1: detected capacity change from 0 to 138184
Jan 13 20:51:52.812459 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:51:52.822467 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:51:52.835569 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:51:52.859435 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:51:52.871256 kernel: loop2: detected capacity change from 0 to 210664
Jan 13 20:51:52.877038 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:51:52.889866 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:51:52.906517 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:51:52.917836 udevadm[1400]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:51:52.920564 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Jan 13 20:51:52.920574 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Jan 13 20:51:52.923109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:51:52.954254 kernel: loop3: detected capacity change from 0 to 8
Jan 13 20:51:52.989311 kernel: loop4: detected capacity change from 0 to 141000
Jan 13 20:51:53.012754 ldconfig[1390]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:51:53.013862 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:51:53.015251 kernel: loop5: detected capacity change from 0 to 138184
Jan 13 20:51:53.035302 kernel: loop6: detected capacity change from 0 to 210664
Jan 13 20:51:53.080197 (sd-merge)[1420]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Jan 13 20:51:53.080435 kernel: loop7: detected capacity change from 0 to 8
Jan 13 20:51:53.080452 (sd-merge)[1420]: Merged extensions into '/usr'.
Jan 13 20:51:53.083444 systemd[1]: Reloading requested from client PID 1395 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:51:53.083450 systemd[1]: Reloading...
Jan 13 20:51:53.111262 zram_generator::config[1445]: No configuration found.
Jan 13 20:51:53.179780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:51:53.218407 systemd[1]: Reloading finished in 134 ms.
Jan 13 20:51:53.245388 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:51:53.256608 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:51:53.282687 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:51:53.290318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:51:53.309435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:51:53.321917 systemd[1]: Reloading requested from client PID 1502 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:51:53.321924 systemd[1]: Reloading...
Jan 13 20:51:53.322585 systemd-tmpfiles[1503]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:51:53.322745 systemd-tmpfiles[1503]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:51:53.323220 systemd-tmpfiles[1503]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:51:53.323424 systemd-tmpfiles[1503]: ACLs are not supported, ignoring.
Jan 13 20:51:53.323457 systemd-tmpfiles[1503]: ACLs are not supported, ignoring.
Jan 13 20:51:53.326033 systemd-tmpfiles[1503]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:51:53.326038 systemd-tmpfiles[1503]: Skipping /boot
Jan 13 20:51:53.331197 systemd-tmpfiles[1503]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:51:53.331201 systemd-tmpfiles[1503]: Skipping /boot
Jan 13 20:51:53.335234 systemd-udevd[1504]: Using default interface naming scheme 'v255'.
Jan 13 20:51:53.351320 zram_generator::config[1531]: No configuration found.
Jan 13 20:51:53.383266 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 44 scanned by (udev-worker) (1618)
Jan 13 20:51:53.391258 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Jan 13 20:51:53.391303 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:51:53.391333 kernel: IPMI message handler: version 39.2
Jan 13 20:51:53.418305 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 20:51:53.418360 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 20:51:53.424728 kernel: ipmi device interface
Jan 13 20:51:53.424791 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:51:53.429255 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Jan 13 20:51:53.435401 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Jan 13 20:51:53.460638 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Jan 13 20:51:53.460753 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Jan 13 20:51:53.460876 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Jan 13 20:51:53.451263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:51:53.476533 kernel: ipmi_si: IPMI System Interface driver
Jan 13 20:51:53.476591 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Jan 13 20:51:53.497726 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Jan 13 20:51:53.497748 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Jan 13 20:51:53.497765 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Jan 13 20:51:53.525552 kernel: iTCO_vendor_support: vendor-support=0
Jan 13 20:51:53.525567 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Jan 13 20:51:53.525654 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Jan 13 20:51:53.525761 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Jan 13 20:51:53.525771 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Jan 13 20:51:53.532911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jan 13 20:51:53.546492 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Jan 13 20:51:53.547057 systemd[1]: Reloading finished in 224 ms.
Jan 13 20:51:53.562014 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Jan 13 20:51:53.569258 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jan 13 20:51:53.569080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:51:53.587254 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Jan 13 20:51:53.615021 kernel: intel_rapl_common: Found RAPL domain package
Jan 13 20:51:53.615072 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Jan 13 20:51:53.615169 kernel: intel_rapl_common: Found RAPL domain core
Jan 13 20:51:53.626017 kernel: intel_rapl_common: Found RAPL domain dram
Jan 13 20:51:53.627535 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:51:53.647078 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:51:53.668193 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:51:53.677431 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:51:53.686148 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:51:53.694255 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Jan 13 20:51:53.695899 augenrules[1704]: No rules
Jan 13 20:51:53.702285 kernel: ipmi_ssif: IPMI SSIF Interface driver
Jan 13 20:51:53.708464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:51:53.727828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:51:53.738869 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:51:53.749884 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:51:53.761820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:51:53.771417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:51:53.772010 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:51:53.783989 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:51:53.796214 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:51:53.797175 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:51:53.798051 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:51:53.822933 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:51:53.834957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:51:53.844334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:51:53.844875 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:51:53.855488 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:51:53.855598 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:51:53.855787 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:51:53.855934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:51:53.856018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:51:53.856170 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:51:53.856255 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:51:53.856405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:51:53.856484 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:51:53.856634 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:51:53.856713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:51:53.856859 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:51:53.857013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:51:53.862082 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:51:53.862125 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:51:53.862169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:51:53.862815 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:51:53.863668 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:51:53.863699 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:51:53.863916 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:51:53.869491 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:51:53.870919 lvm[1732]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:51:53.884320 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:51:53.905224 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:51:53.911640 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:51:53.930528 systemd-resolved[1717]: Positive Trust Anchors:
Jan 13 20:51:53.930533 systemd-resolved[1717]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:51:53.930559 systemd-resolved[1717]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:51:53.933057 systemd-resolved[1717]: Using system hostname 'ci-4186.1.0-a-3c6cffff8a'.
Jan 13 20:51:53.937167 systemd-networkd[1716]: lo: Link UP
Jan 13 20:51:53.937170 systemd-networkd[1716]: lo: Gained carrier
Jan 13 20:51:53.939665 systemd-networkd[1716]: bond0: netdev ready
Jan 13 20:51:53.940710 systemd-networkd[1716]: Enumeration completed
Jan 13 20:51:53.947438 systemd-networkd[1716]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:79:3d:90.network.
Jan 13 20:51:54.005334 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:51:54.007223 lvm[1752]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:51:54.017368 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:51:54.029524 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:51:54.040310 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:51:54.051463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:54.063404 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:51:54.075493 systemd[1]: Reached target network.target - Network.
Jan 13 20:51:54.083285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:51:54.094285 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:51:54.103337 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:51:54.114300 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:51:54.125286 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:51:54.136277 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:51:54.136292 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:51:54.144277 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:51:54.154359 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:51:54.164339 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:51:54.175276 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:51:54.183837 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:51:54.193925 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:51:54.203649 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:51:54.214010 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:51:54.225544 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:51:54.235350 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:51:54.245356 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:51:54.253352 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:51:54.253368 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:51:54.261318 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:51:54.272031 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:51:54.289608 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:51:54.297858 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:51:54.302152 coreos-metadata[1759]: Jan 13 20:51:54.302 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 20:51:54.303089 coreos-metadata[1759]: Jan 13 20:51:54.303 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Jan 13 20:51:54.307977 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:51:54.308182 dbus-daemon[1760]: [system] SELinux support is enabled
Jan 13 20:51:54.309695 jq[1763]: false
Jan 13 20:51:54.317367 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:51:54.318004 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:51:54.325355 extend-filesystems[1765]: Found loop4
Jan 13 20:51:54.325355 extend-filesystems[1765]: Found loop5
Jan 13 20:51:54.350905 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Jan 13 20:51:54.350930 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 44 scanned by (udev-worker) (1655)
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found loop6
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found loop7
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sda
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sdb
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sdb1
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sdb2
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sdb3
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found usr
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sdb4
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sdb6
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sdb7
Jan 13 20:51:54.350944 extend-filesystems[1765]: Found sdb9
Jan 13 20:51:54.350944 extend-filesystems[1765]: Checking size of /dev/sdb9
Jan 13 20:51:54.350944 extend-filesystems[1765]: Resized partition /dev/sdb9
Jan 13 20:51:54.328976 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:51:54.535377 extend-filesystems[1773]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:51:54.365980 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:51:54.385948 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:51:54.402638 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:51:54.425115 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Jan 13 20:51:54.431615 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:51:54.552707 update_engine[1790]: I20250113 20:51:54.473125 1790 main.cc:92] Flatcar Update Engine starting
Jan 13 20:51:54.552707 update_engine[1790]: I20250113 20:51:54.473906 1790 update_check_scheduler.cc:74] Next update check in 7m56s
Jan 13 20:51:54.431940 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:51:54.558453 jq[1791]: true
Jan 13 20:51:54.446876 systemd-logind[1785]: Watching system buttons on /dev/input/event3 (Power Button)
Jan 13 20:51:54.446888 systemd-logind[1785]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jan 13 20:51:54.446900 systemd-logind[1785]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Jan 13 20:51:54.447013 systemd-logind[1785]: New seat seat0.
Jan 13 20:51:54.464337 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:51:54.481615 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:51:54.499872 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:51:54.526436 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:51:54.526529 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:51:54.526681 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:51:54.526839 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:51:54.558638 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:51:54.558739 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:51:54.572290 (ntainerd)[1796]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:51:54.573722 jq[1795]: true
Jan 13 20:51:54.575938 tar[1793]: linux-amd64/helm
Jan 13 20:51:54.576240 dbus-daemon[1760]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 20:51:54.579858 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Jan 13 20:51:54.579954 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Jan 13 20:51:54.588730 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:51:54.598812 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:51:54.598913 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:51:54.610346 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:51:54.610450 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:51:54.625521 bash[1823]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:51:54.638434 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:51:54.650501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:51:54.656093 locksmithd[1824]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:51:54.662803 systemd[1]: Starting sshkeys.service...
Jan 13 20:51:54.675017 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:51:54.687218 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:51:54.709548 coreos-metadata[1836]: Jan 13 20:51:54.709 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 20:51:54.710281 coreos-metadata[1836]: Jan 13 20:51:54.710 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jan 13 20:51:54.741443 containerd[1796]: time="2025-01-13T20:51:54.741401653Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:51:54.754638 containerd[1796]: time="2025-01-13T20:51:54.754608157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755482 containerd[1796]: time="2025-01-13T20:51:54.755437209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755482 containerd[1796]: time="2025-01-13T20:51:54.755451962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:51:54.755482 containerd[1796]: time="2025-01-13T20:51:54.755464618Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:51:54.755599 containerd[1796]: time="2025-01-13T20:51:54.755550381Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:51:54.755599 containerd[1796]: time="2025-01-13T20:51:54.755564782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755639 containerd[1796]: time="2025-01-13T20:51:54.755598036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755639 containerd[1796]: time="2025-01-13T20:51:54.755606345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755748 containerd[1796]: time="2025-01-13T20:51:54.755708208Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755748 containerd[1796]: time="2025-01-13T20:51:54.755722422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755748 containerd[1796]: time="2025-01-13T20:51:54.755730753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755748 containerd[1796]: time="2025-01-13T20:51:54.755735900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755813 containerd[1796]: time="2025-01-13T20:51:54.755781774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:51:54.755946 containerd[1796]: time="2025-01-13T20:51:54.755913961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:51:54.756012 containerd[1796]: time="2025-01-13T20:51:54.755971484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:51:54.756012 containerd[1796]: time="2025-01-13T20:51:54.755979939Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:51:54.756043 containerd[1796]: time="2025-01-13T20:51:54.756025266Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:51:54.756061 containerd[1796]: time="2025-01-13T20:51:54.756056069Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:51:54.765578 containerd[1796]: time="2025-01-13T20:51:54.765538142Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:51:54.765578 containerd[1796]: time="2025-01-13T20:51:54.765562568Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:51:54.765578 containerd[1796]: time="2025-01-13T20:51:54.765576988Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:51:54.765639 containerd[1796]: time="2025-01-13T20:51:54.765586897Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:51:54.765639 containerd[1796]: time="2025-01-13T20:51:54.765594622Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:51:54.765692 containerd[1796]: time="2025-01-13T20:51:54.765674782Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:51:54.765823 containerd[1796]: time="2025-01-13T20:51:54.765812951Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 13 20:51:54.765913 containerd[1796]: time="2025-01-13T20:51:54.765873440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:51:54.765913 containerd[1796]: time="2025-01-13T20:51:54.765886242Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:51:54.765913 containerd[1796]: time="2025-01-13T20:51:54.765894711Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:51:54.765913 containerd[1796]: time="2025-01-13T20:51:54.765902966Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:51:54.765913 containerd[1796]: time="2025-01-13T20:51:54.765910137Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765916789Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765924051Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765940630Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765949754Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765956554Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765962632Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765974617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765984962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.765995 containerd[1796]: time="2025-01-13T20:51:54.765992006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.765998845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766005612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766012836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766019307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766029589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766037603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766045753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766052917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766059281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766066174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766077659Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766090879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766098328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766110 containerd[1796]: time="2025-01-13T20:51:54.766103845Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:51:54.766312 containerd[1796]: time="2025-01-13T20:51:54.766131748Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:51:54.766312 containerd[1796]: time="2025-01-13T20:51:54.766144538Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:51:54.766312 containerd[1796]: time="2025-01-13T20:51:54.766150777Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 13 20:51:54.766312 containerd[1796]: time="2025-01-13T20:51:54.766157027Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:51:54.766312 containerd[1796]: time="2025-01-13T20:51:54.766162098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766312 containerd[1796]: time="2025-01-13T20:51:54.766172689Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:51:54.766312 containerd[1796]: time="2025-01-13T20:51:54.766179099Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:51:54.766312 containerd[1796]: time="2025-01-13T20:51:54.766185138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:51:54.766419 containerd[1796]: time="2025-01-13T20:51:54.766370027Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:51:54.766419 containerd[1796]: time="2025-01-13T20:51:54.766401207Z" level=info msg="Connect containerd service" Jan 13 20:51:54.766526 containerd[1796]: time="2025-01-13T20:51:54.766419527Z" level=info msg="using legacy CRI server" Jan 13 20:51:54.766526 containerd[1796]: time="2025-01-13T20:51:54.766424266Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:51:54.766888 containerd[1796]: 
time="2025-01-13T20:51:54.766846250Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:51:54.767197 containerd[1796]: time="2025-01-13T20:51:54.767162645Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:51:54.767308 containerd[1796]: time="2025-01-13T20:51:54.767256750Z" level=info msg="Start subscribing containerd event" Jan 13 20:51:54.767308 containerd[1796]: time="2025-01-13T20:51:54.767289855Z" level=info msg="Start recovering state" Jan 13 20:51:54.767349 containerd[1796]: time="2025-01-13T20:51:54.767329580Z" level=info msg="Start event monitor" Jan 13 20:51:54.767349 containerd[1796]: time="2025-01-13T20:51:54.767333947Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:51:54.767349 containerd[1796]: time="2025-01-13T20:51:54.767339674Z" level=info msg="Start snapshots syncer" Jan 13 20:51:54.767397 containerd[1796]: time="2025-01-13T20:51:54.767352987Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:51:54.767397 containerd[1796]: time="2025-01-13T20:51:54.767357323Z" level=info msg="Start streaming server" Jan 13 20:51:54.767397 containerd[1796]: time="2025-01-13T20:51:54.767364060Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:51:54.767436 containerd[1796]: time="2025-01-13T20:51:54.767397723Z" level=info msg="containerd successfully booted in 0.026436s" Jan 13 20:51:54.767456 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 13 20:51:54.801272 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 13 20:51:54.813278 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jan 13 20:51:54.815298 sshd_keygen[1788]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:51:54.817621 systemd-networkd[1716]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:79:3d:91.network. Jan 13 20:51:54.827634 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:51:54.831413 tar[1793]: linux-amd64/LICENSE Jan 13 20:51:54.831449 tar[1793]: linux-amd64/README.md Jan 13 20:51:54.864013 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:51:54.868255 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Jan 13 20:51:54.876581 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:51:54.886705 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:51:54.886809 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:51:54.892946 extend-filesystems[1773]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Jan 13 20:51:54.892946 extend-filesystems[1773]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 13 20:51:54.892946 extend-filesystems[1773]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Jan 13 20:51:54.927547 extend-filesystems[1765]: Resized filesystem in /dev/sdb9 Jan 13 20:51:54.896666 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:51:54.896748 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:51:54.967731 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:51:54.978779 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:51:54.991989 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:51:55.001423 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. 
Jan 13 20:51:55.015305 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 13 20:51:55.015387 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:51:55.028300 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jan 13 20:51:55.028300 systemd-networkd[1716]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 13 20:51:55.030187 systemd-networkd[1716]: enp1s0f0np0: Link UP Jan 13 20:51:55.030697 systemd-networkd[1716]: enp1s0f0np0: Gained carrier Jan 13 20:51:55.040307 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 13 20:51:55.058107 systemd-networkd[1716]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:79:3d:90.network. Jan 13 20:51:55.058559 systemd-networkd[1716]: enp1s0f1np1: Link UP Jan 13 20:51:55.058971 systemd-networkd[1716]: enp1s0f1np1: Gained carrier Jan 13 20:51:55.078998 systemd-networkd[1716]: bond0: Link UP Jan 13 20:51:55.080024 systemd-networkd[1716]: bond0: Gained carrier Jan 13 20:51:55.080871 systemd-timesyncd[1718]: Network configuration changed, trying to establish connection. Jan 13 20:51:55.082775 systemd-timesyncd[1718]: Network configuration changed, trying to establish connection. Jan 13 20:51:55.084060 systemd-timesyncd[1718]: Network configuration changed, trying to establish connection. Jan 13 20:51:55.084650 systemd-timesyncd[1718]: Network configuration changed, trying to establish connection. Jan 13 20:51:55.151303 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Jan 13 20:51:55.151322 kernel: bond0: active interface up! 
Jan 13 20:51:55.267305 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 13 20:51:55.303241 coreos-metadata[1759]: Jan 13 20:51:55.303 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jan 13 20:51:55.710405 coreos-metadata[1836]: Jan 13 20:51:55.710 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jan 13 20:51:56.601695 systemd-timesyncd[1718]: Network configuration changed, trying to establish connection. Jan 13 20:51:56.793494 systemd-networkd[1716]: bond0: Gained IPv6LL Jan 13 20:51:56.794290 systemd-timesyncd[1718]: Network configuration changed, trying to establish connection. Jan 13 20:51:56.798352 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:51:56.812520 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:51:56.838480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:51:56.848962 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:51:56.866874 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:51:57.534209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:51:57.546795 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:51:58.060509 kubelet[1895]: E0113 20:51:58.060454 1895 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:51:58.061645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:51:58.061721 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:51:58.493481 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:51:58.509587 systemd[1]: Started sshd@0-147.28.180.137:22-147.75.109.163:43800.service - OpenSSH per-connection server daemon (147.75.109.163:43800). Jan 13 20:51:58.552854 sshd[1914]: Accepted publickey for core from 147.75.109.163 port 43800 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:51:58.553934 sshd-session[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:51:58.559989 systemd-logind[1785]: New session 1 of user core. Jan 13 20:51:58.560932 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:51:58.578494 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:51:58.592503 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:51:58.626677 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:51:58.637467 (systemd)[1918]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:51:58.681536 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jan 13 20:51:58.681703 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jan 13 20:51:58.719294 systemd[1918]: Queued start job for default target default.target. Jan 13 20:51:58.739074 systemd[1918]: Created slice app.slice - User Application Slice. Jan 13 20:51:58.739097 systemd[1918]: Reached target paths.target - Paths. Jan 13 20:51:58.739111 systemd[1918]: Reached target timers.target - Timers. Jan 13 20:51:58.739916 systemd[1918]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:51:58.746079 systemd[1918]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:51:58.746118 systemd[1918]: Reached target sockets.target - Sockets. Jan 13 20:51:58.746133 systemd[1918]: Reached target basic.target - Basic System. 
Jan 13 20:51:58.746164 systemd[1918]: Reached target default.target - Main User Target. Jan 13 20:51:58.746188 systemd[1918]: Startup finished in 104ms. Jan 13 20:51:58.746208 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:51:58.758157 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:51:58.826007 systemd[1]: Started sshd@1-147.28.180.137:22-147.75.109.163:34304.service - OpenSSH per-connection server daemon (147.75.109.163:34304). Jan 13 20:51:58.863763 sshd[1931]: Accepted publickey for core from 147.75.109.163 port 34304 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:51:58.864473 sshd-session[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:51:58.866702 systemd-logind[1785]: New session 2 of user core. Jan 13 20:51:58.875453 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:51:58.932183 sshd[1933]: Connection closed by 147.75.109.163 port 34304 Jan 13 20:51:58.932322 sshd-session[1931]: pam_unix(sshd:session): session closed for user core Jan 13 20:51:58.949877 systemd[1]: sshd@1-147.28.180.137:22-147.75.109.163:34304.service: Deactivated successfully. Jan 13 20:51:58.950628 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:51:58.951220 systemd-logind[1785]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:51:58.951928 systemd[1]: Started sshd@2-147.28.180.137:22-147.75.109.163:34316.service - OpenSSH per-connection server daemon (147.75.109.163:34316). Jan 13 20:51:58.964025 systemd-logind[1785]: Removed session 2. Jan 13 20:51:58.989431 sshd[1938]: Accepted publickey for core from 147.75.109.163 port 34316 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:51:58.990027 sshd-session[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:51:58.992517 systemd-logind[1785]: New session 3 of user core. 
Jan 13 20:51:59.000382 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:51:59.057157 sshd[1941]: Connection closed by 147.75.109.163 port 34316 Jan 13 20:51:59.057286 sshd-session[1938]: pam_unix(sshd:session): session closed for user core Jan 13 20:51:59.058842 systemd[1]: sshd@2-147.28.180.137:22-147.75.109.163:34316.service: Deactivated successfully. Jan 13 20:51:59.059659 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:51:59.060046 systemd-logind[1785]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:51:59.060458 systemd-logind[1785]: Removed session 3. Jan 13 20:51:59.903885 coreos-metadata[1836]: Jan 13 20:51:59.903 INFO Fetch successful Jan 13 20:52:00.049886 unknown[1836]: wrote ssh authorized keys file for user: core Jan 13 20:52:00.054739 agetty[1877]: failed to open credentials directory Jan 13 20:52:00.055196 agetty[1878]: failed to open credentials directory Jan 13 20:52:00.070273 login[1878]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:52:00.073034 login[1877]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:52:00.079726 coreos-metadata[1759]: Jan 13 20:52:00.079 INFO Fetch successful Jan 13 20:52:00.082045 systemd-logind[1785]: New session 4 of user core. Jan 13 20:52:00.083867 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:52:00.086125 systemd-logind[1785]: New session 5 of user core. Jan 13 20:52:00.086996 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:52:00.090022 update-ssh-keys[1949]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:52:00.090720 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:52:00.091864 systemd[1]: Finished sshkeys.service. Jan 13 20:52:00.133706 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 13 20:52:00.134956 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 13 20:52:00.492983 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 13 20:52:00.495719 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:52:00.496272 systemd[1]: Startup finished in 2.678s (kernel) + 20.860s (initrd) + 9.873s (userspace) = 33.411s. Jan 13 20:52:08.205893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:52:08.226583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:52:08.443713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:52:08.446768 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:52:08.479749 kubelet[1992]: E0113 20:52:08.479599 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:52:08.481771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:52:08.481854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:52:09.074342 systemd[1]: Started sshd@3-147.28.180.137:22-147.75.109.163:36624.service - OpenSSH per-connection server daemon (147.75.109.163:36624). Jan 13 20:52:09.103672 sshd[2010]: Accepted publickey for core from 147.75.109.163 port 36624 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:52:09.104276 sshd-session[2010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:52:09.106628 systemd-logind[1785]: New session 6 of user core. 
Jan 13 20:52:09.121527 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:52:09.172736 sshd[2012]: Connection closed by 147.75.109.163 port 36624 Jan 13 20:52:09.172902 sshd-session[2010]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:09.188847 systemd[1]: sshd@3-147.28.180.137:22-147.75.109.163:36624.service: Deactivated successfully. Jan 13 20:52:09.192714 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:52:09.196311 systemd-logind[1785]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:52:09.212615 systemd[1]: Started sshd@4-147.28.180.137:22-147.75.109.163:36636.service - OpenSSH per-connection server daemon (147.75.109.163:36636). Jan 13 20:52:09.213243 systemd-logind[1785]: Removed session 6. Jan 13 20:52:09.239694 sshd[2017]: Accepted publickey for core from 147.75.109.163 port 36636 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:52:09.240307 sshd-session[2017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:52:09.242861 systemd-logind[1785]: New session 7 of user core. Jan 13 20:52:09.268660 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:52:09.319371 sshd[2019]: Connection closed by 147.75.109.163 port 36636 Jan 13 20:52:09.319529 sshd-session[2017]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:09.329879 systemd[1]: sshd@4-147.28.180.137:22-147.75.109.163:36636.service: Deactivated successfully. Jan 13 20:52:09.330675 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:52:09.331478 systemd-logind[1785]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:52:09.332172 systemd[1]: Started sshd@5-147.28.180.137:22-147.75.109.163:36652.service - OpenSSH per-connection server daemon (147.75.109.163:36652). Jan 13 20:52:09.332732 systemd-logind[1785]: Removed session 7. 
Jan 13 20:52:09.370050 sshd[2024]: Accepted publickey for core from 147.75.109.163 port 36652 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ
Jan 13 20:52:09.370663 sshd-session[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:52:09.372959 systemd-logind[1785]: New session 8 of user core.
Jan 13 20:52:09.392665 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:52:09.456359 sshd[2026]: Connection closed by 147.75.109.163 port 36652
Jan 13 20:52:09.457211 sshd-session[2024]: pam_unix(sshd:session): session closed for user core
Jan 13 20:52:09.475809 systemd[1]: sshd@5-147.28.180.137:22-147.75.109.163:36652.service: Deactivated successfully.
Jan 13 20:52:09.479734 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:52:09.483156 systemd-logind[1785]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:52:09.502320 systemd[1]: Started sshd@6-147.28.180.137:22-147.75.109.163:36664.service - OpenSSH per-connection server daemon (147.75.109.163:36664).
Jan 13 20:52:09.505215 systemd-logind[1785]: Removed session 8.
Jan 13 20:52:09.564815 sshd[2031]: Accepted publickey for core from 147.75.109.163 port 36664 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ
Jan 13 20:52:09.565481 sshd-session[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:52:09.568089 systemd-logind[1785]: New session 9 of user core.
Jan 13 20:52:09.585512 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:52:09.642570 sudo[2034]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:52:09.642717 sudo[2034]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:52:09.653999 sudo[2034]: pam_unix(sudo:session): session closed for user root
Jan 13 20:52:09.654783 sshd[2033]: Connection closed by 147.75.109.163 port 36664
Jan 13 20:52:09.654968 sshd-session[2031]: pam_unix(sshd:session): session closed for user core
Jan 13 20:52:09.670285 systemd[1]: sshd@6-147.28.180.137:22-147.75.109.163:36664.service: Deactivated successfully.
Jan 13 20:52:09.671272 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:52:09.672202 systemd-logind[1785]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:52:09.673058 systemd[1]: Started sshd@7-147.28.180.137:22-147.75.109.163:36672.service - OpenSSH per-connection server daemon (147.75.109.163:36672).
Jan 13 20:52:09.673908 systemd-logind[1785]: Removed session 9.
Jan 13 20:52:09.723295 sshd[2039]: Accepted publickey for core from 147.75.109.163 port 36672 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ
Jan 13 20:52:09.724314 sshd-session[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:52:09.728108 systemd-logind[1785]: New session 10 of user core.
Jan 13 20:52:09.746576 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:52:09.806183 sudo[2043]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:52:09.806336 sudo[2043]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:52:09.808301 sudo[2043]: pam_unix(sudo:session): session closed for user root
Jan 13 20:52:09.810910 sudo[2042]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:52:09.811053 sudo[2042]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:52:09.833818 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:52:09.863816 augenrules[2065]: No rules
Jan 13 20:52:09.864986 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:52:09.865367 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:52:09.867278 sudo[2042]: pam_unix(sudo:session): session closed for user root
Jan 13 20:52:09.869856 sshd[2041]: Connection closed by 147.75.109.163 port 36672
Jan 13 20:52:09.870749 sshd-session[2039]: pam_unix(sshd:session): session closed for user core
Jan 13 20:52:09.893370 systemd[1]: sshd@7-147.28.180.137:22-147.75.109.163:36672.service: Deactivated successfully.
Jan 13 20:52:09.897091 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:52:09.900553 systemd-logind[1785]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:52:09.921038 systemd[1]: Started sshd@8-147.28.180.137:22-147.75.109.163:36678.service - OpenSSH per-connection server daemon (147.75.109.163:36678).
Jan 13 20:52:09.923352 systemd-logind[1785]: Removed session 10.
Jan 13 20:52:09.983671 sshd[2073]: Accepted publickey for core from 147.75.109.163 port 36678 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ
Jan 13 20:52:09.984347 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:52:09.987142 systemd-logind[1785]: New session 11 of user core.
Jan 13 20:52:10.014587 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:52:10.070134 sudo[2076]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:52:10.070822 sudo[2076]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:52:10.364545 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:52:10.364621 (dockerd)[2103]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:52:10.600806 dockerd[2103]: time="2025-01-13T20:52:10.600749573Z" level=info msg="Starting up"
Jan 13 20:52:10.664374 dockerd[2103]: time="2025-01-13T20:52:10.664281045Z" level=info msg="Loading containers: start."
Jan 13 20:52:10.784292 kernel: Initializing XFRM netlink socket
Jan 13 20:52:10.800886 systemd-timesyncd[1718]: Network configuration changed, trying to establish connection.
Jan 13 20:52:11.235865 systemd-resolved[1717]: Clock change detected. Flushing caches.
Jan 13 20:52:11.235947 systemd-timesyncd[1718]: Contacted time server [2603:c024:c005:a600:efb6:d213:cad8:251d]:123 (2.flatcar.pool.ntp.org).
Jan 13 20:52:11.235972 systemd-timesyncd[1718]: Initial clock synchronization to Mon 2025-01-13 20:52:11.235844 UTC.
Jan 13 20:52:11.287245 systemd-networkd[1716]: docker0: Link UP
Jan 13 20:52:11.319064 dockerd[2103]: time="2025-01-13T20:52:11.319018542Z" level=info msg="Loading containers: done."
Jan 13 20:52:11.328117 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1097074554-merged.mount: Deactivated successfully.
Jan 13 20:52:11.328472 dockerd[2103]: time="2025-01-13T20:52:11.328455734Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:52:11.328506 dockerd[2103]: time="2025-01-13T20:52:11.328499023Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 13 20:52:11.328559 dockerd[2103]: time="2025-01-13T20:52:11.328549771Z" level=info msg="Daemon has completed initialization"
Jan 13 20:52:11.363624 dockerd[2103]: time="2025-01-13T20:52:11.363520668Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:52:11.363676 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 20:52:12.215656 containerd[1796]: time="2025-01-13T20:52:12.215633320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Jan 13 20:52:12.797520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327186142.mount: Deactivated successfully.
Jan 13 20:52:13.622807 containerd[1796]: time="2025-01-13T20:52:13.622782683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:13.623012 containerd[1796]: time="2025-01-13T20:52:13.622922104Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642"
Jan 13 20:52:13.623397 containerd[1796]: time="2025-01-13T20:52:13.623386205Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:13.624914 containerd[1796]: time="2025-01-13T20:52:13.624870946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:13.625574 containerd[1796]: time="2025-01-13T20:52:13.625531479Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.40985953s"
Jan 13 20:52:13.625574 containerd[1796]: time="2025-01-13T20:52:13.625548421Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Jan 13 20:52:13.636749 containerd[1796]: time="2025-01-13T20:52:13.636728536Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Jan 13 20:52:14.841983 containerd[1796]: time="2025-01-13T20:52:14.841933974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:14.842205 containerd[1796]: time="2025-01-13T20:52:14.842065369Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409"
Jan 13 20:52:14.843394 containerd[1796]: time="2025-01-13T20:52:14.843352160Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:14.844943 containerd[1796]: time="2025-01-13T20:52:14.844901878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:14.846055 containerd[1796]: time="2025-01-13T20:52:14.846013396Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.209264208s"
Jan 13 20:52:14.846055 containerd[1796]: time="2025-01-13T20:52:14.846029273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Jan 13 20:52:14.857427 containerd[1796]: time="2025-01-13T20:52:14.857408529Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Jan 13 20:52:15.698843 containerd[1796]: time="2025-01-13T20:52:15.698789290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:15.699015 containerd[1796]: time="2025-01-13T20:52:15.698963113Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035"
Jan 13 20:52:15.699458 containerd[1796]: time="2025-01-13T20:52:15.699418515Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:15.701226 containerd[1796]: time="2025-01-13T20:52:15.701186355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:15.701729 containerd[1796]: time="2025-01-13T20:52:15.701689399Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 844.260957ms"
Jan 13 20:52:15.701729 containerd[1796]: time="2025-01-13T20:52:15.701703403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Jan 13 20:52:15.712429 containerd[1796]: time="2025-01-13T20:52:15.712408500Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 13 20:52:16.520919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765100824.mount: Deactivated successfully.
Jan 13 20:52:16.689710 containerd[1796]: time="2025-01-13T20:52:16.689655780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:16.689916 containerd[1796]: time="2025-01-13T20:52:16.689853060Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470"
Jan 13 20:52:16.690192 containerd[1796]: time="2025-01-13T20:52:16.690178516Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:16.691204 containerd[1796]: time="2025-01-13T20:52:16.691174968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:16.691596 containerd[1796]: time="2025-01-13T20:52:16.691553657Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 979.124361ms"
Jan 13 20:52:16.691596 containerd[1796]: time="2025-01-13T20:52:16.691569235Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Jan 13 20:52:16.702041 containerd[1796]: time="2025-01-13T20:52:16.702019560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:52:17.240281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166872499.mount: Deactivated successfully.
Jan 13 20:52:17.729562 containerd[1796]: time="2025-01-13T20:52:17.729508308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:17.729775 containerd[1796]: time="2025-01-13T20:52:17.729725674Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 13 20:52:17.730120 containerd[1796]: time="2025-01-13T20:52:17.730083223Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:17.731692 containerd[1796]: time="2025-01-13T20:52:17.731652454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:17.732760 containerd[1796]: time="2025-01-13T20:52:17.732719300Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.030673386s"
Jan 13 20:52:17.732760 containerd[1796]: time="2025-01-13T20:52:17.732735296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 20:52:17.743225 containerd[1796]: time="2025-01-13T20:52:17.743205984Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 20:52:18.272778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327412037.mount: Deactivated successfully.
Jan 13 20:52:18.273862 containerd[1796]: time="2025-01-13T20:52:18.273844793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:18.274106 containerd[1796]: time="2025-01-13T20:52:18.274087238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 13 20:52:18.274472 containerd[1796]: time="2025-01-13T20:52:18.274432602Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:18.275651 containerd[1796]: time="2025-01-13T20:52:18.275612086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:18.276087 containerd[1796]: time="2025-01-13T20:52:18.276045768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 532.821433ms"
Jan 13 20:52:18.276087 containerd[1796]: time="2025-01-13T20:52:18.276061219Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 20:52:18.287477 containerd[1796]: time="2025-01-13T20:52:18.287457816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 13 20:52:18.796729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647994482.mount: Deactivated successfully.
Jan 13 20:52:19.136032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:52:19.145431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:52:19.438682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:52:19.441007 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:52:19.476204 kubelet[2568]: E0113 20:52:19.476174 2568 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:52:19.478345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:52:19.478496 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:52:19.952448 containerd[1796]: time="2025-01-13T20:52:19.952420903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:19.952713 containerd[1796]: time="2025-01-13T20:52:19.952623564Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jan 13 20:52:19.953125 containerd[1796]: time="2025-01-13T20:52:19.953111972Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:19.955190 containerd[1796]: time="2025-01-13T20:52:19.955143915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:19.955713 containerd[1796]: time="2025-01-13T20:52:19.955668343Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.668189873s"
Jan 13 20:52:19.955713 containerd[1796]: time="2025-01-13T20:52:19.955685107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 13 20:52:22.119699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:52:22.142545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:52:22.155612 systemd[1]: Reloading requested from client PID 2744 ('systemctl') (unit session-11.scope)...
Jan 13 20:52:22.155620 systemd[1]: Reloading...
Jan 13 20:52:22.201231 zram_generator::config[2783]: No configuration found.
Jan 13 20:52:22.272384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:52:22.333826 systemd[1]: Reloading finished in 177 ms.
Jan 13 20:52:22.383611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:52:22.385292 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:52:22.385942 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:52:22.386042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:52:22.386922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:52:22.587986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:52:22.593032 (kubelet)[2852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:52:22.615159 kubelet[2852]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:52:22.615159 kubelet[2852]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:52:22.615159 kubelet[2852]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:52:22.615446 kubelet[2852]: I0113 20:52:22.615159 2852 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:52:22.756484 kubelet[2852]: I0113 20:52:22.756445 2852 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 20:52:22.756484 kubelet[2852]: I0113 20:52:22.756456 2852 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:52:22.756573 kubelet[2852]: I0113 20:52:22.756567 2852 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 20:52:22.772501 kubelet[2852]: I0113 20:52:22.772489 2852 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:52:22.773253 kubelet[2852]: E0113 20:52:22.773242 2852 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.180.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.786716 kubelet[2852]: I0113 20:52:22.786690 2852 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:52:22.788213 kubelet[2852]: I0113 20:52:22.788157 2852 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:52:22.788317 kubelet[2852]: I0113 20:52:22.788172 2852 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-3c6cffff8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:52:22.788806 kubelet[2852]: I0113 20:52:22.788772 2852 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:52:22.788806 kubelet[2852]: I0113 20:52:22.788779 2852 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:52:22.788851 kubelet[2852]: I0113 20:52:22.788848 2852 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:52:22.789592 kubelet[2852]: I0113 20:52:22.789558 2852 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 20:52:22.789592 kubelet[2852]: I0113 20:52:22.789566 2852 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:52:22.789592 kubelet[2852]: I0113 20:52:22.789577 2852 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:52:22.789592 kubelet[2852]: I0113 20:52:22.789585 2852 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:52:22.789998 kubelet[2852]: W0113 20:52:22.789946 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.137:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.789998 kubelet[2852]: E0113 20:52:22.789976 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.137:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.792145 kubelet[2852]: W0113 20:52:22.792114 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-3c6cffff8a&limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.792194 kubelet[2852]: E0113 20:52:22.792152 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-3c6cffff8a&limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.793589 kubelet[2852]: I0113 20:52:22.793515 2852 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:52:22.795072 kubelet[2852]: I0113 20:52:22.795029 2852 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:52:22.795106 kubelet[2852]: W0113 20:52:22.795079 2852 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:52:22.795720 kubelet[2852]: I0113 20:52:22.795664 2852 server.go:1264] "Started kubelet"
Jan 13 20:52:22.795754 kubelet[2852]: I0113 20:52:22.795726 2852 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:52:22.795782 kubelet[2852]: I0113 20:52:22.795755 2852 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:52:22.795994 kubelet[2852]: I0113 20:52:22.795960 2852 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:52:22.796478 kubelet[2852]: I0113 20:52:22.796423 2852 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:52:22.796525 kubelet[2852]: I0113 20:52:22.796513 2852 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:52:22.796568 kubelet[2852]: E0113 20:52:22.796548 2852 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-3c6cffff8a\" not found"
Jan 13 20:52:22.796597 kubelet[2852]: I0113 20:52:22.796576 2852 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 20:52:22.796597 kubelet[2852]: I0113 20:52:22.796580 2852 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 20:52:22.796659 kubelet[2852]: I0113 20:52:22.796600 2852 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:52:22.796742 kubelet[2852]: W0113 20:52:22.796721 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.796785 kubelet[2852]: E0113 20:52:22.796750 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.796785 kubelet[2852]: E0113 20:52:22.796719 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-3c6cffff8a?timeout=10s\": dial tcp 147.28.180.137:6443: connect: connection refused" interval="200ms"
Jan 13 20:52:22.796948 kubelet[2852]: I0113 20:52:22.796940 2852 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:52:22.797000 kubelet[2852]: I0113 20:52:22.796991 2852 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:52:22.797473 kubelet[2852]: I0113 20:52:22.797463 2852 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:52:22.798704 kubelet[2852]: E0113 20:52:22.798691 2852 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:52:22.800910 kubelet[2852]: E0113 20:52:22.800791 2852 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.137:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.137:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-3c6cffff8a.181a5bcd1baf6d6d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-3c6cffff8a,UID:ci-4186.1.0-a-3c6cffff8a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-3c6cffff8a,},FirstTimestamp:2025-01-13 20:52:22.795652461 +0000 UTC m=+0.200749147,LastTimestamp:2025-01-13 20:52:22.795652461 +0000 UTC m=+0.200749147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-3c6cffff8a,}"
Jan 13 20:52:22.805339 kubelet[2852]: I0113 20:52:22.805319 2852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:52:22.805866 kubelet[2852]: I0113 20:52:22.805857 2852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:52:22.805902 kubelet[2852]: I0113 20:52:22.805874 2852 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:52:22.805902 kubelet[2852]: I0113 20:52:22.805883 2852 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 20:52:22.805936 kubelet[2852]: E0113 20:52:22.805905 2852 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:52:22.806252 kubelet[2852]: W0113 20:52:22.806169 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.806252 kubelet[2852]: E0113 20:52:22.806249 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused
Jan 13 20:52:22.808930 kubelet[2852]: I0113 20:52:22.808893 2852 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:52:22.808930 kubelet[2852]: I0113 20:52:22.808900 2852 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:52:22.808930 kubelet[2852]: I0113 20:52:22.808908 2852 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:52:22.809899 kubelet[2852]: I0113 20:52:22.809868 2852 policy_none.go:49] "None policy: Start"
Jan 13 20:52:22.810122 kubelet[2852]: I0113 20:52:22.810091 2852 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:52:22.810122 kubelet[2852]: I0113 20:52:22.810102 2852 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:52:22.814700 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:52:22.834082 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:52:22.836399 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:52:22.853994 kubelet[2852]: I0113 20:52:22.853950 2852 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:52:22.854168 kubelet[2852]: I0113 20:52:22.854106 2852 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:52:22.854231 kubelet[2852]: I0113 20:52:22.854226 2852 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:52:22.855133 kubelet[2852]: E0113 20:52:22.855083 2852 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-3c6cffff8a\" not found" Jan 13 20:52:22.901729 kubelet[2852]: I0113 20:52:22.901668 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:22.902502 kubelet[2852]: E0113 20:52:22.902405 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.137:6443/api/v1/nodes\": dial tcp 147.28.180.137:6443: connect: connection refused" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:22.906643 kubelet[2852]: I0113 20:52:22.906526 2852 topology_manager.go:215] "Topology Admit Handler" podUID="dd7f8b50e904faff3cd5c8b35f7b77ce" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:22.909958 kubelet[2852]: I0113 20:52:22.909870 2852 topology_manager.go:215] "Topology Admit Handler" podUID="9556bfc17e36c83f9946eb7b38b217e7" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:22.913726 kubelet[2852]: I0113 20:52:22.913633 2852 topology_manager.go:215] "Topology Admit Handler" 
podUID="79bcb5d4b58611a77d9417949f323b01" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:22.926823 systemd[1]: Created slice kubepods-burstable-poddd7f8b50e904faff3cd5c8b35f7b77ce.slice - libcontainer container kubepods-burstable-poddd7f8b50e904faff3cd5c8b35f7b77ce.slice. Jan 13 20:52:22.962586 systemd[1]: Created slice kubepods-burstable-pod9556bfc17e36c83f9946eb7b38b217e7.slice - libcontainer container kubepods-burstable-pod9556bfc17e36c83f9946eb7b38b217e7.slice. Jan 13 20:52:22.971438 systemd[1]: Created slice kubepods-burstable-pod79bcb5d4b58611a77d9417949f323b01.slice - libcontainer container kubepods-burstable-pod79bcb5d4b58611a77d9417949f323b01.slice. Jan 13 20:52:22.998300 kubelet[2852]: E0113 20:52:22.998145 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-3c6cffff8a?timeout=10s\": dial tcp 147.28.180.137:6443: connect: connection refused" interval="400ms" Jan 13 20:52:23.099232 kubelet[2852]: I0113 20:52:23.098942 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79bcb5d4b58611a77d9417949f323b01-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-3c6cffff8a\" (UID: \"79bcb5d4b58611a77d9417949f323b01\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.099232 kubelet[2852]: I0113 20:52:23.099043 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd7f8b50e904faff3cd5c8b35f7b77ce-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-3c6cffff8a\" (UID: \"dd7f8b50e904faff3cd5c8b35f7b77ce\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.099232 kubelet[2852]: I0113 20:52:23.099116 2852 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd7f8b50e904faff3cd5c8b35f7b77ce-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-3c6cffff8a\" (UID: \"dd7f8b50e904faff3cd5c8b35f7b77ce\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.099232 kubelet[2852]: I0113 20:52:23.099192 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd7f8b50e904faff3cd5c8b35f7b77ce-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-3c6cffff8a\" (UID: \"dd7f8b50e904faff3cd5c8b35f7b77ce\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.099752 kubelet[2852]: I0113 20:52:23.099252 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.099752 kubelet[2852]: I0113 20:52:23.099305 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.099752 kubelet[2852]: I0113 20:52:23.099353 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " 
pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.099752 kubelet[2852]: I0113 20:52:23.099401 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.099752 kubelet[2852]: I0113 20:52:23.099456 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.105278 kubelet[2852]: I0113 20:52:23.105248 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.105396 kubelet[2852]: E0113 20:52:23.105385 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.137:6443/api/v1/nodes\": dial tcp 147.28.180.137:6443: connect: connection refused" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.255492 containerd[1796]: time="2025-01-13T20:52:23.255362239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-3c6cffff8a,Uid:dd7f8b50e904faff3cd5c8b35f7b77ce,Namespace:kube-system,Attempt:0,}" Jan 13 20:52:23.268827 containerd[1796]: time="2025-01-13T20:52:23.268700360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-3c6cffff8a,Uid:9556bfc17e36c83f9946eb7b38b217e7,Namespace:kube-system,Attempt:0,}" Jan 13 20:52:23.277340 containerd[1796]: time="2025-01-13T20:52:23.277230102Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-3c6cffff8a,Uid:79bcb5d4b58611a77d9417949f323b01,Namespace:kube-system,Attempt:0,}" Jan 13 20:52:23.399656 kubelet[2852]: E0113 20:52:23.399521 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-3c6cffff8a?timeout=10s\": dial tcp 147.28.180.137:6443: connect: connection refused" interval="800ms" Jan 13 20:52:23.513951 kubelet[2852]: I0113 20:52:23.513871 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.514234 kubelet[2852]: E0113 20:52:23.514159 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.137:6443/api/v1/nodes\": dial tcp 147.28.180.137:6443: connect: connection refused" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:23.664910 kubelet[2852]: W0113 20:52:23.664795 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-3c6cffff8a&limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused Jan 13 20:52:23.664910 kubelet[2852]: E0113 20:52:23.664841 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-3c6cffff8a&limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused Jan 13 20:52:23.772907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458215452.mount: Deactivated successfully. 
Jan 13 20:52:23.774421 containerd[1796]: time="2025-01-13T20:52:23.774374154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:23.774659 containerd[1796]: time="2025-01-13T20:52:23.774627011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:52:23.775148 containerd[1796]: time="2025-01-13T20:52:23.775134062Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:23.775771 containerd[1796]: time="2025-01-13T20:52:23.775756763Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:23.775953 containerd[1796]: time="2025-01-13T20:52:23.775935236Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:52:23.776593 containerd[1796]: time="2025-01-13T20:52:23.776581813Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:23.776736 containerd[1796]: time="2025-01-13T20:52:23.776719589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:52:23.794723 containerd[1796]: time="2025-01-13T20:52:23.794684613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:23.795177 
containerd[1796]: time="2025-01-13T20:52:23.795115962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.205421ms" Jan 13 20:52:23.795573 containerd[1796]: time="2025-01-13T20:52:23.795536130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.968688ms" Jan 13 20:52:23.796435 containerd[1796]: time="2025-01-13T20:52:23.796386125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.983134ms" Jan 13 20:52:23.823258 kubelet[2852]: W0113 20:52:23.823197 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused Jan 13 20:52:23.823258 kubelet[2852]: E0113 20:52:23.823217 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.137:6443: connect: connection refused Jan 13 20:52:23.868600 containerd[1796]: time="2025-01-13T20:52:23.868538450Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:23.868706 containerd[1796]: time="2025-01-13T20:52:23.868427534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:23.868706 containerd[1796]: time="2025-01-13T20:52:23.868673124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:23.868706 containerd[1796]: time="2025-01-13T20:52:23.868685076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:23.868706 containerd[1796]: time="2025-01-13T20:52:23.868685966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:23.868816 containerd[1796]: time="2025-01-13T20:52:23.868711314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:23.868816 containerd[1796]: time="2025-01-13T20:52:23.868718857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:23.868816 containerd[1796]: time="2025-01-13T20:52:23.868759209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:23.868816 containerd[1796]: time="2025-01-13T20:52:23.868778072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:23.868816 containerd[1796]: time="2025-01-13T20:52:23.868778578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:23.868816 containerd[1796]: time="2025-01-13T20:52:23.868792052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:23.868952 containerd[1796]: time="2025-01-13T20:52:23.868845303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:23.899361 systemd[1]: Started cri-containerd-3283c94bf95495164edb6f7ee843565f61b4f9063c288f2978044dc3223ebcd9.scope - libcontainer container 3283c94bf95495164edb6f7ee843565f61b4f9063c288f2978044dc3223ebcd9. Jan 13 20:52:23.900312 systemd[1]: Started cri-containerd-4085b0a87afb326b57293596d72bd0c9a5ae1a1834e1aea841ddeecca732e7bf.scope - libcontainer container 4085b0a87afb326b57293596d72bd0c9a5ae1a1834e1aea841ddeecca732e7bf. Jan 13 20:52:23.901368 systemd[1]: Started cri-containerd-b8ad4f22cd371955757c51f8cf39fb8c15b723b3b503f5d0c4ce6e4b502fc6e6.scope - libcontainer container b8ad4f22cd371955757c51f8cf39fb8c15b723b3b503f5d0c4ce6e4b502fc6e6. 
Jan 13 20:52:23.926507 containerd[1796]: time="2025-01-13T20:52:23.926432379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-3c6cffff8a,Uid:79bcb5d4b58611a77d9417949f323b01,Namespace:kube-system,Attempt:0,} returns sandbox id \"3283c94bf95495164edb6f7ee843565f61b4f9063c288f2978044dc3223ebcd9\"" Jan 13 20:52:23.926601 containerd[1796]: time="2025-01-13T20:52:23.926547761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-3c6cffff8a,Uid:9556bfc17e36c83f9946eb7b38b217e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4085b0a87afb326b57293596d72bd0c9a5ae1a1834e1aea841ddeecca732e7bf\"" Jan 13 20:52:23.927243 containerd[1796]: time="2025-01-13T20:52:23.927227457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-3c6cffff8a,Uid:dd7f8b50e904faff3cd5c8b35f7b77ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8ad4f22cd371955757c51f8cf39fb8c15b723b3b503f5d0c4ce6e4b502fc6e6\"" Jan 13 20:52:23.928366 containerd[1796]: time="2025-01-13T20:52:23.928348467Z" level=info msg="CreateContainer within sandbox \"3283c94bf95495164edb6f7ee843565f61b4f9063c288f2978044dc3223ebcd9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:52:23.928742 containerd[1796]: time="2025-01-13T20:52:23.928605904Z" level=info msg="CreateContainer within sandbox \"4085b0a87afb326b57293596d72bd0c9a5ae1a1834e1aea841ddeecca732e7bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:52:23.928742 containerd[1796]: time="2025-01-13T20:52:23.928723401Z" level=info msg="CreateContainer within sandbox \"b8ad4f22cd371955757c51f8cf39fb8c15b723b3b503f5d0c4ce6e4b502fc6e6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:52:23.935407 containerd[1796]: time="2025-01-13T20:52:23.935366922Z" level=info msg="CreateContainer within sandbox 
\"4085b0a87afb326b57293596d72bd0c9a5ae1a1834e1aea841ddeecca732e7bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82049841c7e47655eb628e4210abef6df0b27ef0bc4798413e448159dc7b41ac\"" Jan 13 20:52:23.935626 containerd[1796]: time="2025-01-13T20:52:23.935611268Z" level=info msg="StartContainer for \"82049841c7e47655eb628e4210abef6df0b27ef0bc4798413e448159dc7b41ac\"" Jan 13 20:52:23.936265 containerd[1796]: time="2025-01-13T20:52:23.936252723Z" level=info msg="CreateContainer within sandbox \"3283c94bf95495164edb6f7ee843565f61b4f9063c288f2978044dc3223ebcd9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b530e6fc67edef8a77a92adfdbec4dce1545c29c9f53aae2d19c22aeaafb393c\"" Jan 13 20:52:23.936446 containerd[1796]: time="2025-01-13T20:52:23.936431511Z" level=info msg="StartContainer for \"b530e6fc67edef8a77a92adfdbec4dce1545c29c9f53aae2d19c22aeaafb393c\"" Jan 13 20:52:23.938907 containerd[1796]: time="2025-01-13T20:52:23.938887926Z" level=info msg="CreateContainer within sandbox \"b8ad4f22cd371955757c51f8cf39fb8c15b723b3b503f5d0c4ce6e4b502fc6e6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"49c65edc5971729d265162d0f087109045644b076a425ac099bf0c632b635c79\"" Jan 13 20:52:23.939187 containerd[1796]: time="2025-01-13T20:52:23.939152605Z" level=info msg="StartContainer for \"49c65edc5971729d265162d0f087109045644b076a425ac099bf0c632b635c79\"" Jan 13 20:52:23.963468 systemd[1]: Started cri-containerd-82049841c7e47655eb628e4210abef6df0b27ef0bc4798413e448159dc7b41ac.scope - libcontainer container 82049841c7e47655eb628e4210abef6df0b27ef0bc4798413e448159dc7b41ac. Jan 13 20:52:23.964078 systemd[1]: Started cri-containerd-b530e6fc67edef8a77a92adfdbec4dce1545c29c9f53aae2d19c22aeaafb393c.scope - libcontainer container b530e6fc67edef8a77a92adfdbec4dce1545c29c9f53aae2d19c22aeaafb393c. 
Jan 13 20:52:23.965654 systemd[1]: Started cri-containerd-49c65edc5971729d265162d0f087109045644b076a425ac099bf0c632b635c79.scope - libcontainer container 49c65edc5971729d265162d0f087109045644b076a425ac099bf0c632b635c79. Jan 13 20:52:23.987016 containerd[1796]: time="2025-01-13T20:52:23.986988795Z" level=info msg="StartContainer for \"b530e6fc67edef8a77a92adfdbec4dce1545c29c9f53aae2d19c22aeaafb393c\" returns successfully" Jan 13 20:52:23.987519 containerd[1796]: time="2025-01-13T20:52:23.987502334Z" level=info msg="StartContainer for \"82049841c7e47655eb628e4210abef6df0b27ef0bc4798413e448159dc7b41ac\" returns successfully" Jan 13 20:52:23.988043 containerd[1796]: time="2025-01-13T20:52:23.988030657Z" level=info msg="StartContainer for \"49c65edc5971729d265162d0f087109045644b076a425ac099bf0c632b635c79\" returns successfully" Jan 13 20:52:24.316139 kubelet[2852]: I0113 20:52:24.316067 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:24.669844 kubelet[2852]: E0113 20:52:24.669822 2852 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-3c6cffff8a\" not found" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:24.786562 kubelet[2852]: I0113 20:52:24.786478 2852 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:24.803012 kubelet[2852]: E0113 20:52:24.802966 2852 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-3c6cffff8a\" not found" Jan 13 20:52:24.903176 kubelet[2852]: E0113 20:52:24.903096 2852 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-3c6cffff8a\" not found" Jan 13 20:52:25.003772 kubelet[2852]: E0113 20:52:25.003543 2852 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-3c6cffff8a\" not found" Jan 13 20:52:25.104568 kubelet[2852]: E0113 
20:52:25.104468 2852 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-3c6cffff8a\" not found" Jan 13 20:52:25.205542 kubelet[2852]: E0113 20:52:25.205441 2852 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-3c6cffff8a\" not found" Jan 13 20:52:25.305794 kubelet[2852]: E0113 20:52:25.305565 2852 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-3c6cffff8a\" not found" Jan 13 20:52:25.791684 kubelet[2852]: I0113 20:52:25.791572 2852 apiserver.go:52] "Watching apiserver" Jan 13 20:52:25.797862 kubelet[2852]: I0113 20:52:25.797770 2852 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:52:25.841971 kubelet[2852]: W0113 20:52:25.841898 2852 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:52:27.060998 systemd[1]: Reloading requested from client PID 3172 ('systemctl') (unit session-11.scope)... Jan 13 20:52:27.061012 systemd[1]: Reloading... Jan 13 20:52:27.104168 zram_generator::config[3211]: No configuration found. Jan 13 20:52:27.170835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:52:27.240811 systemd[1]: Reloading finished in 179 ms. Jan 13 20:52:27.263351 kubelet[2852]: I0113 20:52:27.263249 2852 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:52:27.263355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:52:27.270271 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:52:27.270390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:52:27.286620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:52:27.496169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:52:27.498575 (kubelet)[3275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:52:27.526348 kubelet[3275]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:52:27.526348 kubelet[3275]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:52:27.526348 kubelet[3275]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:52:27.526586 kubelet[3275]: I0113 20:52:27.526372 3275 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:52:27.528921 kubelet[3275]: I0113 20:52:27.528882 3275 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:52:27.528921 kubelet[3275]: I0113 20:52:27.528893 3275 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:52:27.528990 kubelet[3275]: I0113 20:52:27.528985 3275 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:52:27.529926 kubelet[3275]: I0113 20:52:27.529895 3275 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 13 20:52:27.530619 kubelet[3275]: I0113 20:52:27.530589 3275 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:52:27.540482 kubelet[3275]: I0113 20:52:27.540467 3275 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:52:27.540601 kubelet[3275]: I0113 20:52:27.540584 3275 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:52:27.540697 kubelet[3275]: I0113 20:52:27.540603 3275 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-3c6cffff8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experiment
alMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:52:27.540747 kubelet[3275]: I0113 20:52:27.540706 3275 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:52:27.540747 kubelet[3275]: I0113 20:52:27.540713 3275 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:52:27.540747 kubelet[3275]: I0113 20:52:27.540735 3275 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:52:27.540795 kubelet[3275]: I0113 20:52:27.540785 3275 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:52:27.540795 kubelet[3275]: I0113 20:52:27.540793 3275 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:52:27.540827 kubelet[3275]: I0113 20:52:27.540805 3275 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:52:27.540827 kubelet[3275]: I0113 20:52:27.540813 3275 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:52:27.541278 kubelet[3275]: I0113 20:52:27.541247 3275 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:52:27.541428 kubelet[3275]: I0113 20:52:27.541420 3275 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:52:27.541803 kubelet[3275]: I0113 20:52:27.541778 3275 server.go:1264] "Started kubelet" Jan 13 20:52:27.541862 kubelet[3275]: I0113 20:52:27.541832 3275 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:52:27.541895 kubelet[3275]: I0113 20:52:27.541840 3275 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:52:27.541989 kubelet[3275]: I0113 20:52:27.541982 3275 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
Jan 13 20:52:27.542415 kubelet[3275]: I0113 20:52:27.542406 3275 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:52:27.542474 kubelet[3275]: I0113 20:52:27.542463 3275 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:52:27.542688 kubelet[3275]: I0113 20:52:27.542584 3275 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:52:27.542745 kubelet[3275]: I0113 20:52:27.542730 3275 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:52:27.542790 kubelet[3275]: I0113 20:52:27.542760 3275 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:52:27.543634 kubelet[3275]: E0113 20:52:27.543603 3275 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:52:27.544921 kubelet[3275]: I0113 20:52:27.544907 3275 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:52:27.544921 kubelet[3275]: I0113 20:52:27.544919 3275 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:52:27.545008 kubelet[3275]: I0113 20:52:27.544978 3275 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:52:27.548620 kubelet[3275]: I0113 20:52:27.548595 3275 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:52:27.549272 kubelet[3275]: I0113 20:52:27.549260 3275 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:52:27.549318 kubelet[3275]: I0113 20:52:27.549276 3275 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:52:27.549318 kubelet[3275]: I0113 20:52:27.549285 3275 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:52:27.549318 kubelet[3275]: E0113 20:52:27.549308 3275 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:52:27.558608 kubelet[3275]: I0113 20:52:27.558592 3275 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:52:27.558608 kubelet[3275]: I0113 20:52:27.558602 3275 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:52:27.558608 kubelet[3275]: I0113 20:52:27.558613 3275 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:52:27.558716 kubelet[3275]: I0113 20:52:27.558693 3275 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:52:27.558716 kubelet[3275]: I0113 20:52:27.558699 3275 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:52:27.558716 kubelet[3275]: I0113 20:52:27.558710 3275 policy_none.go:49] "None policy: Start" Jan 13 20:52:27.558995 kubelet[3275]: I0113 20:52:27.558957 3275 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:52:27.558995 kubelet[3275]: I0113 20:52:27.558968 3275 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:52:27.559062 kubelet[3275]: I0113 20:52:27.559056 3275 state_mem.go:75] "Updated machine memory state" Jan 13 20:52:27.561019 kubelet[3275]: I0113 20:52:27.560982 3275 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:52:27.561102 kubelet[3275]: I0113 20:52:27.561083 3275 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:52:27.561176 kubelet[3275]: I0113 20:52:27.561160 3275 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:52:27.649757 kubelet[3275]: I0113 20:52:27.649656 3275 topology_manager.go:215] "Topology Admit Handler" podUID="79bcb5d4b58611a77d9417949f323b01" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.650039 kubelet[3275]: I0113 20:52:27.649863 3275 topology_manager.go:215] "Topology Admit Handler" podUID="dd7f8b50e904faff3cd5c8b35f7b77ce" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.650039 kubelet[3275]: I0113 20:52:27.650018 3275 topology_manager.go:215] "Topology Admit Handler" podUID="9556bfc17e36c83f9946eb7b38b217e7" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.650308 kubelet[3275]: I0113 20:52:27.650078 3275 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.657138 kubelet[3275]: W0113 20:52:27.657052 3275 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:52:27.657769 kubelet[3275]: W0113 20:52:27.657720 3275 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:52:27.658632 kubelet[3275]: W0113 20:52:27.658592 3275 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:52:27.658808 kubelet[3275]: E0113 20:52:27.658705 3275 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-3c6cffff8a\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.661129 kubelet[3275]: I0113 20:52:27.661035 3275 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.0-a-3c6cffff8a" 
Jan 13 20:52:27.661352 kubelet[3275]: I0113 20:52:27.661217 3275 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845279 kubelet[3275]: I0113 20:52:27.845021 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79bcb5d4b58611a77d9417949f323b01-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-3c6cffff8a\" (UID: \"79bcb5d4b58611a77d9417949f323b01\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845279 kubelet[3275]: I0113 20:52:27.845118 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd7f8b50e904faff3cd5c8b35f7b77ce-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-3c6cffff8a\" (UID: \"dd7f8b50e904faff3cd5c8b35f7b77ce\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845279 kubelet[3275]: I0113 20:52:27.845209 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845279 kubelet[3275]: I0113 20:52:27.845271 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd7f8b50e904faff3cd5c8b35f7b77ce-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-3c6cffff8a\" (UID: \"dd7f8b50e904faff3cd5c8b35f7b77ce\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845792 kubelet[3275]: I0113 20:52:27.845321 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd7f8b50e904faff3cd5c8b35f7b77ce-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-3c6cffff8a\" (UID: \"dd7f8b50e904faff3cd5c8b35f7b77ce\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845792 kubelet[3275]: I0113 20:52:27.845374 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845792 kubelet[3275]: I0113 20:52:27.845421 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845792 kubelet[3275]: I0113 20:52:27.845485 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:27.845792 kubelet[3275]: I0113 20:52:27.845539 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9556bfc17e36c83f9946eb7b38b217e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-3c6cffff8a\" (UID: \"9556bfc17e36c83f9946eb7b38b217e7\") " 
pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:28.541043 kubelet[3275]: I0113 20:52:28.541001 3275 apiserver.go:52] "Watching apiserver" Jan 13 20:52:28.543429 kubelet[3275]: I0113 20:52:28.543418 3275 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:52:28.555620 kubelet[3275]: W0113 20:52:28.555602 3275 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:52:28.555728 kubelet[3275]: E0113 20:52:28.555646 3275 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-3c6cffff8a\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-3c6cffff8a" Jan 13 20:52:28.567262 kubelet[3275]: I0113 20:52:28.567221 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-3c6cffff8a" podStartSLOduration=1.567207021 podStartE2EDuration="1.567207021s" podCreationTimestamp="2025-01-13 20:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:52:28.567122374 +0000 UTC m=+1.066428977" watchObservedRunningTime="2025-01-13 20:52:28.567207021 +0000 UTC m=+1.066513622" Jan 13 20:52:28.576033 kubelet[3275]: I0113 20:52:28.576001 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-3c6cffff8a" podStartSLOduration=3.575990138 podStartE2EDuration="3.575990138s" podCreationTimestamp="2025-01-13 20:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:52:28.575946295 +0000 UTC m=+1.075252900" watchObservedRunningTime="2025-01-13 20:52:28.575990138 +0000 UTC m=+1.075296741" Jan 13 20:52:28.576149 kubelet[3275]: I0113 
20:52:28.576067 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-3c6cffff8a" podStartSLOduration=1.5760625419999998 podStartE2EDuration="1.576062542s" podCreationTimestamp="2025-01-13 20:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:52:28.571960113 +0000 UTC m=+1.071266717" watchObservedRunningTime="2025-01-13 20:52:28.576062542 +0000 UTC m=+1.075369142" Jan 13 20:52:31.596641 sudo[2076]: pam_unix(sudo:session): session closed for user root Jan 13 20:52:31.597323 sshd[2075]: Connection closed by 147.75.109.163 port 36678 Jan 13 20:52:31.597453 sshd-session[2073]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:31.599356 systemd[1]: sshd@8-147.28.180.137:22-147.75.109.163:36678.service: Deactivated successfully. Jan 13 20:52:31.600146 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:52:31.600276 systemd[1]: session-11.scope: Consumed 3.678s CPU time, 198.9M memory peak, 0B memory swap peak. Jan 13 20:52:31.600679 systemd-logind[1785]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:52:31.601147 systemd-logind[1785]: Removed session 11. Jan 13 20:52:40.525420 update_engine[1790]: I20250113 20:52:40.525269 1790 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:52:40.564167 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 44 scanned by (udev-worker) (3443) Jan 13 20:52:40.596165 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 44 scanned by (udev-worker) (3442) Jan 13 20:52:41.056881 kubelet[3275]: I0113 20:52:41.056768 3275 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:52:41.057862 containerd[1796]: time="2025-01-13T20:52:41.057479513Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:52:41.058528 kubelet[3275]: I0113 20:52:41.057971 3275 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:52:41.441218 kubelet[3275]: I0113 20:52:41.441144 3275 topology_manager.go:215] "Topology Admit Handler" podUID="531a3411-9fbb-4ce3-ad89-a6e9172ae8bc" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-jr889" Jan 13 20:52:41.452540 systemd[1]: Created slice kubepods-besteffort-pod531a3411_9fbb_4ce3_ad89_a6e9172ae8bc.slice - libcontainer container kubepods-besteffort-pod531a3411_9fbb_4ce3_ad89_a6e9172ae8bc.slice. Jan 13 20:52:41.453663 kubelet[3275]: I0113 20:52:41.453620 3275 topology_manager.go:215] "Topology Admit Handler" podUID="798a2961-8af4-4859-af10-8030268a237e" podNamespace="kube-system" podName="kube-proxy-rmr25" Jan 13 20:52:41.471806 systemd[1]: Created slice kubepods-besteffort-pod798a2961_8af4_4859_af10_8030268a237e.slice - libcontainer container kubepods-besteffort-pod798a2961_8af4_4859_af10_8030268a237e.slice. 
Jan 13 20:52:41.547669 kubelet[3275]: I0113 20:52:41.547537 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/531a3411-9fbb-4ce3-ad89-a6e9172ae8bc-var-lib-calico\") pod \"tigera-operator-7bc55997bb-jr889\" (UID: \"531a3411-9fbb-4ce3-ad89-a6e9172ae8bc\") " pod="tigera-operator/tigera-operator-7bc55997bb-jr889" Jan 13 20:52:41.547669 kubelet[3275]: I0113 20:52:41.547638 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/798a2961-8af4-4859-af10-8030268a237e-lib-modules\") pod \"kube-proxy-rmr25\" (UID: \"798a2961-8af4-4859-af10-8030268a237e\") " pod="kube-system/kube-proxy-rmr25" Jan 13 20:52:41.548048 kubelet[3275]: I0113 20:52:41.547696 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kj5\" (UniqueName: \"kubernetes.io/projected/798a2961-8af4-4859-af10-8030268a237e-kube-api-access-r7kj5\") pod \"kube-proxy-rmr25\" (UID: \"798a2961-8af4-4859-af10-8030268a237e\") " pod="kube-system/kube-proxy-rmr25" Jan 13 20:52:41.548048 kubelet[3275]: I0113 20:52:41.547751 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/798a2961-8af4-4859-af10-8030268a237e-kube-proxy\") pod \"kube-proxy-rmr25\" (UID: \"798a2961-8af4-4859-af10-8030268a237e\") " pod="kube-system/kube-proxy-rmr25" Jan 13 20:52:41.548048 kubelet[3275]: I0113 20:52:41.547802 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8p27\" (UniqueName: \"kubernetes.io/projected/531a3411-9fbb-4ce3-ad89-a6e9172ae8bc-kube-api-access-r8p27\") pod \"tigera-operator-7bc55997bb-jr889\" (UID: \"531a3411-9fbb-4ce3-ad89-a6e9172ae8bc\") " 
pod="tigera-operator/tigera-operator-7bc55997bb-jr889" Jan 13 20:52:41.548048 kubelet[3275]: I0113 20:52:41.547854 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/798a2961-8af4-4859-af10-8030268a237e-xtables-lock\") pod \"kube-proxy-rmr25\" (UID: \"798a2961-8af4-4859-af10-8030268a237e\") " pod="kube-system/kube-proxy-rmr25" Jan 13 20:52:41.768973 containerd[1796]: time="2025-01-13T20:52:41.768761109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-jr889,Uid:531a3411-9fbb-4ce3-ad89-a6e9172ae8bc,Namespace:tigera-operator,Attempt:0,}" Jan 13 20:52:41.774206 containerd[1796]: time="2025-01-13T20:52:41.774150509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmr25,Uid:798a2961-8af4-4859-af10-8030268a237e,Namespace:kube-system,Attempt:0,}" Jan 13 20:52:41.781552 containerd[1796]: time="2025-01-13T20:52:41.781508565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:41.781632 containerd[1796]: time="2025-01-13T20:52:41.781560063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:41.781632 containerd[1796]: time="2025-01-13T20:52:41.781574540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:41.781632 containerd[1796]: time="2025-01-13T20:52:41.781618837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:41.783988 containerd[1796]: time="2025-01-13T20:52:41.783942016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:41.783988 containerd[1796]: time="2025-01-13T20:52:41.783969810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:41.783988 containerd[1796]: time="2025-01-13T20:52:41.783977594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:41.784111 containerd[1796]: time="2025-01-13T20:52:41.784021823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:41.801491 systemd[1]: Started cri-containerd-a4c3af6e0933d9ef388e19f78b173e14113fe24c9138b1d0ae47c4f3df72ee94.scope - libcontainer container a4c3af6e0933d9ef388e19f78b173e14113fe24c9138b1d0ae47c4f3df72ee94. Jan 13 20:52:41.803083 systemd[1]: Started cri-containerd-a2e4dd2707110baba3517a2b7d042ceea7895cbd54684b948498fcad3c40feef.scope - libcontainer container a2e4dd2707110baba3517a2b7d042ceea7895cbd54684b948498fcad3c40feef. 
Jan 13 20:52:41.812851 containerd[1796]: time="2025-01-13T20:52:41.812831016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmr25,Uid:798a2961-8af4-4859-af10-8030268a237e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2e4dd2707110baba3517a2b7d042ceea7895cbd54684b948498fcad3c40feef\"" Jan 13 20:52:41.814169 containerd[1796]: time="2025-01-13T20:52:41.814144137Z" level=info msg="CreateContainer within sandbox \"a2e4dd2707110baba3517a2b7d042ceea7895cbd54684b948498fcad3c40feef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:52:41.819843 containerd[1796]: time="2025-01-13T20:52:41.819827030Z" level=info msg="CreateContainer within sandbox \"a2e4dd2707110baba3517a2b7d042ceea7895cbd54684b948498fcad3c40feef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f44e9b76d3d0e9f6d870cac0f83ec609e67241daa53f5302dd0123ba71471d1b\"" Jan 13 20:52:41.820068 containerd[1796]: time="2025-01-13T20:52:41.820052922Z" level=info msg="StartContainer for \"f44e9b76d3d0e9f6d870cac0f83ec609e67241daa53f5302dd0123ba71471d1b\"" Jan 13 20:52:41.824586 containerd[1796]: time="2025-01-13T20:52:41.824560459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-jr889,Uid:531a3411-9fbb-4ce3-ad89-a6e9172ae8bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a4c3af6e0933d9ef388e19f78b173e14113fe24c9138b1d0ae47c4f3df72ee94\"" Jan 13 20:52:41.825382 containerd[1796]: time="2025-01-13T20:52:41.825369347Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 20:52:41.840491 systemd[1]: Started cri-containerd-f44e9b76d3d0e9f6d870cac0f83ec609e67241daa53f5302dd0123ba71471d1b.scope - libcontainer container f44e9b76d3d0e9f6d870cac0f83ec609e67241daa53f5302dd0123ba71471d1b. 
Jan 13 20:52:41.853953 containerd[1796]: time="2025-01-13T20:52:41.853930633Z" level=info msg="StartContainer for \"f44e9b76d3d0e9f6d870cac0f83ec609e67241daa53f5302dd0123ba71471d1b\" returns successfully" Jan 13 20:52:42.604393 kubelet[3275]: I0113 20:52:42.604201 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rmr25" podStartSLOduration=1.6041296699999998 podStartE2EDuration="1.60412967s" podCreationTimestamp="2025-01-13 20:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:52:42.603751145 +0000 UTC m=+15.103057831" watchObservedRunningTime="2025-01-13 20:52:42.60412967 +0000 UTC m=+15.103436331" Jan 13 20:52:47.860415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112821990.mount: Deactivated successfully. Jan 13 20:52:48.142193 containerd[1796]: time="2025-01-13T20:52:48.142138766Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:48.142521 containerd[1796]: time="2025-01-13T20:52:48.142470416Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764301" Jan 13 20:52:48.142839 containerd[1796]: time="2025-01-13T20:52:48.142799107Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:48.143871 containerd[1796]: time="2025-01-13T20:52:48.143830526Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:48.144292 containerd[1796]: time="2025-01-13T20:52:48.144251191Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id 
\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 6.31886485s" Jan 13 20:52:48.144292 containerd[1796]: time="2025-01-13T20:52:48.144267520Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 20:52:48.145257 containerd[1796]: time="2025-01-13T20:52:48.145241318Z" level=info msg="CreateContainer within sandbox \"a4c3af6e0933d9ef388e19f78b173e14113fe24c9138b1d0ae47c4f3df72ee94\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 20:52:48.148820 containerd[1796]: time="2025-01-13T20:52:48.148772809Z" level=info msg="CreateContainer within sandbox \"a4c3af6e0933d9ef388e19f78b173e14113fe24c9138b1d0ae47c4f3df72ee94\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d857dff721b3bd3f2554771c730edd7bd9dc5bf2f4b601b8d4dcceaebeec65ff\"" Jan 13 20:52:48.149006 containerd[1796]: time="2025-01-13T20:52:48.148994460Z" level=info msg="StartContainer for \"d857dff721b3bd3f2554771c730edd7bd9dc5bf2f4b601b8d4dcceaebeec65ff\"" Jan 13 20:52:48.166447 systemd[1]: Started cri-containerd-d857dff721b3bd3f2554771c730edd7bd9dc5bf2f4b601b8d4dcceaebeec65ff.scope - libcontainer container d857dff721b3bd3f2554771c730edd7bd9dc5bf2f4b601b8d4dcceaebeec65ff. 
Jan 13 20:52:48.177754 containerd[1796]: time="2025-01-13T20:52:48.177730776Z" level=info msg="StartContainer for \"d857dff721b3bd3f2554771c730edd7bd9dc5bf2f4b601b8d4dcceaebeec65ff\" returns successfully" Jan 13 20:52:51.042849 kubelet[3275]: I0113 20:52:51.042679 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-jr889" podStartSLOduration=3.723012535 podStartE2EDuration="10.042563038s" podCreationTimestamp="2025-01-13 20:52:41 +0000 UTC" firstStartedPulling="2025-01-13 20:52:41.825152729 +0000 UTC m=+14.324459332" lastFinishedPulling="2025-01-13 20:52:48.144703225 +0000 UTC m=+20.644009835" observedRunningTime="2025-01-13 20:52:48.619545374 +0000 UTC m=+21.118852048" watchObservedRunningTime="2025-01-13 20:52:51.042563038 +0000 UTC m=+23.541869707" Jan 13 20:52:51.044318 kubelet[3275]: I0113 20:52:51.043502 3275 topology_manager.go:215] "Topology Admit Handler" podUID="641ae9aa-5c2d-43a1-847e-a181802fd94d" podNamespace="calico-system" podName="calico-typha-7cc65d9568-nwcds" Jan 13 20:52:51.054280 systemd[1]: Created slice kubepods-besteffort-pod641ae9aa_5c2d_43a1_847e_a181802fd94d.slice - libcontainer container kubepods-besteffort-pod641ae9aa_5c2d_43a1_847e_a181802fd94d.slice. Jan 13 20:52:51.093184 kubelet[3275]: I0113 20:52:51.093149 3275 topology_manager.go:215] "Topology Admit Handler" podUID="5348d9b4-b127-4f7b-a864-5a7e2ff31720" podNamespace="calico-system" podName="calico-node-skxv7" Jan 13 20:52:51.097505 systemd[1]: Created slice kubepods-besteffort-pod5348d9b4_b127_4f7b_a864_5a7e2ff31720.slice - libcontainer container kubepods-besteffort-pod5348d9b4_b127_4f7b_a864_5a7e2ff31720.slice. 
Jan 13 20:52:51.120628 kubelet[3275]: I0113 20:52:51.120606 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5348d9b4-b127-4f7b-a864-5a7e2ff31720-node-certs\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120628 kubelet[3275]: I0113 20:52:51.120631 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-var-lib-calico\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120742 kubelet[3275]: I0113 20:52:51.120648 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-policysync\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120742 kubelet[3275]: I0113 20:52:51.120662 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-cni-bin-dir\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120742 kubelet[3275]: I0113 20:52:51.120675 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/641ae9aa-5c2d-43a1-847e-a181802fd94d-typha-certs\") pod \"calico-typha-7cc65d9568-nwcds\" (UID: \"641ae9aa-5c2d-43a1-847e-a181802fd94d\") " pod="calico-system/calico-typha-7cc65d9568-nwcds" Jan 13 20:52:51.120742 kubelet[3275]: I0113 20:52:51.120695 3275 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-httdd\" (UniqueName: \"kubernetes.io/projected/641ae9aa-5c2d-43a1-847e-a181802fd94d-kube-api-access-httdd\") pod \"calico-typha-7cc65d9568-nwcds\" (UID: \"641ae9aa-5c2d-43a1-847e-a181802fd94d\") " pod="calico-system/calico-typha-7cc65d9568-nwcds" Jan 13 20:52:51.120742 kubelet[3275]: I0113 20:52:51.120713 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-cni-log-dir\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120847 kubelet[3275]: I0113 20:52:51.120725 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/641ae9aa-5c2d-43a1-847e-a181802fd94d-tigera-ca-bundle\") pod \"calico-typha-7cc65d9568-nwcds\" (UID: \"641ae9aa-5c2d-43a1-847e-a181802fd94d\") " pod="calico-system/calico-typha-7cc65d9568-nwcds" Jan 13 20:52:51.120847 kubelet[3275]: I0113 20:52:51.120733 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-var-run-calico\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120847 kubelet[3275]: I0113 20:52:51.120743 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5348d9b4-b127-4f7b-a864-5a7e2ff31720-tigera-ca-bundle\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120847 kubelet[3275]: I0113 20:52:51.120752 3275 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-xtables-lock\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120847 kubelet[3275]: I0113 20:52:51.120761 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-cni-net-dir\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120930 kubelet[3275]: I0113 20:52:51.120771 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxqzv\" (UniqueName: \"kubernetes.io/projected/5348d9b4-b127-4f7b-a864-5a7e2ff31720-kube-api-access-mxqzv\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120930 kubelet[3275]: I0113 20:52:51.120781 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-lib-modules\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.120930 kubelet[3275]: I0113 20:52:51.120790 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5348d9b4-b127-4f7b-a864-5a7e2ff31720-flexvol-driver-host\") pod \"calico-node-skxv7\" (UID: \"5348d9b4-b127-4f7b-a864-5a7e2ff31720\") " pod="calico-system/calico-node-skxv7" Jan 13 20:52:51.221137 kubelet[3275]: I0113 20:52:51.221048 3275 topology_manager.go:215] "Topology Admit Handler" 
podUID="041808ec-5de1-4583-be60-748668409e39" podNamespace="calico-system" podName="csi-node-driver-h7b85" Jan 13 20:52:51.222370 kubelet[3275]: E0113 20:52:51.221818 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7b85" podUID="041808ec-5de1-4583-be60-748668409e39" Jan 13 20:52:51.226004 kubelet[3275]: E0113 20:52:51.225934 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.226348 kubelet[3275]: W0113 20:52:51.226008 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.226348 kubelet[3275]: E0113 20:52:51.226083 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.226918 kubelet[3275]: E0113 20:52:51.226862 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.227125 kubelet[3275]: W0113 20:52:51.226915 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.227125 kubelet[3275]: E0113 20:52:51.226976 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.231662 kubelet[3275]: E0113 20:52:51.231575 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.231662 kubelet[3275]: W0113 20:52:51.231617 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.231985 kubelet[3275]: E0113 20:52:51.231671 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.232336 kubelet[3275]: E0113 20:52:51.232252 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.232336 kubelet[3275]: W0113 20:52:51.232282 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.232336 kubelet[3275]: E0113 20:52:51.232314 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.243670 kubelet[3275]: E0113 20:52:51.243573 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.243670 kubelet[3275]: W0113 20:52:51.243619 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.243670 kubelet[3275]: E0113 20:52:51.243667 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.245475 kubelet[3275]: E0113 20:52:51.245430 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.245475 kubelet[3275]: W0113 20:52:51.245471 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.245727 kubelet[3275]: E0113 20:52:51.245504 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.312311 kubelet[3275]: E0113 20:52:51.312204 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.312311 kubelet[3275]: W0113 20:52:51.312221 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.312311 kubelet[3275]: E0113 20:52:51.312240 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.312473 kubelet[3275]: E0113 20:52:51.312451 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.312473 kubelet[3275]: W0113 20:52:51.312465 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.312540 kubelet[3275]: E0113 20:52:51.312482 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.312731 kubelet[3275]: E0113 20:52:51.312690 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.312731 kubelet[3275]: W0113 20:52:51.312705 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.312731 kubelet[3275]: E0113 20:52:51.312720 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.312962 kubelet[3275]: E0113 20:52:51.312950 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.313001 kubelet[3275]: W0113 20:52:51.312962 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.313001 kubelet[3275]: E0113 20:52:51.312975 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.313195 kubelet[3275]: E0113 20:52:51.313185 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.313195 kubelet[3275]: W0113 20:52:51.313194 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.313267 kubelet[3275]: E0113 20:52:51.313204 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.313509 kubelet[3275]: E0113 20:52:51.313468 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.313509 kubelet[3275]: W0113 20:52:51.313485 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.313509 kubelet[3275]: E0113 20:52:51.313497 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.313697 kubelet[3275]: E0113 20:52:51.313665 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.313697 kubelet[3275]: W0113 20:52:51.313674 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.313697 kubelet[3275]: E0113 20:52:51.313684 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.313861 kubelet[3275]: E0113 20:52:51.313851 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.313898 kubelet[3275]: W0113 20:52:51.313860 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.313898 kubelet[3275]: E0113 20:52:51.313870 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.314081 kubelet[3275]: E0113 20:52:51.314069 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.314114 kubelet[3275]: W0113 20:52:51.314083 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.314114 kubelet[3275]: E0113 20:52:51.314098 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.314345 kubelet[3275]: E0113 20:52:51.314329 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.314345 kubelet[3275]: W0113 20:52:51.314338 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.314449 kubelet[3275]: E0113 20:52:51.314349 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.314503 kubelet[3275]: E0113 20:52:51.314493 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.314503 kubelet[3275]: W0113 20:52:51.314502 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.314572 kubelet[3275]: E0113 20:52:51.314511 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.314712 kubelet[3275]: E0113 20:52:51.314701 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.314745 kubelet[3275]: W0113 20:52:51.314715 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.314745 kubelet[3275]: E0113 20:52:51.314739 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.314952 kubelet[3275]: E0113 20:52:51.314939 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.314998 kubelet[3275]: W0113 20:52:51.314953 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.314998 kubelet[3275]: E0113 20:52:51.314968 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.315208 kubelet[3275]: E0113 20:52:51.315172 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.315208 kubelet[3275]: W0113 20:52:51.315185 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.315208 kubelet[3275]: E0113 20:52:51.315201 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.315401 kubelet[3275]: E0113 20:52:51.315390 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.315436 kubelet[3275]: W0113 20:52:51.315403 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.315436 kubelet[3275]: E0113 20:52:51.315419 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.315642 kubelet[3275]: E0113 20:52:51.315631 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.315678 kubelet[3275]: W0113 20:52:51.315644 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.315678 kubelet[3275]: E0113 20:52:51.315658 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.315865 kubelet[3275]: E0113 20:52:51.315853 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.315905 kubelet[3275]: W0113 20:52:51.315867 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.315905 kubelet[3275]: E0113 20:52:51.315883 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.316062 kubelet[3275]: E0113 20:52:51.316053 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.316099 kubelet[3275]: W0113 20:52:51.316065 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.316099 kubelet[3275]: E0113 20:52:51.316075 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.316257 kubelet[3275]: E0113 20:52:51.316248 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.316290 kubelet[3275]: W0113 20:52:51.316257 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.316290 kubelet[3275]: E0113 20:52:51.316266 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.316465 kubelet[3275]: E0113 20:52:51.316420 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.316465 kubelet[3275]: W0113 20:52:51.316433 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.316465 kubelet[3275]: E0113 20:52:51.316446 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.322722 kubelet[3275]: E0113 20:52:51.322673 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.322722 kubelet[3275]: W0113 20:52:51.322686 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.322722 kubelet[3275]: E0113 20:52:51.322699 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.322722 kubelet[3275]: I0113 20:52:51.322723 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/041808ec-5de1-4583-be60-748668409e39-varrun\") pod \"csi-node-driver-h7b85\" (UID: \"041808ec-5de1-4583-be60-748668409e39\") " pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:51.322960 kubelet[3275]: E0113 20:52:51.322917 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.322960 kubelet[3275]: W0113 20:52:51.322929 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.322960 kubelet[3275]: E0113 20:52:51.322942 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.322960 kubelet[3275]: I0113 20:52:51.322958 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9jm\" (UniqueName: \"kubernetes.io/projected/041808ec-5de1-4583-be60-748668409e39-kube-api-access-5l9jm\") pod \"csi-node-driver-h7b85\" (UID: \"041808ec-5de1-4583-be60-748668409e39\") " pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:51.323142 kubelet[3275]: E0113 20:52:51.323132 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.323142 kubelet[3275]: W0113 20:52:51.323142 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.323236 kubelet[3275]: E0113 20:52:51.323160 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.323236 kubelet[3275]: I0113 20:52:51.323180 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/041808ec-5de1-4583-be60-748668409e39-socket-dir\") pod \"csi-node-driver-h7b85\" (UID: \"041808ec-5de1-4583-be60-748668409e39\") " pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:51.323375 kubelet[3275]: E0113 20:52:51.323335 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.323375 kubelet[3275]: W0113 20:52:51.323346 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.323375 kubelet[3275]: E0113 20:52:51.323358 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.323496 kubelet[3275]: I0113 20:52:51.323379 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/041808ec-5de1-4583-be60-748668409e39-registration-dir\") pod \"csi-node-driver-h7b85\" (UID: \"041808ec-5de1-4583-be60-748668409e39\") " pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:51.323588 kubelet[3275]: E0113 20:52:51.323546 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.323588 kubelet[3275]: W0113 20:52:51.323556 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.323588 kubelet[3275]: E0113 20:52:51.323569 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.323588 kubelet[3275]: I0113 20:52:51.323589 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041808ec-5de1-4583-be60-748668409e39-kubelet-dir\") pod \"csi-node-driver-h7b85\" (UID: \"041808ec-5de1-4583-be60-748668409e39\") " pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:51.323812 kubelet[3275]: E0113 20:52:51.323776 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.323812 kubelet[3275]: W0113 20:52:51.323788 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.323812 kubelet[3275]: E0113 20:52:51.323801 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.324015 kubelet[3275]: E0113 20:52:51.324001 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.324069 kubelet[3275]: W0113 20:52:51.324016 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.324069 kubelet[3275]: E0113 20:52:51.324045 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.324285 kubelet[3275]: E0113 20:52:51.324272 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.324332 kubelet[3275]: W0113 20:52:51.324285 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.324332 kubelet[3275]: E0113 20:52:51.324313 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.324499 kubelet[3275]: E0113 20:52:51.324489 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.324499 kubelet[3275]: W0113 20:52:51.324498 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.324578 kubelet[3275]: E0113 20:52:51.324519 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.324706 kubelet[3275]: E0113 20:52:51.324692 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.324706 kubelet[3275]: W0113 20:52:51.324701 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.324795 kubelet[3275]: E0113 20:52:51.324722 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.324864 kubelet[3275]: E0113 20:52:51.324854 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.324908 kubelet[3275]: W0113 20:52:51.324863 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.324908 kubelet[3275]: E0113 20:52:51.324882 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:51.325009 kubelet[3275]: E0113 20:52:51.325001 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.325009 kubelet[3275]: W0113 20:52:51.325009 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.325086 kubelet[3275]: E0113 20:52:51.325019 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:51.325225 kubelet[3275]: E0113 20:52:51.325172 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:51.325225 kubelet[3275]: W0113 20:52:51.325181 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:51.325225 kubelet[3275]: E0113 20:52:51.325189 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 13 20:52:51.325383 kubelet[3275]: E0113 20:52:51.325374 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.325383 kubelet[3275]: W0113 20:52:51.325383 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.325462 kubelet[3275]: E0113 20:52:51.325392 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.325537 kubelet[3275]: E0113 20:52:51.325528 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.325537 kubelet[3275]: W0113 20:52:51.325537 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.325607 kubelet[3275]: E0113 20:52:51.325546 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.357012 containerd[1796]: time="2025-01-13T20:52:51.356978991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cc65d9568-nwcds,Uid:641ae9aa-5c2d-43a1-847e-a181802fd94d,Namespace:calico-system,Attempt:0,}"
Jan 13 20:52:51.366911 containerd[1796]: time="2025-01-13T20:52:51.366868889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:52:51.366911 containerd[1796]: time="2025-01-13T20:52:51.366899033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:52:51.366911 containerd[1796]: time="2025-01-13T20:52:51.366907871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:52:51.367055 containerd[1796]: time="2025-01-13T20:52:51.366954282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:52:51.379344 systemd[1]: Started cri-containerd-63582e37d21548bcbce92bbfc40223d9e30af32ef0fb35716581933c01026bc7.scope - libcontainer container 63582e37d21548bcbce92bbfc40223d9e30af32ef0fb35716581933c01026bc7.
Jan 13 20:52:51.400118 containerd[1796]: time="2025-01-13T20:52:51.400098467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-skxv7,Uid:5348d9b4-b127-4f7b-a864-5a7e2ff31720,Namespace:calico-system,Attempt:0,}"
Jan 13 20:52:51.400911 containerd[1796]: time="2025-01-13T20:52:51.400898589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cc65d9568-nwcds,Uid:641ae9aa-5c2d-43a1-847e-a181802fd94d,Namespace:calico-system,Attempt:0,} returns sandbox id \"63582e37d21548bcbce92bbfc40223d9e30af32ef0fb35716581933c01026bc7\""
Jan 13 20:52:51.401567 containerd[1796]: time="2025-01-13T20:52:51.401555157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 20:52:51.409425 containerd[1796]: time="2025-01-13T20:52:51.409350938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:52:51.409425 containerd[1796]: time="2025-01-13T20:52:51.409399125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:52:51.409425 containerd[1796]: time="2025-01-13T20:52:51.409415582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:52:51.409671 containerd[1796]: time="2025-01-13T20:52:51.409624405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:52:51.424592 kubelet[3275]: E0113 20:52:51.424452 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.424592 kubelet[3275]: W0113 20:52:51.424474 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.424592 kubelet[3275]: E0113 20:52:51.424492 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.424876 kubelet[3275]: E0113 20:52:51.424821 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.424876 kubelet[3275]: W0113 20:52:51.424835 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.424876 kubelet[3275]: E0113 20:52:51.424852 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.424974 kubelet[3275]: E0113 20:52:51.424968 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.424998 kubelet[3275]: W0113 20:52:51.424977 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.425180 kubelet[3275]: E0113 20:52:51.425143 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.425317 kubelet[3275]: E0113 20:52:51.425273 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.425317 kubelet[3275]: W0113 20:52:51.425280 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.425317 kubelet[3275]: E0113 20:52:51.425287 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.425392 kubelet[3275]: E0113 20:52:51.425375 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.425392 kubelet[3275]: W0113 20:52:51.425380 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.425392 kubelet[3275]: E0113 20:52:51.425385 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.425509 kubelet[3275]: E0113 20:52:51.425473 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.425509 kubelet[3275]: W0113 20:52:51.425478 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.425509 kubelet[3275]: E0113 20:52:51.425482 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.425577 kubelet[3275]: E0113 20:52:51.425564 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.425577 kubelet[3275]: W0113 20:52:51.425571 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.425728 kubelet[3275]: E0113 20:52:51.425579 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.425728 kubelet[3275]: E0113 20:52:51.425657 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.425728 kubelet[3275]: W0113 20:52:51.425662 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.425728 kubelet[3275]: E0113 20:52:51.425667 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.425791 kubelet[3275]: E0113 20:52:51.425778 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.425791 kubelet[3275]: W0113 20:52:51.425785 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.425823 kubelet[3275]: E0113 20:52:51.425792 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.425950 kubelet[3275]: E0113 20:52:51.425914 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.425950 kubelet[3275]: W0113 20:52:51.425920 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.425950 kubelet[3275]: E0113 20:52:51.425928 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426036 kubelet[3275]: E0113 20:52:51.426031 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426036 kubelet[3275]: W0113 20:52:51.426036 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426069 kubelet[3275]: E0113 20:52:51.426042 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426139 kubelet[3275]: E0113 20:52:51.426134 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426196 kubelet[3275]: W0113 20:52:51.426139 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426196 kubelet[3275]: E0113 20:52:51.426145 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426251 kubelet[3275]: E0113 20:52:51.426243 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426269 kubelet[3275]: W0113 20:52:51.426252 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426269 kubelet[3275]: E0113 20:52:51.426262 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426374 kubelet[3275]: E0113 20:52:51.426367 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426374 kubelet[3275]: W0113 20:52:51.426372 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426427 kubelet[3275]: E0113 20:52:51.426379 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426467 kubelet[3275]: E0113 20:52:51.426462 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426486 kubelet[3275]: W0113 20:52:51.426467 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426486 kubelet[3275]: E0113 20:52:51.426473 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426547 kubelet[3275]: E0113 20:52:51.426542 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426569 kubelet[3275]: W0113 20:52:51.426546 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426569 kubelet[3275]: E0113 20:52:51.426552 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426650 kubelet[3275]: E0113 20:52:51.426645 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426650 kubelet[3275]: W0113 20:52:51.426650 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426686 kubelet[3275]: E0113 20:52:51.426656 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426737 kubelet[3275]: E0113 20:52:51.426733 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426760 kubelet[3275]: W0113 20:52:51.426737 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426760 kubelet[3275]: E0113 20:52:51.426743 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426813 kubelet[3275]: E0113 20:52:51.426809 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426833 kubelet[3275]: W0113 20:52:51.426813 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426833 kubelet[3275]: E0113 20:52:51.426823 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426880 kubelet[3275]: E0113 20:52:51.426876 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426901 kubelet[3275]: W0113 20:52:51.426880 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426901 kubelet[3275]: E0113 20:52:51.426886 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.426972 kubelet[3275]: E0113 20:52:51.426966 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.426989 kubelet[3275]: W0113 20:52:51.426972 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.426989 kubelet[3275]: E0113 20:52:51.426979 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.427106 kubelet[3275]: E0113 20:52:51.427102 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.427106 kubelet[3275]: W0113 20:52:51.427106 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.427151 kubelet[3275]: E0113 20:52:51.427112 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.427191 kubelet[3275]: E0113 20:52:51.427186 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.427208 kubelet[3275]: W0113 20:52:51.427190 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.427208 kubelet[3275]: E0113 20:52:51.427196 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.427318 kubelet[3275]: E0113 20:52:51.427313 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.427339 kubelet[3275]: W0113 20:52:51.427318 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.427339 kubelet[3275]: E0113 20:52:51.427323 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.427410 kubelet[3275]: E0113 20:52:51.427405 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.427427 kubelet[3275]: W0113 20:52:51.427410 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.427427 kubelet[3275]: E0113 20:52:51.427415 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.430985 kubelet[3275]: E0113 20:52:51.430950 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:51.430985 kubelet[3275]: W0113 20:52:51.430956 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:51.430985 kubelet[3275]: E0113 20:52:51.430962 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:51.432473 systemd[1]: Started cri-containerd-49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9.scope - libcontainer container 49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9.
Jan 13 20:52:51.442665 containerd[1796]: time="2025-01-13T20:52:51.442612041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-skxv7,Uid:5348d9b4-b127-4f7b-a864-5a7e2ff31720,Namespace:calico-system,Attempt:0,} returns sandbox id \"49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9\""
Jan 13 20:52:52.767725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997875558.mount: Deactivated successfully.
Jan 13 20:52:53.043448 containerd[1796]: time="2025-01-13T20:52:53.043364733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:53.043636 containerd[1796]: time="2025-01-13T20:52:53.043584708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 20:52:53.043929 containerd[1796]: time="2025-01-13T20:52:53.043892018Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:53.044865 containerd[1796]: time="2025-01-13T20:52:53.044825975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:52:53.045589 containerd[1796]: time="2025-01-13T20:52:53.045549752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.64397675s"
Jan 13 20:52:53.045589 containerd[1796]: time="2025-01-13T20:52:53.045563125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 20:52:53.046019 containerd[1796]: time="2025-01-13T20:52:53.045981867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:52:53.048948 containerd[1796]: time="2025-01-13T20:52:53.048823318Z" level=info msg="CreateContainer within sandbox \"63582e37d21548bcbce92bbfc40223d9e30af32ef0fb35716581933c01026bc7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 20:52:53.052851 containerd[1796]: time="2025-01-13T20:52:53.052807391Z" level=info msg="CreateContainer within sandbox \"63582e37d21548bcbce92bbfc40223d9e30af32ef0fb35716581933c01026bc7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f1f98ddd5fc1a85966255276318a4464d04d788e0a963a2bf0ed919fe4187d68\""
Jan 13 20:52:53.053032 containerd[1796]: time="2025-01-13T20:52:53.053018830Z" level=info msg="StartContainer for \"f1f98ddd5fc1a85966255276318a4464d04d788e0a963a2bf0ed919fe4187d68\""
Jan 13 20:52:53.071464 systemd[1]: Started cri-containerd-f1f98ddd5fc1a85966255276318a4464d04d788e0a963a2bf0ed919fe4187d68.scope - libcontainer container f1f98ddd5fc1a85966255276318a4464d04d788e0a963a2bf0ed919fe4187d68.
Jan 13 20:52:53.111247 containerd[1796]: time="2025-01-13T20:52:53.111187835Z" level=info msg="StartContainer for \"f1f98ddd5fc1a85966255276318a4464d04d788e0a963a2bf0ed919fe4187d68\" returns successfully"
Jan 13 20:52:53.549881 kubelet[3275]: E0113 20:52:53.549794 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7b85" podUID="041808ec-5de1-4583-be60-748668409e39"
Jan 13 20:52:53.632611 kubelet[3275]: E0113 20:52:53.632539 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.632611 kubelet[3275]: W0113 20:52:53.632585 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.632611 kubelet[3275]: E0113 20:52:53.632625 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.633338 kubelet[3275]: E0113 20:52:53.633186 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.633338 kubelet[3275]: W0113 20:52:53.633222 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.633338 kubelet[3275]: E0113 20:52:53.633255 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.633819 kubelet[3275]: E0113 20:52:53.633763 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.633819 kubelet[3275]: W0113 20:52:53.633797 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.634135 kubelet[3275]: E0113 20:52:53.633831 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.634406 kubelet[3275]: E0113 20:52:53.634359 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.634406 kubelet[3275]: W0113 20:52:53.634395 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.634752 kubelet[3275]: E0113 20:52:53.634430 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.635096 kubelet[3275]: E0113 20:52:53.635052 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.635096 kubelet[3275]: W0113 20:52:53.635090 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.635463 kubelet[3275]: E0113 20:52:53.635124 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.635798 kubelet[3275]: E0113 20:52:53.635754 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.635798 kubelet[3275]: W0113 20:52:53.635793 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.636125 kubelet[3275]: E0113 20:52:53.635827 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.636498 kubelet[3275]: E0113 20:52:53.636453 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.636498 kubelet[3275]: W0113 20:52:53.636492 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.636841 kubelet[3275]: E0113 20:52:53.636526 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.637149 kubelet[3275]: E0113 20:52:53.637111 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.637149 kubelet[3275]: W0113 20:52:53.637145 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.637439 kubelet[3275]: E0113 20:52:53.637195 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.638246 kubelet[3275]: E0113 20:52:53.638200 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.638451 kubelet[3275]: W0113 20:52:53.638250 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.638451 kubelet[3275]: E0113 20:52:53.638310 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.638984 kubelet[3275]: E0113 20:52:53.638897 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.638984 kubelet[3275]: W0113 20:52:53.638928 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.638984 kubelet[3275]: I0113 20:52:53.638883 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cc65d9568-nwcds" podStartSLOduration=0.994353249 podStartE2EDuration="2.638840725s" podCreationTimestamp="2025-01-13 20:52:51 +0000 UTC" firstStartedPulling="2025-01-13 20:52:51.401439416 +0000 UTC m=+23.900746021" lastFinishedPulling="2025-01-13 20:52:53.045926894 +0000 UTC m=+25.545233497" observedRunningTime="2025-01-13 20:52:53.638313376 +0000 UTC m=+26.137620051" watchObservedRunningTime="2025-01-13 20:52:53.638840725 +0000 UTC m=+26.138147382"
Jan 13 20:52:53.639760 kubelet[3275]: E0113 20:52:53.638959 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.639760 kubelet[3275]: E0113 20:52:53.639657 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.639760 kubelet[3275]: W0113 20:52:53.639682 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.639760 kubelet[3275]: E0113 20:52:53.639711 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.640405 kubelet[3275]: E0113 20:52:53.640222 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.640405 kubelet[3275]: W0113 20:52:53.640246 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.640405 kubelet[3275]: E0113 20:52:53.640274 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.640834 kubelet[3275]: E0113 20:52:53.640790 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.640834 kubelet[3275]: W0113 20:52:53.640828 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.641110 kubelet[3275]: E0113 20:52:53.640857 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.641474 kubelet[3275]: E0113 20:52:53.641436 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.641615 kubelet[3275]: W0113 20:52:53.641474 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.641615 kubelet[3275]: E0113 20:52:53.641509 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.642061 kubelet[3275]: E0113 20:52:53.642033 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.642215 kubelet[3275]: W0113 20:52:53.642061 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.642215 kubelet[3275]: E0113 20:52:53.642089 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.645793 kubelet[3275]: E0113 20:52:53.645715 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.645793 kubelet[3275]: W0113 20:52:53.645753 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.645793 kubelet[3275]: E0113 20:52:53.645786 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.646535 kubelet[3275]: E0113 20:52:53.646457 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.646535 kubelet[3275]: W0113 20:52:53.646495 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.646535 kubelet[3275]: E0113 20:52:53.646535 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:52:53.647193 kubelet[3275]: E0113 20:52:53.647104 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:52:53.647193 kubelet[3275]: W0113 20:52:53.647145 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:52:53.647437 kubelet[3275]: E0113 20:52:53.647224 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 13 20:52:53.647885 kubelet[3275]: E0113 20:52:53.647801 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.647885 kubelet[3275]: W0113 20:52:53.647838 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.647885 kubelet[3275]: E0113 20:52:53.647881 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:53.648567 kubelet[3275]: E0113 20:52:53.648482 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.648567 kubelet[3275]: W0113 20:52:53.648524 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.648849 kubelet[3275]: E0113 20:52:53.648644 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:53.649147 kubelet[3275]: E0113 20:52:53.649096 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.649147 kubelet[3275]: W0113 20:52:53.649123 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.649448 kubelet[3275]: E0113 20:52:53.649258 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:53.649766 kubelet[3275]: E0113 20:52:53.649689 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.649766 kubelet[3275]: W0113 20:52:53.649725 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.650047 kubelet[3275]: E0113 20:52:53.649840 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:53.650345 kubelet[3275]: E0113 20:52:53.650276 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.650345 kubelet[3275]: W0113 20:52:53.650301 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.650667 kubelet[3275]: E0113 20:52:53.650417 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:53.650884 kubelet[3275]: E0113 20:52:53.650824 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.650884 kubelet[3275]: W0113 20:52:53.650859 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.651110 kubelet[3275]: E0113 20:52:53.650903 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:53.651565 kubelet[3275]: E0113 20:52:53.651483 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.651565 kubelet[3275]: W0113 20:52:53.651522 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.651890 kubelet[3275]: E0113 20:52:53.651608 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:53.652092 kubelet[3275]: E0113 20:52:53.652044 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.652092 kubelet[3275]: W0113 20:52:53.652074 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.652372 kubelet[3275]: E0113 20:52:53.652144 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:53.652804 kubelet[3275]: E0113 20:52:53.652708 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.652804 kubelet[3275]: W0113 20:52:53.652745 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.653113 kubelet[3275]: E0113 20:52:53.652873 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:53.653301 kubelet[3275]: E0113 20:52:53.653270 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.653301 kubelet[3275]: W0113 20:52:53.653295 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.653527 kubelet[3275]: E0113 20:52:53.653368 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:53.653932 kubelet[3275]: E0113 20:52:53.653847 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.653932 kubelet[3275]: W0113 20:52:53.653883 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.653932 kubelet[3275]: E0113 20:52:53.653934 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:53.654615 kubelet[3275]: E0113 20:52:53.654538 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.654615 kubelet[3275]: W0113 20:52:53.654574 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.654615 kubelet[3275]: E0113 20:52:53.654616 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:53.655305 kubelet[3275]: E0113 20:52:53.655227 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.655305 kubelet[3275]: W0113 20:52:53.655265 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.655305 kubelet[3275]: E0113 20:52:53.655307 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:53.655995 kubelet[3275]: E0113 20:52:53.655913 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.655995 kubelet[3275]: W0113 20:52:53.655949 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.655995 kubelet[3275]: E0113 20:52:53.655982 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:52:53.656626 kubelet[3275]: E0113 20:52:53.656534 3275 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:52:53.656626 kubelet[3275]: W0113 20:52:53.656571 3275 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:52:53.656626 kubelet[3275]: E0113 20:52:53.656605 3275 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:52:54.362508 containerd[1796]: time="2025-01-13T20:52:54.362468125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:54.362788 containerd[1796]: time="2025-01-13T20:52:54.362716102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 20:52:54.362981 containerd[1796]: time="2025-01-13T20:52:54.362968174Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:54.363981 containerd[1796]: time="2025-01-13T20:52:54.363967070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:54.364383 containerd[1796]: time="2025-01-13T20:52:54.364367487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.318372625s" Jan 13 20:52:54.364421 containerd[1796]: time="2025-01-13T20:52:54.364384813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:52:54.365496 containerd[1796]: time="2025-01-13T20:52:54.365468585Z" level=info msg="CreateContainer within sandbox \"49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:52:54.370296 containerd[1796]: time="2025-01-13T20:52:54.370253637Z" level=info msg="CreateContainer within sandbox \"49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d\"" Jan 13 20:52:54.370626 containerd[1796]: time="2025-01-13T20:52:54.370580340Z" level=info msg="StartContainer for \"c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d\"" Jan 13 20:52:54.398306 systemd[1]: Started cri-containerd-c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d.scope - libcontainer container c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d. Jan 13 20:52:54.413770 containerd[1796]: time="2025-01-13T20:52:54.413717659Z" level=info msg="StartContainer for \"c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d\" returns successfully" Jan 13 20:52:54.420132 systemd[1]: cri-containerd-c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d.scope: Deactivated successfully. 
Jan 13 20:52:54.435963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d-rootfs.mount: Deactivated successfully. Jan 13 20:52:54.632746 kubelet[3275]: I0113 20:52:54.623517 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:52:54.839815 containerd[1796]: time="2025-01-13T20:52:54.839786826Z" level=info msg="shim disconnected" id=c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d namespace=k8s.io Jan 13 20:52:54.839815 containerd[1796]: time="2025-01-13T20:52:54.839813667Z" level=warning msg="cleaning up after shim disconnected" id=c005e1046a254d2247c83429933bd5312f702f526415680a7332e87cbb943c5d namespace=k8s.io Jan 13 20:52:54.839815 containerd[1796]: time="2025-01-13T20:52:54.839819494Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:52:55.549959 kubelet[3275]: E0113 20:52:55.549862 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7b85" podUID="041808ec-5de1-4583-be60-748668409e39" Jan 13 20:52:55.631869 containerd[1796]: time="2025-01-13T20:52:55.631753308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:52:57.550209 kubelet[3275]: E0113 20:52:57.550185 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7b85" podUID="041808ec-5de1-4583-be60-748668409e39" Jan 13 20:52:57.926190 containerd[1796]: time="2025-01-13T20:52:57.926137410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 13 20:52:57.926399 containerd[1796]: time="2025-01-13T20:52:57.926257641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:52:57.926725 containerd[1796]: time="2025-01-13T20:52:57.926684786Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:57.927678 containerd[1796]: time="2025-01-13T20:52:57.927632852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:57.928097 containerd[1796]: time="2025-01-13T20:52:57.928058065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.29623858s" Jan 13 20:52:57.928097 containerd[1796]: time="2025-01-13T20:52:57.928071717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:52:57.929102 containerd[1796]: time="2025-01-13T20:52:57.929088022Z" level=info msg="CreateContainer within sandbox \"49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:52:57.933525 containerd[1796]: time="2025-01-13T20:52:57.933509988Z" level=info msg="CreateContainer within sandbox \"49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8\"" Jan 13 20:52:57.933645 containerd[1796]: time="2025-01-13T20:52:57.933634458Z" level=info msg="StartContainer for \"77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8\"" Jan 13 20:52:57.962476 systemd[1]: Started cri-containerd-77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8.scope - libcontainer container 77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8. Jan 13 20:52:57.976125 containerd[1796]: time="2025-01-13T20:52:57.976067093Z" level=info msg="StartContainer for \"77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8\" returns successfully" Jan 13 20:52:58.503584 systemd[1]: cri-containerd-77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8.scope: Deactivated successfully. Jan 13 20:52:58.555255 kubelet[3275]: I0113 20:52:58.555210 3275 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:52:58.571876 kubelet[3275]: I0113 20:52:58.571809 3275 topology_manager.go:215] "Topology Admit Handler" podUID="e3cc6f57-01fe-4853-a572-df94cbb24821" podNamespace="calico-apiserver" podName="calico-apiserver-7d4c785db6-799q5" Jan 13 20:52:58.572945 kubelet[3275]: I0113 20:52:58.572902 3275 topology_manager.go:215] "Topology Admit Handler" podUID="bf4f0ae4-b718-4aef-9209-5a0f4e005958" podNamespace="calico-apiserver" podName="calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:52:58.573631 kubelet[3275]: I0113 20:52:58.573597 3275 topology_manager.go:215] "Topology Admit Handler" podUID="ce9b51cb-e99a-4abc-aeff-27a2e094f2d6" podNamespace="calico-system" podName="calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:52:58.576704 kubelet[3275]: I0113 20:52:58.575340 3275 topology_manager.go:215] "Topology Admit Handler" podUID="d2521fa2-61ba-4676-843f-c14c317e5358" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nxnfb" Jan 13 20:52:58.577250 kubelet[3275]: I0113 20:52:58.577183 3275 topology_manager.go:215] 
"Topology Admit Handler" podUID="1958f0eb-003b-4101-8885-8177a1e71a0d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gnptx" Jan 13 20:52:58.583622 kubelet[3275]: I0113 20:52:58.583569 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf4f0ae4-b718-4aef-9209-5a0f4e005958-calico-apiserver-certs\") pod \"calico-apiserver-7d4c785db6-2rkkg\" (UID: \"bf4f0ae4-b718-4aef-9209-5a0f4e005958\") " pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:52:58.583779 kubelet[3275]: I0113 20:52:58.583653 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1958f0eb-003b-4101-8885-8177a1e71a0d-config-volume\") pod \"coredns-7db6d8ff4d-gnptx\" (UID: \"1958f0eb-003b-4101-8885-8177a1e71a0d\") " pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:52:58.583779 kubelet[3275]: I0113 20:52:58.583750 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce9b51cb-e99a-4abc-aeff-27a2e094f2d6-tigera-ca-bundle\") pod \"calico-kube-controllers-68d58fcc5b-n4w6q\" (UID: \"ce9b51cb-e99a-4abc-aeff-27a2e094f2d6\") " pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:52:58.583925 kubelet[3275]: I0113 20:52:58.583833 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wnd\" (UniqueName: \"kubernetes.io/projected/bf4f0ae4-b718-4aef-9209-5a0f4e005958-kube-api-access-48wnd\") pod \"calico-apiserver-7d4c785db6-2rkkg\" (UID: \"bf4f0ae4-b718-4aef-9209-5a0f4e005958\") " pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:52:58.583925 kubelet[3275]: I0113 20:52:58.583898 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zrwht\" (UniqueName: \"kubernetes.io/projected/ce9b51cb-e99a-4abc-aeff-27a2e094f2d6-kube-api-access-zrwht\") pod \"calico-kube-controllers-68d58fcc5b-n4w6q\" (UID: \"ce9b51cb-e99a-4abc-aeff-27a2e094f2d6\") " pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:52:58.584089 kubelet[3275]: I0113 20:52:58.583969 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpb65\" (UniqueName: \"kubernetes.io/projected/1958f0eb-003b-4101-8885-8177a1e71a0d-kube-api-access-hpb65\") pod \"coredns-7db6d8ff4d-gnptx\" (UID: \"1958f0eb-003b-4101-8885-8177a1e71a0d\") " pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:52:58.584089 kubelet[3275]: I0113 20:52:58.584037 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2521fa2-61ba-4676-843f-c14c317e5358-config-volume\") pod \"coredns-7db6d8ff4d-nxnfb\" (UID: \"d2521fa2-61ba-4676-843f-c14c317e5358\") " pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:52:58.584364 kubelet[3275]: I0113 20:52:58.584112 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4tj\" (UniqueName: \"kubernetes.io/projected/e3cc6f57-01fe-4853-a572-df94cbb24821-kube-api-access-jp4tj\") pod \"calico-apiserver-7d4c785db6-799q5\" (UID: \"e3cc6f57-01fe-4853-a572-df94cbb24821\") " pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:52:58.584364 kubelet[3275]: I0113 20:52:58.584207 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r64h\" (UniqueName: \"kubernetes.io/projected/d2521fa2-61ba-4676-843f-c14c317e5358-kube-api-access-8r64h\") pod \"coredns-7db6d8ff4d-nxnfb\" (UID: \"d2521fa2-61ba-4676-843f-c14c317e5358\") " pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:52:58.584364 kubelet[3275]: I0113 
20:52:58.584276 3275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3cc6f57-01fe-4853-a572-df94cbb24821-calico-apiserver-certs\") pod \"calico-apiserver-7d4c785db6-799q5\" (UID: \"e3cc6f57-01fe-4853-a572-df94cbb24821\") " pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:52:58.585744 systemd[1]: Created slice kubepods-besteffort-pode3cc6f57_01fe_4853_a572_df94cbb24821.slice - libcontainer container kubepods-besteffort-pode3cc6f57_01fe_4853_a572_df94cbb24821.slice. Jan 13 20:52:58.595630 systemd[1]: Created slice kubepods-besteffort-podbf4f0ae4_b718_4aef_9209_5a0f4e005958.slice - libcontainer container kubepods-besteffort-podbf4f0ae4_b718_4aef_9209_5a0f4e005958.slice. Jan 13 20:52:58.606035 systemd[1]: Created slice kubepods-besteffort-podce9b51cb_e99a_4abc_aeff_27a2e094f2d6.slice - libcontainer container kubepods-besteffort-podce9b51cb_e99a_4abc_aeff_27a2e094f2d6.slice. Jan 13 20:52:58.616393 systemd[1]: Created slice kubepods-burstable-podd2521fa2_61ba_4676_843f_c14c317e5358.slice - libcontainer container kubepods-burstable-podd2521fa2_61ba_4676_843f_c14c317e5358.slice. Jan 13 20:52:58.624455 systemd[1]: Created slice kubepods-burstable-pod1958f0eb_003b_4101_8885_8177a1e71a0d.slice - libcontainer container kubepods-burstable-pod1958f0eb_003b_4101_8885_8177a1e71a0d.slice. 
Jan 13 20:52:58.891706 containerd[1796]: time="2025-01-13T20:52:58.891632964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:52:58.902057 containerd[1796]: time="2025-01-13T20:52:58.901946118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:52:58.912610 containerd[1796]: time="2025-01-13T20:52:58.912485634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:0,}" Jan 13 20:52:58.921968 containerd[1796]: time="2025-01-13T20:52:58.921839306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:0,}" Jan 13 20:52:58.928345 containerd[1796]: time="2025-01-13T20:52:58.928231375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:0,}" Jan 13 20:52:58.950035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8-rootfs.mount: Deactivated successfully. 
Jan 13 20:52:59.031012 containerd[1796]: time="2025-01-13T20:52:59.030912750Z" level=info msg="shim disconnected" id=77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8 namespace=k8s.io Jan 13 20:52:59.031012 containerd[1796]: time="2025-01-13T20:52:59.030940135Z" level=warning msg="cleaning up after shim disconnected" id=77b2984274d95a6b645c53a7d086afc7bcc61baad39171a43d1eb94d1b743bb8 namespace=k8s.io Jan 13 20:52:59.031012 containerd[1796]: time="2025-01-13T20:52:59.030961526Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:52:59.070720 containerd[1796]: time="2025-01-13T20:52:59.070684676Z" level=error msg="Failed to destroy network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.071013 containerd[1796]: time="2025-01-13T20:52:59.070992925Z" level=error msg="encountered an error cleaning up failed sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.071062 containerd[1796]: time="2025-01-13T20:52:59.071046135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.071300 kubelet[3275]: E0113 
20:52:59.071254 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.071401 kubelet[3275]: E0113 20:52:59.071337 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:52:59.071401 kubelet[3275]: E0113 20:52:59.071356 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:52:59.071479 kubelet[3275]: E0113 20:52:59.071398 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" podUID="ce9b51cb-e99a-4abc-aeff-27a2e094f2d6" Jan 13 20:52:59.072947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae-shm.mount: Deactivated successfully. Jan 13 20:52:59.080755 containerd[1796]: time="2025-01-13T20:52:59.080722576Z" level=error msg="Failed to destroy network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.080918 containerd[1796]: time="2025-01-13T20:52:59.080898742Z" level=error msg="Failed to destroy network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.080995 containerd[1796]: time="2025-01-13T20:52:59.080976532Z" level=error msg="encountered an error cleaning up failed sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.081042 containerd[1796]: time="2025-01-13T20:52:59.081024081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for 
sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.081106 containerd[1796]: time="2025-01-13T20:52:59.081062644Z" level=error msg="encountered an error cleaning up failed sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.081106 containerd[1796]: time="2025-01-13T20:52:59.081095805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.081198 kubelet[3275]: E0113 20:52:59.081175 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.081238 kubelet[3275]: E0113 20:52:59.081201 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.081238 kubelet[3275]: E0113 20:52:59.081216 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:52:59.081238 kubelet[3275]: E0113 20:52:59.081222 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:52:59.081238 kubelet[3275]: E0113 20:52:59.081229 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:52:59.081312 kubelet[3275]: E0113 20:52:59.081233 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:52:59.081312 kubelet[3275]: E0113 20:52:59.081253 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" podUID="bf4f0ae4-b718-4aef-9209-5a0f4e005958" Jan 13 20:52:59.081362 kubelet[3275]: E0113 20:52:59.081253 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" podUID="e3cc6f57-01fe-4853-a572-df94cbb24821" Jan 13 20:52:59.083906 containerd[1796]: time="2025-01-13T20:52:59.083884675Z" level=error msg="Failed to destroy network for 
sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.084084 containerd[1796]: time="2025-01-13T20:52:59.084072587Z" level=error msg="encountered an error cleaning up failed sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.084118 containerd[1796]: time="2025-01-13T20:52:59.084107437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.084217 kubelet[3275]: E0113 20:52:59.084200 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.084259 kubelet[3275]: E0113 20:52:59.084225 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:52:59.084259 kubelet[3275]: E0113 20:52:59.084236 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:52:59.084303 kubelet[3275]: E0113 20:52:59.084259 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxnfb" podUID="d2521fa2-61ba-4676-843f-c14c317e5358" Jan 13 20:52:59.084453 containerd[1796]: time="2025-01-13T20:52:59.084409384Z" level=error msg="Failed to destroy network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.084585 containerd[1796]: time="2025-01-13T20:52:59.084542807Z" level=error msg="encountered an error 
cleaning up failed sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.084585 containerd[1796]: time="2025-01-13T20:52:59.084571389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.084648 kubelet[3275]: E0113 20:52:59.084638 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.084678 kubelet[3275]: E0113 20:52:59.084653 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:52:59.084678 kubelet[3275]: E0113 20:52:59.084662 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:52:59.084717 kubelet[3275]: E0113 20:52:59.084677 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gnptx" podUID="1958f0eb-003b-4101-8885-8177a1e71a0d" Jan 13 20:52:59.566178 systemd[1]: Created slice kubepods-besteffort-pod041808ec_5de1_4583_be60_748668409e39.slice - libcontainer container kubepods-besteffort-pod041808ec_5de1_4583_be60_748668409e39.slice. 
Jan 13 20:52:59.571333 containerd[1796]: time="2025-01-13T20:52:59.571250463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:0,}" Jan 13 20:52:59.608920 containerd[1796]: time="2025-01-13T20:52:59.608878650Z" level=error msg="Failed to destroy network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.609114 containerd[1796]: time="2025-01-13T20:52:59.609100561Z" level=error msg="encountered an error cleaning up failed sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.609161 containerd[1796]: time="2025-01-13T20:52:59.609141079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.609394 kubelet[3275]: E0113 20:52:59.609329 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.609574 kubelet[3275]: E0113 20:52:59.609396 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:59.609574 kubelet[3275]: E0113 20:52:59.609408 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:59.609574 kubelet[3275]: E0113 20:52:59.609433 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h7b85" podUID="041808ec-5de1-4583-be60-748668409e39" Jan 13 20:52:59.636044 kubelet[3275]: I0113 20:52:59.636021 3275 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1" Jan 13 20:52:59.636535 containerd[1796]: time="2025-01-13T20:52:59.636511048Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" Jan 13 20:52:59.636721 containerd[1796]: time="2025-01-13T20:52:59.636706753Z" level=info msg="Ensure that sandbox bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1 in task-service has been cleanup successfully" Jan 13 20:52:59.636845 containerd[1796]: time="2025-01-13T20:52:59.636830572Z" level=info msg="TearDown network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" successfully" Jan 13 20:52:59.636845 containerd[1796]: time="2025-01-13T20:52:59.636842914Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" returns successfully" Jan 13 20:52:59.637150 containerd[1796]: time="2025-01-13T20:52:59.637130367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:52:59.637804 containerd[1796]: time="2025-01-13T20:52:59.637793210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:52:59.637848 kubelet[3275]: I0113 20:52:59.637821 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f" Jan 13 20:52:59.638012 containerd[1796]: time="2025-01-13T20:52:59.637999155Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" Jan 13 20:52:59.638113 containerd[1796]: time="2025-01-13T20:52:59.638102871Z" level=info msg="Ensure that sandbox e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f in task-service has been cleanup successfully" Jan 13 20:52:59.638172 kubelet[3275]: I0113 20:52:59.638160 3275 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0" Jan 13 20:52:59.638201 containerd[1796]: time="2025-01-13T20:52:59.638194953Z" level=info msg="TearDown network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" successfully" Jan 13 20:52:59.638226 containerd[1796]: time="2025-01-13T20:52:59.638203014Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" returns successfully" Jan 13 20:52:59.638355 containerd[1796]: time="2025-01-13T20:52:59.638342924Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" Jan 13 20:52:59.638385 containerd[1796]: time="2025-01-13T20:52:59.638353517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:1,}" Jan 13 20:52:59.638446 containerd[1796]: time="2025-01-13T20:52:59.638435826Z" level=info msg="Ensure that sandbox ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0 in task-service has been cleanup successfully" Jan 13 20:52:59.638529 containerd[1796]: time="2025-01-13T20:52:59.638519387Z" level=info msg="TearDown network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" successfully" Jan 13 20:52:59.638552 containerd[1796]: time="2025-01-13T20:52:59.638529091Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" returns successfully" Jan 13 20:52:59.638571 kubelet[3275]: I0113 20:52:59.638545 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3" Jan 13 20:52:59.638762 containerd[1796]: time="2025-01-13T20:52:59.638749330Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:1,}" Jan 13 20:52:59.638874 containerd[1796]: time="2025-01-13T20:52:59.638749609Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" Jan 13 20:52:59.638947 containerd[1796]: time="2025-01-13T20:52:59.638935841Z" level=info msg="Ensure that sandbox 8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3 in task-service has been cleanup successfully" Jan 13 20:52:59.638989 kubelet[3275]: I0113 20:52:59.638978 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae" Jan 13 20:52:59.639029 containerd[1796]: time="2025-01-13T20:52:59.639018819Z" level=info msg="TearDown network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" successfully" Jan 13 20:52:59.639051 containerd[1796]: time="2025-01-13T20:52:59.639029816Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" returns successfully" Jan 13 20:52:59.639207 containerd[1796]: time="2025-01-13T20:52:59.639197535Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" Jan 13 20:52:59.639266 containerd[1796]: time="2025-01-13T20:52:59.639255543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:1,}" Jan 13 20:52:59.639291 containerd[1796]: time="2025-01-13T20:52:59.639277308Z" level=info msg="Ensure that sandbox d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae in task-service has been cleanup successfully" Jan 13 20:52:59.639352 containerd[1796]: time="2025-01-13T20:52:59.639344653Z" level=info msg="TearDown network for sandbox 
\"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" successfully" Jan 13 20:52:59.639378 containerd[1796]: time="2025-01-13T20:52:59.639352358Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" returns successfully" Jan 13 20:52:59.639480 kubelet[3275]: I0113 20:52:59.639471 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00" Jan 13 20:52:59.639525 containerd[1796]: time="2025-01-13T20:52:59.639500394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:1,}" Jan 13 20:52:59.639702 containerd[1796]: time="2025-01-13T20:52:59.639689584Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" Jan 13 20:52:59.639814 containerd[1796]: time="2025-01-13T20:52:59.639803182Z" level=info msg="Ensure that sandbox 1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00 in task-service has been cleanup successfully" Jan 13 20:52:59.639904 containerd[1796]: time="2025-01-13T20:52:59.639893396Z" level=info msg="TearDown network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" successfully" Jan 13 20:52:59.639921 containerd[1796]: time="2025-01-13T20:52:59.639905238Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" returns successfully" Jan 13 20:52:59.640068 containerd[1796]: time="2025-01-13T20:52:59.640060268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:52:59.676129 containerd[1796]: time="2025-01-13T20:52:59.676093926Z" level=error msg="Failed to destroy network for sandbox 
\"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.676328 containerd[1796]: time="2025-01-13T20:52:59.676315877Z" level=error msg="encountered an error cleaning up failed sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.676368 containerd[1796]: time="2025-01-13T20:52:59.676358134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.676549 kubelet[3275]: E0113 20:52:59.676515 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.676595 kubelet[3275]: E0113 20:52:59.676576 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:52:59.676631 kubelet[3275]: E0113 20:52:59.676596 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:52:59.676668 kubelet[3275]: E0113 20:52:59.676635 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" podUID="bf4f0ae4-b718-4aef-9209-5a0f4e005958" Jan 13 20:52:59.680233 containerd[1796]: time="2025-01-13T20:52:59.680152779Z" level=error msg="Failed to destroy network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680233 
containerd[1796]: time="2025-01-13T20:52:59.680152814Z" level=error msg="Failed to destroy network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680376 containerd[1796]: time="2025-01-13T20:52:59.680358866Z" level=error msg="encountered an error cleaning up failed sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680425 containerd[1796]: time="2025-01-13T20:52:59.680379911Z" level=error msg="Failed to destroy network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680425 containerd[1796]: time="2025-01-13T20:52:59.680388027Z" level=error msg="encountered an error cleaning up failed sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680489 containerd[1796]: time="2025-01-13T20:52:59.680435051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680489 containerd[1796]: time="2025-01-13T20:52:59.680448330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680583 containerd[1796]: time="2025-01-13T20:52:59.680568009Z" level=error msg="encountered an error cleaning up failed sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680611 kubelet[3275]: E0113 20:52:59.680580 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680637 kubelet[3275]: E0113 20:52:59.680614 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:52:59.680658 containerd[1796]: time="2025-01-13T20:52:59.680599556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680677 kubelet[3275]: E0113 20:52:59.680634 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:52:59.680677 kubelet[3275]: E0113 20:52:59.680580 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680677 kubelet[3275]: E0113 20:52:59.680659 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxnfb" podUID="d2521fa2-61ba-4676-843f-c14c317e5358" Jan 13 20:52:59.680746 kubelet[3275]: E0113 20:52:59.680673 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.680746 kubelet[3275]: E0113 20:52:59.680683 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:52:59.680746 kubelet[3275]: E0113 20:52:59.680695 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:52:59.680746 kubelet[3275]: E0113 
20:52:59.680706 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:52:59.680829 kubelet[3275]: E0113 20:52:59.680723 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gnptx" podUID="1958f0eb-003b-4101-8885-8177a1e71a0d" Jan 13 20:52:59.680829 kubelet[3275]: E0113 20:52:59.680696 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:52:59.680829 kubelet[3275]: E0113 20:52:59.680744 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" podUID="e3cc6f57-01fe-4853-a572-df94cbb24821" Jan 13 20:52:59.685235 containerd[1796]: time="2025-01-13T20:52:59.685172465Z" level=error msg="Failed to destroy network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.685417 containerd[1796]: time="2025-01-13T20:52:59.685372792Z" level=error msg="encountered an error cleaning up failed sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.685417 containerd[1796]: time="2025-01-13T20:52:59.685405721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
20:52:59.685547 kubelet[3275]: E0113 20:52:59.685530 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.685583 kubelet[3275]: E0113 20:52:59.685557 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:52:59.685583 kubelet[3275]: E0113 20:52:59.685569 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:52:59.685628 kubelet[3275]: E0113 20:52:59.685590 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" podUID="ce9b51cb-e99a-4abc-aeff-27a2e094f2d6" Jan 13 20:52:59.685671 containerd[1796]: time="2025-01-13T20:52:59.685591899Z" level=error msg="Failed to destroy network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.685739 containerd[1796]: time="2025-01-13T20:52:59.685727143Z" level=error msg="encountered an error cleaning up failed sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.685763 containerd[1796]: time="2025-01-13T20:52:59.685752595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.685825 kubelet[3275]: E0113 20:52:59.685809 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:52:59.685845 kubelet[3275]: E0113 20:52:59.685832 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:59.685862 kubelet[3275]: E0113 20:52:59.685843 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:52:59.685883 kubelet[3275]: E0113 20:52:59.685861 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h7b85" 
podUID="041808ec-5de1-4583-be60-748668409e39" Jan 13 20:52:59.938862 systemd[1]: run-netns-cni\x2d74a490df\x2dfd84\x2d92e1\x2df94c\x2d6e457197f0b5.mount: Deactivated successfully. Jan 13 20:52:59.938912 systemd[1]: run-netns-cni\x2d33ee4287\x2d38ac\x2d2856\x2d2f9c\x2ded1579bcac32.mount: Deactivated successfully. Jan 13 20:52:59.938945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0-shm.mount: Deactivated successfully. Jan 13 20:52:59.938988 systemd[1]: run-netns-cni\x2db637251b\x2d623b\x2d93e3\x2d057b\x2dc09a36a85b49.mount: Deactivated successfully. Jan 13 20:52:59.939020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3-shm.mount: Deactivated successfully. Jan 13 20:52:59.939054 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1-shm.mount: Deactivated successfully. Jan 13 20:52:59.939089 systemd[1]: run-netns-cni\x2d0f10bac8\x2d4437\x2dc785\x2d3fcb\x2dca76abcda359.mount: Deactivated successfully. Jan 13 20:52:59.939137 systemd[1]: run-netns-cni\x2d6981e2cf\x2d63ab\x2d1925\x2d56b3\x2dd198cb3f70c5.mount: Deactivated successfully. Jan 13 20:52:59.939224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00-shm.mount: Deactivated successfully. 
Jan 13 20:53:00.641559 kubelet[3275]: I0113 20:53:00.641539 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3" Jan 13 20:53:00.641856 containerd[1796]: time="2025-01-13T20:53:00.641836678Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\"" Jan 13 20:53:00.642026 containerd[1796]: time="2025-01-13T20:53:00.641967911Z" level=info msg="Ensure that sandbox 0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3 in task-service has been cleanup successfully" Jan 13 20:53:00.642092 containerd[1796]: time="2025-01-13T20:53:00.642078571Z" level=info msg="TearDown network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" successfully" Jan 13 20:53:00.642124 containerd[1796]: time="2025-01-13T20:53:00.642092368Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" returns successfully" Jan 13 20:53:00.642193 kubelet[3275]: I0113 20:53:00.642183 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00" Jan 13 20:53:00.642262 containerd[1796]: time="2025-01-13T20:53:00.642248987Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" Jan 13 20:53:00.642304 containerd[1796]: time="2025-01-13T20:53:00.642295743Z" level=info msg="TearDown network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" successfully" Jan 13 20:53:00.642304 containerd[1796]: time="2025-01-13T20:53:00.642302852Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" returns successfully" Jan 13 20:53:00.642447 containerd[1796]: time="2025-01-13T20:53:00.642434683Z" level=info msg="StopPodSandbox for 
\"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\"" Jan 13 20:53:00.642521 containerd[1796]: time="2025-01-13T20:53:00.642507052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:2,}" Jan 13 20:53:00.642575 containerd[1796]: time="2025-01-13T20:53:00.642562693Z" level=info msg="Ensure that sandbox 4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00 in task-service has been cleanup successfully" Jan 13 20:53:00.642668 containerd[1796]: time="2025-01-13T20:53:00.642657440Z" level=info msg="TearDown network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" successfully" Jan 13 20:53:00.642709 containerd[1796]: time="2025-01-13T20:53:00.642667958Z" level=info msg="StopPodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" returns successfully" Jan 13 20:53:00.642798 containerd[1796]: time="2025-01-13T20:53:00.642785697Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" Jan 13 20:53:00.642828 kubelet[3275]: I0113 20:53:00.642806 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c" Jan 13 20:53:00.642851 containerd[1796]: time="2025-01-13T20:53:00.642831662Z" level=info msg="TearDown network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" successfully" Jan 13 20:53:00.642851 containerd[1796]: time="2025-01-13T20:53:00.642839882Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" returns successfully" Jan 13 20:53:00.643035 containerd[1796]: time="2025-01-13T20:53:00.643026619Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\"" Jan 13 20:53:00.643062 containerd[1796]: 
time="2025-01-13T20:53:00.643045665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:2,}" Jan 13 20:53:00.643127 containerd[1796]: time="2025-01-13T20:53:00.643118032Z" level=info msg="Ensure that sandbox 456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c in task-service has been cleanup successfully" Jan 13 20:53:00.643220 containerd[1796]: time="2025-01-13T20:53:00.643209002Z" level=info msg="TearDown network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" successfully" Jan 13 20:53:00.643246 containerd[1796]: time="2025-01-13T20:53:00.643220060Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" returns successfully" Jan 13 20:53:00.643321 containerd[1796]: time="2025-01-13T20:53:00.643313905Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" Jan 13 20:53:00.643366 kubelet[3275]: I0113 20:53:00.643356 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db" Jan 13 20:53:00.643397 containerd[1796]: time="2025-01-13T20:53:00.643359693Z" level=info msg="TearDown network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" successfully" Jan 13 20:53:00.643397 containerd[1796]: time="2025-01-13T20:53:00.643365838Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" returns successfully" Jan 13 20:53:00.643791 systemd[1]: run-netns-cni\x2d87a6d9e3\x2db25f\x2d5ab1\x2df4a8\x2d775c384404bb.mount: Deactivated successfully. 
Jan 13 20:53:00.643971 containerd[1796]: time="2025-01-13T20:53:00.643790583Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\"" Jan 13 20:53:00.643971 containerd[1796]: time="2025-01-13T20:53:00.643964410Z" level=info msg="Ensure that sandbox c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db in task-service has been cleanup successfully" Jan 13 20:53:00.644030 containerd[1796]: time="2025-01-13T20:53:00.643973032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:53:00.644651 kubelet[3275]: I0113 20:53:00.644576 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2" Jan 13 20:53:00.645035 containerd[1796]: time="2025-01-13T20:53:00.644999469Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\"" Jan 13 20:53:00.645220 containerd[1796]: time="2025-01-13T20:53:00.645202618Z" level=info msg="TearDown network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" successfully" Jan 13 20:53:00.645259 containerd[1796]: time="2025-01-13T20:53:00.645214650Z" level=info msg="Ensure that sandbox fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2 in task-service has been cleanup successfully" Jan 13 20:53:00.645360 containerd[1796]: time="2025-01-13T20:53:00.645219232Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" returns successfully" Jan 13 20:53:00.645360 containerd[1796]: time="2025-01-13T20:53:00.645340606Z" level=info msg="TearDown network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" successfully" Jan 13 20:53:00.645438 containerd[1796]: time="2025-01-13T20:53:00.645365140Z" level=info 
msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" returns successfully" Jan 13 20:53:00.645710 containerd[1796]: time="2025-01-13T20:53:00.645696126Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" Jan 13 20:53:00.645766 containerd[1796]: time="2025-01-13T20:53:00.645706060Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" Jan 13 20:53:00.645801 containerd[1796]: time="2025-01-13T20:53:00.645752060Z" level=info msg="TearDown network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" successfully" Jan 13 20:53:00.645801 containerd[1796]: time="2025-01-13T20:53:00.645786050Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" returns successfully" Jan 13 20:53:00.645864 containerd[1796]: time="2025-01-13T20:53:00.645767918Z" level=info msg="TearDown network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" successfully" Jan 13 20:53:00.645864 containerd[1796]: time="2025-01-13T20:53:00.645819547Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" returns successfully" Jan 13 20:53:00.646046 containerd[1796]: time="2025-01-13T20:53:00.646033847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:53:00.646081 kubelet[3275]: I0113 20:53:00.646044 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59" Jan 13 20:53:00.646094 systemd[1]: run-netns-cni\x2d4d08cff2\x2d0a2c\x2def59\x2dc27b\x2d132ac31b53b3.mount: Deactivated successfully. 
Jan 13 20:53:00.646176 containerd[1796]: time="2025-01-13T20:53:00.646033369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:2,}" Jan 13 20:53:00.646163 systemd[1]: run-netns-cni\x2d52c93889\x2db578\x2d732d\x2de42d\x2dbc4cc46d1609.mount: Deactivated successfully. Jan 13 20:53:00.646316 containerd[1796]: time="2025-01-13T20:53:00.646306091Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\"" Jan 13 20:53:00.646429 containerd[1796]: time="2025-01-13T20:53:00.646418779Z" level=info msg="Ensure that sandbox b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59 in task-service has been cleanup successfully" Jan 13 20:53:00.646534 containerd[1796]: time="2025-01-13T20:53:00.646520180Z" level=info msg="TearDown network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" successfully" Jan 13 20:53:00.646567 containerd[1796]: time="2025-01-13T20:53:00.646531990Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" returns successfully" Jan 13 20:53:00.646707 containerd[1796]: time="2025-01-13T20:53:00.646691583Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" Jan 13 20:53:00.646751 containerd[1796]: time="2025-01-13T20:53:00.646742184Z" level=info msg="TearDown network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" successfully" Jan 13 20:53:00.646782 containerd[1796]: time="2025-01-13T20:53:00.646751060Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" returns successfully" Jan 13 20:53:00.646937 containerd[1796]: time="2025-01-13T20:53:00.646924672Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:2,}" Jan 13 20:53:00.647997 systemd[1]: run-netns-cni\x2d70439ef2\x2d61a0\x2dac22\x2dd1ae\x2d9b4b1db5ba21.mount: Deactivated successfully. Jan 13 20:53:00.648044 systemd[1]: run-netns-cni\x2d2edd6ec3\x2d45ca\x2d44a5\x2d845c\x2d6893281ef9e1.mount: Deactivated successfully. Jan 13 20:53:00.648080 systemd[1]: run-netns-cni\x2dc04ef944\x2d4a9a\x2d1f45\x2d0a64\x2d23d72809cac0.mount: Deactivated successfully. Jan 13 20:53:00.682666 containerd[1796]: time="2025-01-13T20:53:00.682617059Z" level=error msg="Failed to destroy network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.682900 containerd[1796]: time="2025-01-13T20:53:00.682882250Z" level=error msg="encountered an error cleaning up failed sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.682947 containerd[1796]: time="2025-01-13T20:53:00.682933497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.683117 kubelet[3275]: E0113 20:53:00.683094 3275 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.683159 kubelet[3275]: E0113 20:53:00.683143 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:53:00.683189 kubelet[3275]: E0113 20:53:00.683167 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:53:00.683218 kubelet[3275]: E0113 20:53:00.683201 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" podUID="ce9b51cb-e99a-4abc-aeff-27a2e094f2d6" Jan 13 20:53:00.685726 containerd[1796]: time="2025-01-13T20:53:00.685690001Z" level=error msg="Failed to destroy network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.685795 containerd[1796]: time="2025-01-13T20:53:00.685703793Z" level=error msg="Failed to destroy network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.685916 containerd[1796]: time="2025-01-13T20:53:00.685904912Z" level=error msg="encountered an error cleaning up failed sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.685949 containerd[1796]: time="2025-01-13T20:53:00.685923312Z" level=error msg="encountered an error cleaning up failed sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.685949 containerd[1796]: 
time="2025-01-13T20:53:00.685942100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.685986 containerd[1796]: time="2025-01-13T20:53:00.685958552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.686067 kubelet[3275]: E0113 20:53:00.686046 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.686099 kubelet[3275]: E0113 20:53:00.686078 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-h7b85" Jan 13 20:53:00.686099 kubelet[3275]: E0113 20:53:00.686090 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:53:00.686153 kubelet[3275]: E0113 20:53:00.686113 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h7b85" podUID="041808ec-5de1-4583-be60-748668409e39" Jan 13 20:53:00.686153 kubelet[3275]: E0113 20:53:00.686049 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.686258 kubelet[3275]: E0113 20:53:00.686145 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:53:00.686258 kubelet[3275]: E0113 20:53:00.686187 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:53:00.686258 kubelet[3275]: E0113 20:53:00.686214 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" podUID="bf4f0ae4-b718-4aef-9209-5a0f4e005958" Jan 13 20:53:00.686676 containerd[1796]: time="2025-01-13T20:53:00.686660432Z" level=error msg="Failed to destroy network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 13 20:53:00.686825 containerd[1796]: time="2025-01-13T20:53:00.686810908Z" level=error msg="encountered an error cleaning up failed sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.686865 containerd[1796]: time="2025-01-13T20:53:00.686842669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.686954 kubelet[3275]: E0113 20:53:00.686937 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.686979 kubelet[3275]: E0113 20:53:00.686967 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:53:00.687000 
kubelet[3275]: E0113 20:53:00.686981 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:53:00.687029 kubelet[3275]: E0113 20:53:00.687010 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxnfb" podUID="d2521fa2-61ba-4676-843f-c14c317e5358" Jan 13 20:53:00.690910 containerd[1796]: time="2025-01-13T20:53:00.690862757Z" level=error msg="Failed to destroy network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.694492 containerd[1796]: time="2025-01-13T20:53:00.694447271Z" level=error msg="encountered an error cleaning up failed sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.694685 containerd[1796]: time="2025-01-13T20:53:00.694646668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.694857 kubelet[3275]: E0113 20:53:00.694837 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.694921 kubelet[3275]: E0113 20:53:00.694880 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:53:00.694921 kubelet[3275]: E0113 20:53:00.694901 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:53:00.694984 kubelet[3275]: E0113 20:53:00.694941 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" podUID="e3cc6f57-01fe-4853-a572-df94cbb24821" Jan 13 20:53:00.695544 containerd[1796]: time="2025-01-13T20:53:00.695499150Z" level=error msg="Failed to destroy network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.695722 containerd[1796]: time="2025-01-13T20:53:00.695682425Z" level=error msg="encountered an error cleaning up failed sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.695754 containerd[1796]: time="2025-01-13T20:53:00.695715844Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.695824 kubelet[3275]: E0113 20:53:00.695788 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:00.695824 kubelet[3275]: E0113 20:53:00.695805 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:53:00.695824 kubelet[3275]: E0113 20:53:00.695815 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:53:00.695890 kubelet[3275]: E0113 20:53:00.695835 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gnptx" podUID="1958f0eb-003b-4101-8885-8177a1e71a0d" Jan 13 20:53:00.946464 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd-shm.mount: Deactivated successfully. Jan 13 20:53:01.652896 kubelet[3275]: I0113 20:53:01.652828 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33" Jan 13 20:53:01.654083 containerd[1796]: time="2025-01-13T20:53:01.654010179Z" level=info msg="StopPodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\"" Jan 13 20:53:01.655050 containerd[1796]: time="2025-01-13T20:53:01.654756257Z" level=info msg="Ensure that sandbox 0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33 in task-service has been cleanup successfully" Jan 13 20:53:01.655765 containerd[1796]: time="2025-01-13T20:53:01.655685068Z" level=info msg="TearDown network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" successfully" Jan 13 20:53:01.655949 containerd[1796]: time="2025-01-13T20:53:01.655766125Z" level=info msg="StopPodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" returns successfully" Jan 13 20:53:01.656520 kubelet[3275]: I0113 20:53:01.656473 3275 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3" Jan 13 20:53:01.656936 containerd[1796]: time="2025-01-13T20:53:01.656879110Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\"" Jan 13 20:53:01.657279 containerd[1796]: time="2025-01-13T20:53:01.657208326Z" level=info msg="TearDown network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" successfully" Jan 13 20:53:01.657279 containerd[1796]: time="2025-01-13T20:53:01.657256536Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" returns successfully" Jan 13 20:53:01.657933 containerd[1796]: time="2025-01-13T20:53:01.657873236Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" Jan 13 20:53:01.658120 containerd[1796]: time="2025-01-13T20:53:01.657890976Z" level=info msg="StopPodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\"" Jan 13 20:53:01.658256 containerd[1796]: time="2025-01-13T20:53:01.658202750Z" level=info msg="TearDown network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" successfully" Jan 13 20:53:01.658390 containerd[1796]: time="2025-01-13T20:53:01.658260621Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" returns successfully" Jan 13 20:53:01.658533 containerd[1796]: time="2025-01-13T20:53:01.658477606Z" level=info msg="Ensure that sandbox 6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3 in task-service has been cleanup successfully" Jan 13 20:53:01.658976 containerd[1796]: time="2025-01-13T20:53:01.658923530Z" level=info msg="TearDown network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" successfully" Jan 13 20:53:01.659153 containerd[1796]: time="2025-01-13T20:53:01.658974436Z" 
level=info msg="StopPodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" returns successfully" Jan 13 20:53:01.659388 containerd[1796]: time="2025-01-13T20:53:01.659327085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:3,}" Jan 13 20:53:01.659618 containerd[1796]: time="2025-01-13T20:53:01.659559463Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\"" Jan 13 20:53:01.659826 containerd[1796]: time="2025-01-13T20:53:01.659765867Z" level=info msg="TearDown network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" successfully" Jan 13 20:53:01.660013 containerd[1796]: time="2025-01-13T20:53:01.659822472Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" returns successfully" Jan 13 20:53:01.660460 containerd[1796]: time="2025-01-13T20:53:01.660399373Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" Jan 13 20:53:01.660739 containerd[1796]: time="2025-01-13T20:53:01.660680866Z" level=info msg="TearDown network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" successfully" Jan 13 20:53:01.660970 containerd[1796]: time="2025-01-13T20:53:01.660729324Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" returns successfully" Jan 13 20:53:01.661146 kubelet[3275]: I0113 20:53:01.660744 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116" Jan 13 20:53:01.661894 containerd[1796]: time="2025-01-13T20:53:01.661815767Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:3,}" Jan 13 20:53:01.662138 containerd[1796]: time="2025-01-13T20:53:01.661827869Z" level=info msg="StopPodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\"" Jan 13 20:53:01.662832 containerd[1796]: time="2025-01-13T20:53:01.662716821Z" level=info msg="Ensure that sandbox 1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116 in task-service has been cleanup successfully" Jan 13 20:53:01.663325 containerd[1796]: time="2025-01-13T20:53:01.663231472Z" level=info msg="TearDown network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" successfully" Jan 13 20:53:01.663325 containerd[1796]: time="2025-01-13T20:53:01.663304710Z" level=info msg="StopPodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" returns successfully" Jan 13 20:53:01.664000 containerd[1796]: time="2025-01-13T20:53:01.663933360Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\"" Jan 13 20:53:01.664375 containerd[1796]: time="2025-01-13T20:53:01.664271024Z" level=info msg="TearDown network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" successfully" Jan 13 20:53:01.664375 containerd[1796]: time="2025-01-13T20:53:01.664336026Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" returns successfully" Jan 13 20:53:01.664387 systemd[1]: run-netns-cni\x2d1af9aab4\x2d7398\x2d2768\x2d8cce\x2d8beb8e3bc40d.mount: Deactivated successfully. 
Jan 13 20:53:01.665388 kubelet[3275]: I0113 20:53:01.664454 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd" Jan 13 20:53:01.665594 containerd[1796]: time="2025-01-13T20:53:01.664989286Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" Jan 13 20:53:01.665594 containerd[1796]: time="2025-01-13T20:53:01.665385363Z" level=info msg="TearDown network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" successfully" Jan 13 20:53:01.665594 containerd[1796]: time="2025-01-13T20:53:01.665443838Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" returns successfully" Jan 13 20:53:01.665594 containerd[1796]: time="2025-01-13T20:53:01.665511505Z" level=info msg="StopPodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\"" Jan 13 20:53:01.666331 containerd[1796]: time="2025-01-13T20:53:01.666253749Z" level=info msg="Ensure that sandbox 2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd in task-service has been cleanup successfully" Jan 13 20:53:01.666513 containerd[1796]: time="2025-01-13T20:53:01.666449706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:3,}" Jan 13 20:53:01.666801 containerd[1796]: time="2025-01-13T20:53:01.666705733Z" level=info msg="TearDown network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" successfully" Jan 13 20:53:01.666801 containerd[1796]: time="2025-01-13T20:53:01.666774209Z" level=info msg="StopPodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" returns successfully" Jan 13 20:53:01.667638 containerd[1796]: time="2025-01-13T20:53:01.667586366Z" level=info msg="StopPodSandbox for 
\"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\"" Jan 13 20:53:01.667883 containerd[1796]: time="2025-01-13T20:53:01.667825113Z" level=info msg="TearDown network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" successfully" Jan 13 20:53:01.668024 containerd[1796]: time="2025-01-13T20:53:01.667882666Z" level=info msg="StopPodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" returns successfully" Jan 13 20:53:01.668181 kubelet[3275]: I0113 20:53:01.667931 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0" Jan 13 20:53:01.668668 containerd[1796]: time="2025-01-13T20:53:01.668600410Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" Jan 13 20:53:01.668923 containerd[1796]: time="2025-01-13T20:53:01.668877376Z" level=info msg="TearDown network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" successfully" Jan 13 20:53:01.669045 containerd[1796]: time="2025-01-13T20:53:01.668923296Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" returns successfully" Jan 13 20:53:01.669244 containerd[1796]: time="2025-01-13T20:53:01.669024258Z" level=info msg="StopPodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\"" Jan 13 20:53:01.669577 containerd[1796]: time="2025-01-13T20:53:01.669511622Z" level=info msg="Ensure that sandbox 8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0 in task-service has been cleanup successfully" Jan 13 20:53:01.669945 containerd[1796]: time="2025-01-13T20:53:01.669893253Z" level=info msg="TearDown network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" successfully" Jan 13 20:53:01.669945 containerd[1796]: time="2025-01-13T20:53:01.669936293Z" 
level=info msg="StopPodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" returns successfully" Jan 13 20:53:01.670375 containerd[1796]: time="2025-01-13T20:53:01.669944182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:3,}" Jan 13 20:53:01.670614 containerd[1796]: time="2025-01-13T20:53:01.670529886Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\"" Jan 13 20:53:01.670861 containerd[1796]: time="2025-01-13T20:53:01.670806853Z" level=info msg="TearDown network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" successfully" Jan 13 20:53:01.671051 containerd[1796]: time="2025-01-13T20:53:01.670864010Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" returns successfully" Jan 13 20:53:01.671355 kubelet[3275]: I0113 20:53:01.671304 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1" Jan 13 20:53:01.671545 containerd[1796]: time="2025-01-13T20:53:01.671469743Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" Jan 13 20:53:01.671753 containerd[1796]: time="2025-01-13T20:53:01.671670600Z" level=info msg="TearDown network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" successfully" Jan 13 20:53:01.671753 containerd[1796]: time="2025-01-13T20:53:01.671701222Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" returns successfully" Jan 13 20:53:01.672350 containerd[1796]: time="2025-01-13T20:53:01.672281635Z" level=info msg="StopPodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\"" Jan 13 20:53:01.672661 
containerd[1796]: time="2025-01-13T20:53:01.672604417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:53:01.672839 containerd[1796]: time="2025-01-13T20:53:01.672791016Z" level=info msg="Ensure that sandbox 24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1 in task-service has been cleanup successfully" Jan 13 20:53:01.673246 containerd[1796]: time="2025-01-13T20:53:01.673177464Z" level=info msg="TearDown network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" successfully" Jan 13 20:53:01.673246 containerd[1796]: time="2025-01-13T20:53:01.673225437Z" level=info msg="StopPodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" returns successfully" Jan 13 20:53:01.673850 containerd[1796]: time="2025-01-13T20:53:01.673802183Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\"" Jan 13 20:53:01.674071 containerd[1796]: time="2025-01-13T20:53:01.674033761Z" level=info msg="TearDown network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" successfully" Jan 13 20:53:01.674189 containerd[1796]: time="2025-01-13T20:53:01.674071008Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" returns successfully" Jan 13 20:53:01.674649 containerd[1796]: time="2025-01-13T20:53:01.674560511Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" Jan 13 20:53:01.674817 containerd[1796]: time="2025-01-13T20:53:01.674763904Z" level=info msg="TearDown network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" successfully" Jan 13 20:53:01.674817 containerd[1796]: time="2025-01-13T20:53:01.674797576Z" level=info msg="StopPodSandbox for 
\"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" returns successfully" Jan 13 20:53:01.675713 containerd[1796]: time="2025-01-13T20:53:01.675650653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:53:01.677039 systemd[1]: run-netns-cni\x2d507f3642\x2d71e2\x2d64d1\x2de3eb\x2df4f63c2d4ac0.mount: Deactivated successfully. Jan 13 20:53:01.677362 systemd[1]: run-netns-cni\x2d79af1abc\x2db5b0\x2d92af\x2dba69\x2d1463cfed05a6.mount: Deactivated successfully. Jan 13 20:53:01.677556 systemd[1]: run-netns-cni\x2ddcc4f6ce\x2db265\x2dadf6\x2d3acf\x2d2b961ed9fde3.mount: Deactivated successfully. Jan 13 20:53:01.686355 systemd[1]: run-netns-cni\x2dacec417b\x2d13a9\x2d66ee\x2d3924\x2d30cfed87cecd.mount: Deactivated successfully. Jan 13 20:53:01.686488 systemd[1]: run-netns-cni\x2d4b004e16\x2d9863\x2db397\x2d892c\x2dfd58385fc9f8.mount: Deactivated successfully. 
Jan 13 20:53:01.753171 containerd[1796]: time="2025-01-13T20:53:01.753127748Z" level=error msg="Failed to destroy network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753279 containerd[1796]: time="2025-01-13T20:53:01.753212638Z" level=error msg="Failed to destroy network for sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753545 containerd[1796]: time="2025-01-13T20:53:01.753401084Z" level=error msg="encountered an error cleaning up failed sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753545 containerd[1796]: time="2025-01-13T20:53:01.753419339Z" level=error msg="encountered an error cleaning up failed sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753545 containerd[1796]: time="2025-01-13T20:53:01.753455026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753545 containerd[1796]: time="2025-01-13T20:53:01.753466850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753685 containerd[1796]: time="2025-01-13T20:53:01.753630559Z" level=error msg="Failed to destroy network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753722 kubelet[3275]: E0113 20:53:01.753636 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753722 kubelet[3275]: E0113 20:53:01.753679 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:53:01.753722 kubelet[3275]: E0113 20:53:01.753694 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:53:01.753722 kubelet[3275]: E0113 20:53:01.753635 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753836 kubelet[3275]: E0113 20:53:01.753725 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h7b85" podUID="041808ec-5de1-4583-be60-748668409e39" Jan 13 20:53:01.753836 kubelet[3275]: E0113 20:53:01.753740 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:53:01.753836 kubelet[3275]: E0113 20:53:01.753760 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:53:01.753929 containerd[1796]: time="2025-01-13T20:53:01.753770777Z" level=error msg="encountered an error cleaning up failed sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753929 containerd[1796]: time="2025-01-13T20:53:01.753803363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753971 kubelet[3275]: E0113 20:53:01.753788 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxnfb" podUID="d2521fa2-61ba-4676-843f-c14c317e5358" Jan 13 20:53:01.753971 kubelet[3275]: E0113 20:53:01.753885 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.753971 kubelet[3275]: E0113 20:53:01.753911 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:53:01.754043 kubelet[3275]: E0113 20:53:01.753931 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:53:01.754043 kubelet[3275]: E0113 20:53:01.753958 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gnptx" podUID="1958f0eb-003b-4101-8885-8177a1e71a0d" Jan 13 20:53:01.754098 containerd[1796]: time="2025-01-13T20:53:01.754012859Z" level=error msg="Failed to destroy network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.754171 containerd[1796]: time="2025-01-13T20:53:01.754158445Z" level=error msg="encountered an error cleaning up failed sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.754208 containerd[1796]: time="2025-01-13T20:53:01.754181122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:3,} failed, error" error="failed to 
setup network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.754262 kubelet[3275]: E0113 20:53:01.754247 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.754298 kubelet[3275]: E0113 20:53:01.754268 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:53:01.754298 kubelet[3275]: E0113 20:53:01.754278 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:53:01.754354 kubelet[3275]: E0113 20:53:01.754296 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" podUID="ce9b51cb-e99a-4abc-aeff-27a2e094f2d6" Jan 13 20:53:01.757337 containerd[1796]: time="2025-01-13T20:53:01.757307424Z" level=error msg="Failed to destroy network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.757491 containerd[1796]: time="2025-01-13T20:53:01.757478461Z" level=error msg="encountered an error cleaning up failed sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.757543 containerd[1796]: time="2025-01-13T20:53:01.757510013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
20:53:01.757621 kubelet[3275]: E0113 20:53:01.757603 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.757658 kubelet[3275]: E0113 20:53:01.757635 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:53:01.757658 kubelet[3275]: E0113 20:53:01.757649 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:53:01.757716 kubelet[3275]: E0113 20:53:01.757674 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" podUID="bf4f0ae4-b718-4aef-9209-5a0f4e005958" Jan 13 20:53:01.757863 containerd[1796]: time="2025-01-13T20:53:01.757845567Z" level=error msg="Failed to destroy network for sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.758010 containerd[1796]: time="2025-01-13T20:53:01.757997574Z" level=error msg="encountered an error cleaning up failed sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.758031 containerd[1796]: time="2025-01-13T20:53:01.758021842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.758113 kubelet[3275]: E0113 20:53:01.758097 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:01.758138 kubelet[3275]: E0113 20:53:01.758125 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:53:01.758165 kubelet[3275]: E0113 20:53:01.758142 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:53:01.758206 kubelet[3275]: E0113 20:53:01.758195 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" podUID="e3cc6f57-01fe-4853-a572-df94cbb24821" Jan 13 20:53:02.673566 kubelet[3275]: I0113 20:53:02.673549 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933" Jan 13 20:53:02.673889 containerd[1796]: time="2025-01-13T20:53:02.673869485Z" level=info msg="StopPodSandbox for \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\"" Jan 13 20:53:02.674039 containerd[1796]: time="2025-01-13T20:53:02.674027267Z" level=info msg="Ensure that sandbox e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933 in task-service has been cleanup successfully" Jan 13 20:53:02.674158 containerd[1796]: time="2025-01-13T20:53:02.674139430Z" level=info msg="TearDown network for sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\" successfully" Jan 13 20:53:02.674204 containerd[1796]: time="2025-01-13T20:53:02.674152935Z" level=info msg="StopPodSandbox for \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\" returns successfully" Jan 13 20:53:02.674302 containerd[1796]: time="2025-01-13T20:53:02.674267650Z" level=info msg="StopPodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\"" Jan 13 20:53:02.674375 containerd[1796]: time="2025-01-13T20:53:02.674316618Z" level=info msg="TearDown network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" successfully" Jan 13 20:53:02.674375 containerd[1796]: time="2025-01-13T20:53:02.674342721Z" level=info msg="StopPodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" returns successfully" Jan 13 20:53:02.674424 kubelet[3275]: I0113 20:53:02.674338 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795" Jan 13 20:53:02.674465 containerd[1796]: 
time="2025-01-13T20:53:02.674453843Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\"" Jan 13 20:53:02.674516 containerd[1796]: time="2025-01-13T20:53:02.674506434Z" level=info msg="TearDown network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" successfully" Jan 13 20:53:02.674535 containerd[1796]: time="2025-01-13T20:53:02.674516352Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" returns successfully" Jan 13 20:53:02.674587 containerd[1796]: time="2025-01-13T20:53:02.674574040Z" level=info msg="StopPodSandbox for \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\"" Jan 13 20:53:02.674617 containerd[1796]: time="2025-01-13T20:53:02.674605933Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" Jan 13 20:53:02.674686 containerd[1796]: time="2025-01-13T20:53:02.674655389Z" level=info msg="TearDown network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" successfully" Jan 13 20:53:02.674705 containerd[1796]: time="2025-01-13T20:53:02.674686917Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" returns successfully" Jan 13 20:53:02.674721 containerd[1796]: time="2025-01-13T20:53:02.674703555Z" level=info msg="Ensure that sandbox f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795 in task-service has been cleanup successfully" Jan 13 20:53:02.674796 containerd[1796]: time="2025-01-13T20:53:02.674787504Z" level=info msg="TearDown network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\" successfully" Jan 13 20:53:02.674815 containerd[1796]: time="2025-01-13T20:53:02.674796004Z" level=info msg="StopPodSandbox for \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\" returns successfully" Jan 13 20:53:02.674904 
containerd[1796]: time="2025-01-13T20:53:02.674894895Z" level=info msg="StopPodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\"" Jan 13 20:53:02.674930 containerd[1796]: time="2025-01-13T20:53:02.674920665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:4,}" Jan 13 20:53:02.674961 containerd[1796]: time="2025-01-13T20:53:02.674939652Z" level=info msg="TearDown network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" successfully" Jan 13 20:53:02.674961 containerd[1796]: time="2025-01-13T20:53:02.674946678Z" level=info msg="StopPodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" returns successfully" Jan 13 20:53:02.675047 kubelet[3275]: I0113 20:53:02.675039 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265" Jan 13 20:53:02.675069 containerd[1796]: time="2025-01-13T20:53:02.675040307Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\"" Jan 13 20:53:02.675100 containerd[1796]: time="2025-01-13T20:53:02.675078588Z" level=info msg="TearDown network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" successfully" Jan 13 20:53:02.675118 containerd[1796]: time="2025-01-13T20:53:02.675101648Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" returns successfully" Jan 13 20:53:02.675225 containerd[1796]: time="2025-01-13T20:53:02.675216107Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" Jan 13 20:53:02.675256 containerd[1796]: time="2025-01-13T20:53:02.675247014Z" level=info msg="StopPodSandbox for \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\"" Jan 13 
20:53:02.675279 containerd[1796]: time="2025-01-13T20:53:02.675260806Z" level=info msg="TearDown network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" successfully" Jan 13 20:53:02.675279 containerd[1796]: time="2025-01-13T20:53:02.675270704Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" returns successfully" Jan 13 20:53:02.675352 containerd[1796]: time="2025-01-13T20:53:02.675341674Z" level=info msg="Ensure that sandbox d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265 in task-service has been cleanup successfully" Jan 13 20:53:02.675426 containerd[1796]: time="2025-01-13T20:53:02.675418582Z" level=info msg="TearDown network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\" successfully" Jan 13 20:53:02.675449 containerd[1796]: time="2025-01-13T20:53:02.675426170Z" level=info msg="StopPodSandbox for \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\" returns successfully" Jan 13 20:53:02.675505 containerd[1796]: time="2025-01-13T20:53:02.675494281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:4,}" Jan 13 20:53:02.675540 containerd[1796]: time="2025-01-13T20:53:02.675531624Z" level=info msg="StopPodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\"" Jan 13 20:53:02.675584 containerd[1796]: time="2025-01-13T20:53:02.675574814Z" level=info msg="TearDown network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" successfully" Jan 13 20:53:02.675604 containerd[1796]: time="2025-01-13T20:53:02.675585032Z" level=info msg="StopPodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" returns successfully" Jan 13 20:53:02.675710 containerd[1796]: time="2025-01-13T20:53:02.675697854Z" level=info msg="StopPodSandbox for 
\"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\"" Jan 13 20:53:02.675756 containerd[1796]: time="2025-01-13T20:53:02.675746968Z" level=info msg="TearDown network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" successfully" Jan 13 20:53:02.675782 containerd[1796]: time="2025-01-13T20:53:02.675756599Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" returns successfully" Jan 13 20:53:02.675894 containerd[1796]: time="2025-01-13T20:53:02.675886144Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" Jan 13 20:53:02.675925 containerd[1796]: time="2025-01-13T20:53:02.675918487Z" level=info msg="TearDown network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" successfully" Jan 13 20:53:02.675925 containerd[1796]: time="2025-01-13T20:53:02.675924214Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" returns successfully" Jan 13 20:53:02.675979 kubelet[3275]: I0113 20:53:02.675970 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1" Jan 13 20:53:02.676036 systemd[1]: run-netns-cni\x2d842f566d\x2da921\x2d0f10\x2d8edb\x2d6c82abe536de.mount: Deactivated successfully. 
Jan 13 20:53:02.676200 containerd[1796]: time="2025-01-13T20:53:02.676120132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:53:02.676262 containerd[1796]: time="2025-01-13T20:53:02.676249138Z" level=info msg="StopPodSandbox for \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\"" Jan 13 20:53:02.676380 containerd[1796]: time="2025-01-13T20:53:02.676369114Z" level=info msg="Ensure that sandbox d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1 in task-service has been cleanup successfully" Jan 13 20:53:02.676482 containerd[1796]: time="2025-01-13T20:53:02.676469502Z" level=info msg="TearDown network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\" successfully" Jan 13 20:53:02.676513 containerd[1796]: time="2025-01-13T20:53:02.676481672Z" level=info msg="StopPodSandbox for \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\" returns successfully" Jan 13 20:53:02.676613 containerd[1796]: time="2025-01-13T20:53:02.676602272Z" level=info msg="StopPodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\"" Jan 13 20:53:02.676670 containerd[1796]: time="2025-01-13T20:53:02.676660997Z" level=info msg="TearDown network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" successfully" Jan 13 20:53:02.676702 containerd[1796]: time="2025-01-13T20:53:02.676669559Z" level=info msg="StopPodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" returns successfully" Jan 13 20:53:02.676792 containerd[1796]: time="2025-01-13T20:53:02.676778573Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\"" Jan 13 20:53:02.676837 containerd[1796]: time="2025-01-13T20:53:02.676829625Z" level=info msg="TearDown network for sandbox 
\"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" successfully" Jan 13 20:53:02.676860 containerd[1796]: time="2025-01-13T20:53:02.676837842Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" returns successfully" Jan 13 20:53:02.676881 kubelet[3275]: I0113 20:53:02.676843 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b" Jan 13 20:53:02.677268 containerd[1796]: time="2025-01-13T20:53:02.677138931Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" Jan 13 20:53:02.677268 containerd[1796]: time="2025-01-13T20:53:02.677205292Z" level=info msg="TearDown network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" successfully" Jan 13 20:53:02.677268 containerd[1796]: time="2025-01-13T20:53:02.677215227Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" returns successfully" Jan 13 20:53:02.678057 containerd[1796]: time="2025-01-13T20:53:02.677666132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:4,}" Jan 13 20:53:02.678057 containerd[1796]: time="2025-01-13T20:53:02.677830551Z" level=info msg="StopPodSandbox for \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\"" Jan 13 20:53:02.678057 containerd[1796]: time="2025-01-13T20:53:02.677964949Z" level=info msg="Ensure that sandbox 8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b in task-service has been cleanup successfully" Jan 13 20:53:02.678259 containerd[1796]: time="2025-01-13T20:53:02.678246741Z" level=info msg="TearDown network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\" successfully" Jan 13 20:53:02.678259 containerd[1796]: 
time="2025-01-13T20:53:02.678256900Z" level=info msg="StopPodSandbox for \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\" returns successfully" Jan 13 20:53:02.678489 systemd[1]: run-netns-cni\x2d587f7ac1\x2d9baf\x2d5906\x2d884c\x2df899a78249f2.mount: Deactivated successfully. Jan 13 20:53:02.678555 systemd[1]: run-netns-cni\x2d22c1c6c3\x2d0c29\x2ddbb9\x2d104e\x2dfa3fcaa1f9f8.mount: Deactivated successfully. Jan 13 20:53:02.678585 containerd[1796]: time="2025-01-13T20:53:02.678549495Z" level=info msg="StopPodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\"" Jan 13 20:53:02.678614 containerd[1796]: time="2025-01-13T20:53:02.678605624Z" level=info msg="TearDown network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" successfully" Jan 13 20:53:02.678643 containerd[1796]: time="2025-01-13T20:53:02.678615630Z" level=info msg="StopPodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" returns successfully" Jan 13 20:53:02.678614 systemd[1]: run-netns-cni\x2d6e437c8f\x2d441a\x2dff44\x2d7f4d\x2da70a18fc88de.mount: Deactivated successfully. 
Jan 13 20:53:02.678752 containerd[1796]: time="2025-01-13T20:53:02.678741546Z" level=info msg="StopPodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\"" Jan 13 20:53:02.678777 kubelet[3275]: I0113 20:53:02.678745 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37" Jan 13 20:53:02.678814 containerd[1796]: time="2025-01-13T20:53:02.678793056Z" level=info msg="TearDown network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" successfully" Jan 13 20:53:02.678814 containerd[1796]: time="2025-01-13T20:53:02.678802431Z" level=info msg="StopPodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" returns successfully" Jan 13 20:53:02.678922 containerd[1796]: time="2025-01-13T20:53:02.678910475Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" Jan 13 20:53:02.678962 containerd[1796]: time="2025-01-13T20:53:02.678952492Z" level=info msg="TearDown network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" successfully" Jan 13 20:53:02.678990 containerd[1796]: time="2025-01-13T20:53:02.678962436Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" returns successfully" Jan 13 20:53:02.678990 containerd[1796]: time="2025-01-13T20:53:02.678975581Z" level=info msg="StopPodSandbox for \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\"" Jan 13 20:53:02.679105 containerd[1796]: time="2025-01-13T20:53:02.679094012Z" level=info msg="Ensure that sandbox 85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37 in task-service has been cleanup successfully" Jan 13 20:53:02.679143 containerd[1796]: time="2025-01-13T20:53:02.679118600Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:4,}" Jan 13 20:53:02.679202 containerd[1796]: time="2025-01-13T20:53:02.679190631Z" level=info msg="TearDown network for sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\" successfully" Jan 13 20:53:02.679231 containerd[1796]: time="2025-01-13T20:53:02.679201385Z" level=info msg="StopPodSandbox for \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\" returns successfully" Jan 13 20:53:02.679318 containerd[1796]: time="2025-01-13T20:53:02.679307537Z" level=info msg="StopPodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\"" Jan 13 20:53:02.679354 containerd[1796]: time="2025-01-13T20:53:02.679347167Z" level=info msg="TearDown network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" successfully" Jan 13 20:53:02.679354 containerd[1796]: time="2025-01-13T20:53:02.679353846Z" level=info msg="StopPodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" returns successfully" Jan 13 20:53:02.679467 containerd[1796]: time="2025-01-13T20:53:02.679456636Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\"" Jan 13 20:53:02.679507 containerd[1796]: time="2025-01-13T20:53:02.679499430Z" level=info msg="TearDown network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" successfully" Jan 13 20:53:02.679524 containerd[1796]: time="2025-01-13T20:53:02.679507553Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" returns successfully" Jan 13 20:53:02.679622 containerd[1796]: time="2025-01-13T20:53:02.679611007Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" Jan 13 20:53:02.679673 containerd[1796]: 
time="2025-01-13T20:53:02.679663587Z" level=info msg="TearDown network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" successfully" Jan 13 20:53:02.679692 containerd[1796]: time="2025-01-13T20:53:02.679673813Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" returns successfully" Jan 13 20:53:02.679837 containerd[1796]: time="2025-01-13T20:53:02.679829017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:53:02.680854 systemd[1]: run-netns-cni\x2ddbe8c9c0\x2da3c0\x2ddf58\x2d8654\x2d61ade1112f1a.mount: Deactivated successfully. Jan 13 20:53:02.680905 systemd[1]: run-netns-cni\x2d6c755273\x2d1816\x2d27d6\x2d0792\x2d53cc7b3b01b0.mount: Deactivated successfully. Jan 13 20:53:02.719550 containerd[1796]: time="2025-01-13T20:53:02.719516071Z" level=error msg="Failed to destroy network for sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.719743 containerd[1796]: time="2025-01-13T20:53:02.719727666Z" level=error msg="encountered an error cleaning up failed sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.719788 containerd[1796]: time="2025-01-13T20:53:02.719768980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:4,} failed, error" error="failed to 
setup network for sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.719937 kubelet[3275]: E0113 20:53:02.719913 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.719994 kubelet[3275]: E0113 20:53:02.719959 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:53:02.719994 kubelet[3275]: E0113 20:53:02.719980 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7b85" Jan 13 20:53:02.720050 kubelet[3275]: E0113 20:53:02.720023 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"csi-node-driver-h7b85_calico-system(041808ec-5de1-4583-be60-748668409e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h7b85" podUID="041808ec-5de1-4583-be60-748668409e39" Jan 13 20:53:02.723676 containerd[1796]: time="2025-01-13T20:53:02.723640520Z" level=error msg="Failed to destroy network for sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.723853 containerd[1796]: time="2025-01-13T20:53:02.723838255Z" level=error msg="encountered an error cleaning up failed sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.723889 containerd[1796]: time="2025-01-13T20:53:02.723874775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.724037 kubelet[3275]: E0113 20:53:02.724008 3275 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.724077 kubelet[3275]: E0113 20:53:02.724063 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:53:02.724118 kubelet[3275]: E0113 20:53:02.724083 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gnptx" Jan 13 20:53:02.724149 kubelet[3275]: E0113 20:53:02.724122 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gnptx_kube-system(1958f0eb-003b-4101-8885-8177a1e71a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gnptx" podUID="1958f0eb-003b-4101-8885-8177a1e71a0d" Jan 13 20:53:02.724893 containerd[1796]: time="2025-01-13T20:53:02.724874198Z" level=error msg="Failed to destroy network for sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.725077 containerd[1796]: time="2025-01-13T20:53:02.725060121Z" level=error msg="encountered an error cleaning up failed sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.725108 containerd[1796]: time="2025-01-13T20:53:02.725095529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.725221 kubelet[3275]: E0113 20:53:02.725205 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.725260 kubelet[3275]: 
E0113 20:53:02.725233 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:53:02.725260 kubelet[3275]: E0113 20:53:02.725251 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" Jan 13 20:53:02.725314 kubelet[3275]: E0113 20:53:02.725279 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-2rkkg_calico-apiserver(bf4f0ae4-b718-4aef-9209-5a0f4e005958)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" podUID="bf4f0ae4-b718-4aef-9209-5a0f4e005958" Jan 13 20:53:02.727145 containerd[1796]: time="2025-01-13T20:53:02.727123040Z" level=error msg="Failed to destroy network for sandbox 
\"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.727322 containerd[1796]: time="2025-01-13T20:53:02.727307965Z" level=error msg="encountered an error cleaning up failed sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.727378 containerd[1796]: time="2025-01-13T20:53:02.727347819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.727478 kubelet[3275]: E0113 20:53:02.727462 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.727515 kubelet[3275]: E0113 20:53:02.727491 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:53:02.727515 kubelet[3275]: E0113 20:53:02.727502 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" Jan 13 20:53:02.727558 kubelet[3275]: E0113 20:53:02.727527 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d58fcc5b-n4w6q_calico-system(ce9b51cb-e99a-4abc-aeff-27a2e094f2d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" podUID="ce9b51cb-e99a-4abc-aeff-27a2e094f2d6" Jan 13 20:53:02.727814 containerd[1796]: time="2025-01-13T20:53:02.727758313Z" level=error msg="Failed to destroy network for sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
20:53:02.727929 containerd[1796]: time="2025-01-13T20:53:02.727917663Z" level=error msg="encountered an error cleaning up failed sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.727959 containerd[1796]: time="2025-01-13T20:53:02.727943539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.728035 kubelet[3275]: E0113 20:53:02.728015 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.728064 kubelet[3275]: E0113 20:53:02.728047 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:53:02.728086 kubelet[3275]: E0113 20:53:02.728063 3275 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxnfb" Jan 13 20:53:02.728109 kubelet[3275]: E0113 20:53:02.728091 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxnfb_kube-system(d2521fa2-61ba-4676-843f-c14c317e5358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxnfb" podUID="d2521fa2-61ba-4676-843f-c14c317e5358" Jan 13 20:53:02.728289 containerd[1796]: time="2025-01-13T20:53:02.728274444Z" level=error msg="Failed to destroy network for sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.728432 containerd[1796]: time="2025-01-13T20:53:02.728418861Z" level=error msg="encountered an error cleaning up failed sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.728455 containerd[1796]: time="2025-01-13T20:53:02.728443230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.728517 kubelet[3275]: E0113 20:53:02.728503 3275 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:02.728554 kubelet[3275]: E0113 20:53:02.728524 3275 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:53:02.728554 kubelet[3275]: E0113 20:53:02.728534 3275 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" Jan 13 20:53:02.728595 kubelet[3275]: E0113 20:53:02.728551 3275 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c785db6-799q5_calico-apiserver(e3cc6f57-01fe-4853-a572-df94cbb24821)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" podUID="e3cc6f57-01fe-4853-a572-df94cbb24821" Jan 13 20:53:02.935156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677-shm.mount: Deactivated successfully. Jan 13 20:53:03.066827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880477796.mount: Deactivated successfully. 
Jan 13 20:53:03.079711 containerd[1796]: time="2025-01-13T20:53:03.079663591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:03.079934 containerd[1796]: time="2025-01-13T20:53:03.079891771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:53:03.080210 containerd[1796]: time="2025-01-13T20:53:03.080177189Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:03.081122 containerd[1796]: time="2025-01-13T20:53:03.081081361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:03.081497 containerd[1796]: time="2025-01-13T20:53:03.081456510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.443648435s" Jan 13 20:53:03.081497 containerd[1796]: time="2025-01-13T20:53:03.081470784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:53:03.084942 containerd[1796]: time="2025-01-13T20:53:03.084920044Z" level=info msg="CreateContainer within sandbox \"49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:53:03.096223 containerd[1796]: time="2025-01-13T20:53:03.096177254Z" level=info 
msg="CreateContainer within sandbox \"49859323a4966c83cd55caf0c69de09e466c038055407cb9acb949d8fa4b0fb9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f289be6d0631cad26080698a15eeb0ff9183b7c25dd053913e6c376753d44dc9\"" Jan 13 20:53:03.096383 containerd[1796]: time="2025-01-13T20:53:03.096354632Z" level=info msg="StartContainer for \"f289be6d0631cad26080698a15eeb0ff9183b7c25dd053913e6c376753d44dc9\"" Jan 13 20:53:03.120343 systemd[1]: Started cri-containerd-f289be6d0631cad26080698a15eeb0ff9183b7c25dd053913e6c376753d44dc9.scope - libcontainer container f289be6d0631cad26080698a15eeb0ff9183b7c25dd053913e6c376753d44dc9. Jan 13 20:53:03.137516 containerd[1796]: time="2025-01-13T20:53:03.137466351Z" level=info msg="StartContainer for \"f289be6d0631cad26080698a15eeb0ff9183b7c25dd053913e6c376753d44dc9\" returns successfully" Jan 13 20:53:03.220918 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:53:03.220971 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 20:53:03.681063 kubelet[3275]: I0113 20:53:03.681048 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677" Jan 13 20:53:03.681381 containerd[1796]: time="2025-01-13T20:53:03.681364767Z" level=info msg="StopPodSandbox for \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\"" Jan 13 20:53:03.681503 containerd[1796]: time="2025-01-13T20:53:03.681494141Z" level=info msg="Ensure that sandbox a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677 in task-service has been cleanup successfully" Jan 13 20:53:03.681586 containerd[1796]: time="2025-01-13T20:53:03.681576810Z" level=info msg="TearDown network for sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\" successfully" Jan 13 20:53:03.681606 containerd[1796]: time="2025-01-13T20:53:03.681586048Z" level=info msg="StopPodSandbox for \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\" returns successfully" Jan 13 20:53:03.681702 containerd[1796]: time="2025-01-13T20:53:03.681693724Z" level=info msg="StopPodSandbox for \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\"" Jan 13 20:53:03.681775 containerd[1796]: time="2025-01-13T20:53:03.681738049Z" level=info msg="TearDown network for sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\" successfully" Jan 13 20:53:03.681775 containerd[1796]: time="2025-01-13T20:53:03.681748024Z" level=info msg="StopPodSandbox for \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\" returns successfully" Jan 13 20:53:03.681865 containerd[1796]: time="2025-01-13T20:53:03.681852958Z" level=info msg="StopPodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\"" Jan 13 20:53:03.681910 containerd[1796]: time="2025-01-13T20:53:03.681902008Z" level=info msg="TearDown network for sandbox 
\"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" successfully" Jan 13 20:53:03.681933 containerd[1796]: time="2025-01-13T20:53:03.681912358Z" level=info msg="StopPodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" returns successfully" Jan 13 20:53:03.681996 kubelet[3275]: I0113 20:53:03.681987 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670" Jan 13 20:53:03.682084 containerd[1796]: time="2025-01-13T20:53:03.682075716Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\"" Jan 13 20:53:03.682117 containerd[1796]: time="2025-01-13T20:53:03.682110767Z" level=info msg="TearDown network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" successfully" Jan 13 20:53:03.682135 containerd[1796]: time="2025-01-13T20:53:03.682117347Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" returns successfully" Jan 13 20:53:03.682193 containerd[1796]: time="2025-01-13T20:53:03.682184316Z" level=info msg="StopPodSandbox for \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\"" Jan 13 20:53:03.682235 containerd[1796]: time="2025-01-13T20:53:03.682221972Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" Jan 13 20:53:03.682272 containerd[1796]: time="2025-01-13T20:53:03.682263780Z" level=info msg="Ensure that sandbox a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670 in task-service has been cleanup successfully" Jan 13 20:53:03.682307 containerd[1796]: time="2025-01-13T20:53:03.682279669Z" level=info msg="TearDown network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" successfully" Jan 13 20:53:03.682325 containerd[1796]: time="2025-01-13T20:53:03.682308679Z" level=info 
msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" returns successfully" Jan 13 20:53:03.682353 containerd[1796]: time="2025-01-13T20:53:03.682345514Z" level=info msg="TearDown network for sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\" successfully" Jan 13 20:53:03.682376 containerd[1796]: time="2025-01-13T20:53:03.682353818Z" level=info msg="StopPodSandbox for \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\" returns successfully" Jan 13 20:53:03.682491 containerd[1796]: time="2025-01-13T20:53:03.682478696Z" level=info msg="StopPodSandbox for \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\"" Jan 13 20:53:03.682537 containerd[1796]: time="2025-01-13T20:53:03.682529162Z" level=info msg="TearDown network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\" successfully" Jan 13 20:53:03.682555 containerd[1796]: time="2025-01-13T20:53:03.682537538Z" level=info msg="StopPodSandbox for \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\" returns successfully" Jan 13 20:53:03.682571 containerd[1796]: time="2025-01-13T20:53:03.682484130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:5,}" Jan 13 20:53:03.682662 containerd[1796]: time="2025-01-13T20:53:03.682654700Z" level=info msg="StopPodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\"" Jan 13 20:53:03.682700 containerd[1796]: time="2025-01-13T20:53:03.682691450Z" level=info msg="TearDown network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" successfully" Jan 13 20:53:03.682721 containerd[1796]: time="2025-01-13T20:53:03.682701354Z" level=info msg="StopPodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" returns successfully" Jan 13 20:53:03.682841 containerd[1796]: 
time="2025-01-13T20:53:03.682832353Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\"" Jan 13 20:53:03.682923 containerd[1796]: time="2025-01-13T20:53:03.682876566Z" level=info msg="TearDown network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" successfully" Jan 13 20:53:03.682923 containerd[1796]: time="2025-01-13T20:53:03.682883972Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" returns successfully" Jan 13 20:53:03.682980 kubelet[3275]: I0113 20:53:03.682905 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9" Jan 13 20:53:03.683038 containerd[1796]: time="2025-01-13T20:53:03.683024369Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" Jan 13 20:53:03.683093 containerd[1796]: time="2025-01-13T20:53:03.683084982Z" level=info msg="TearDown network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" successfully" Jan 13 20:53:03.683115 containerd[1796]: time="2025-01-13T20:53:03.683093115Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" returns successfully" Jan 13 20:53:03.683131 containerd[1796]: time="2025-01-13T20:53:03.683122751Z" level=info msg="StopPodSandbox for \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\"" Jan 13 20:53:03.683314 containerd[1796]: time="2025-01-13T20:53:03.683254496Z" level=info msg="Ensure that sandbox fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9 in task-service has been cleanup successfully" Jan 13 20:53:03.683349 containerd[1796]: time="2025-01-13T20:53:03.683338181Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:5,}" Jan 13 20:53:03.683393 containerd[1796]: time="2025-01-13T20:53:03.683379644Z" level=info msg="TearDown network for sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\" successfully" Jan 13 20:53:03.683425 containerd[1796]: time="2025-01-13T20:53:03.683390929Z" level=info msg="StopPodSandbox for \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\" returns successfully" Jan 13 20:53:03.683502 containerd[1796]: time="2025-01-13T20:53:03.683490355Z" level=info msg="StopPodSandbox for \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\"" Jan 13 20:53:03.683550 containerd[1796]: time="2025-01-13T20:53:03.683528349Z" level=info msg="TearDown network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\" successfully" Jan 13 20:53:03.683576 containerd[1796]: time="2025-01-13T20:53:03.683548648Z" level=info msg="StopPodSandbox for \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\" returns successfully" Jan 13 20:53:03.683689 containerd[1796]: time="2025-01-13T20:53:03.683678760Z" level=info msg="StopPodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\"" Jan 13 20:53:03.683749 containerd[1796]: time="2025-01-13T20:53:03.683739020Z" level=info msg="TearDown network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" successfully" Jan 13 20:53:03.683749 containerd[1796]: time="2025-01-13T20:53:03.683747435Z" level=info msg="StopPodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" returns successfully" Jan 13 20:53:03.683857 kubelet[3275]: I0113 20:53:03.683848 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160" Jan 13 20:53:03.683886 containerd[1796]: 
time="2025-01-13T20:53:03.683868357Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\"" Jan 13 20:53:03.683928 containerd[1796]: time="2025-01-13T20:53:03.683917522Z" level=info msg="TearDown network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" successfully" Jan 13 20:53:03.683954 containerd[1796]: time="2025-01-13T20:53:03.683928055Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" returns successfully" Jan 13 20:53:03.684028 containerd[1796]: time="2025-01-13T20:53:03.684018862Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" Jan 13 20:53:03.684059 containerd[1796]: time="2025-01-13T20:53:03.684051366Z" level=info msg="StopPodSandbox for \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\"" Jan 13 20:53:03.684093 containerd[1796]: time="2025-01-13T20:53:03.684065779Z" level=info msg="TearDown network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" successfully" Jan 13 20:53:03.684093 containerd[1796]: time="2025-01-13T20:53:03.684075757Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" returns successfully" Jan 13 20:53:03.684173 containerd[1796]: time="2025-01-13T20:53:03.684161341Z" level=info msg="Ensure that sandbox a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160 in task-service has been cleanup successfully" Jan 13 20:53:03.684262 containerd[1796]: time="2025-01-13T20:53:03.684252758Z" level=info msg="TearDown network for sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\" successfully" Jan 13 20:53:03.684292 containerd[1796]: time="2025-01-13T20:53:03.684261879Z" level=info msg="StopPodSandbox for \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\" returns successfully" Jan 13 20:53:03.684292 
containerd[1796]: time="2025-01-13T20:53:03.684253199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:5,}" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684376895Z" level=info msg="StopPodSandbox for \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\"" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684409862Z" level=info msg="TearDown network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\" successfully" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684415473Z" level=info msg="StopPodSandbox for \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\" returns successfully" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684498335Z" level=info msg="StopPodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\"" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684535892Z" level=info msg="TearDown network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" successfully" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684542294Z" level=info msg="StopPodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" returns successfully" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684679349Z" level=info msg="StopPodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\"" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684715226Z" level=info msg="TearDown network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" successfully" Jan 13 20:53:03.684777 containerd[1796]: time="2025-01-13T20:53:03.684721033Z" level=info msg="StopPodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" returns 
successfully" Jan 13 20:53:03.684996 kubelet[3275]: I0113 20:53:03.684613 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005" Jan 13 20:53:03.685029 containerd[1796]: time="2025-01-13T20:53:03.684825530Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" Jan 13 20:53:03.685029 containerd[1796]: time="2025-01-13T20:53:03.684850006Z" level=info msg="StopPodSandbox for \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\"" Jan 13 20:53:03.685029 containerd[1796]: time="2025-01-13T20:53:03.684873352Z" level=info msg="TearDown network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" successfully" Jan 13 20:53:03.685029 containerd[1796]: time="2025-01-13T20:53:03.684882272Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" returns successfully" Jan 13 20:53:03.685029 containerd[1796]: time="2025-01-13T20:53:03.684928233Z" level=info msg="Ensure that sandbox 0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005 in task-service has been cleanup successfully" Jan 13 20:53:03.685029 containerd[1796]: time="2025-01-13T20:53:03.684992441Z" level=info msg="TearDown network for sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\" successfully" Jan 13 20:53:03.685029 containerd[1796]: time="2025-01-13T20:53:03.684999003Z" level=info msg="StopPodSandbox for \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\" returns successfully" Jan 13 20:53:03.685185 containerd[1796]: time="2025-01-13T20:53:03.685075424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:5,}" Jan 13 20:53:03.685185 containerd[1796]: time="2025-01-13T20:53:03.685097193Z" level=info 
msg="StopPodSandbox for \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\"" Jan 13 20:53:03.685185 containerd[1796]: time="2025-01-13T20:53:03.685133847Z" level=info msg="TearDown network for sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\" successfully" Jan 13 20:53:03.685185 containerd[1796]: time="2025-01-13T20:53:03.685139939Z" level=info msg="StopPodSandbox for \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\" returns successfully" Jan 13 20:53:03.685274 containerd[1796]: time="2025-01-13T20:53:03.685237529Z" level=info msg="StopPodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\"" Jan 13 20:53:03.685295 containerd[1796]: time="2025-01-13T20:53:03.685281888Z" level=info msg="TearDown network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" successfully" Jan 13 20:53:03.685319 containerd[1796]: time="2025-01-13T20:53:03.685291944Z" level=info msg="StopPodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" returns successfully" Jan 13 20:53:03.685422 containerd[1796]: time="2025-01-13T20:53:03.685411757Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\"" Jan 13 20:53:03.685465 kubelet[3275]: I0113 20:53:03.685456 3275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4" Jan 13 20:53:03.685494 containerd[1796]: time="2025-01-13T20:53:03.685461255Z" level=info msg="TearDown network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" successfully" Jan 13 20:53:03.685494 containerd[1796]: time="2025-01-13T20:53:03.685470833Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" returns successfully" Jan 13 20:53:03.685585 containerd[1796]: time="2025-01-13T20:53:03.685572971Z" 
level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" Jan 13 20:53:03.685638 containerd[1796]: time="2025-01-13T20:53:03.685626126Z" level=info msg="TearDown network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" successfully" Jan 13 20:53:03.685671 containerd[1796]: time="2025-01-13T20:53:03.685636176Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" returns successfully" Jan 13 20:53:03.685727 containerd[1796]: time="2025-01-13T20:53:03.685715457Z" level=info msg="StopPodSandbox for \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\"" Jan 13 20:53:03.685788 containerd[1796]: time="2025-01-13T20:53:03.685779048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:53:03.685836 containerd[1796]: time="2025-01-13T20:53:03.685826691Z" level=info msg="Ensure that sandbox 7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4 in task-service has been cleanup successfully" Jan 13 20:53:03.685917 containerd[1796]: time="2025-01-13T20:53:03.685907988Z" level=info msg="TearDown network for sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\" successfully" Jan 13 20:53:03.685954 containerd[1796]: time="2025-01-13T20:53:03.685916662Z" level=info msg="StopPodSandbox for \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\" returns successfully" Jan 13 20:53:03.686061 containerd[1796]: time="2025-01-13T20:53:03.686050494Z" level=info msg="StopPodSandbox for \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\"" Jan 13 20:53:03.686106 containerd[1796]: time="2025-01-13T20:53:03.686098408Z" level=info msg="TearDown network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\" successfully" Jan 13 
20:53:03.686106 containerd[1796]: time="2025-01-13T20:53:03.686105719Z" level=info msg="StopPodSandbox for \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\" returns successfully" Jan 13 20:53:03.686271 containerd[1796]: time="2025-01-13T20:53:03.686262105Z" level=info msg="StopPodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\"" Jan 13 20:53:03.686304 containerd[1796]: time="2025-01-13T20:53:03.686297675Z" level=info msg="TearDown network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" successfully" Jan 13 20:53:03.686326 containerd[1796]: time="2025-01-13T20:53:03.686305875Z" level=info msg="StopPodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" returns successfully" Jan 13 20:53:03.686423 containerd[1796]: time="2025-01-13T20:53:03.686413517Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\"" Jan 13 20:53:03.686488 containerd[1796]: time="2025-01-13T20:53:03.686477953Z" level=info msg="TearDown network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" successfully" Jan 13 20:53:03.686488 containerd[1796]: time="2025-01-13T20:53:03.686485666Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" returns successfully" Jan 13 20:53:03.686687 containerd[1796]: time="2025-01-13T20:53:03.686675047Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" Jan 13 20:53:03.686721 containerd[1796]: time="2025-01-13T20:53:03.686713165Z" level=info msg="TearDown network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" successfully" Jan 13 20:53:03.686746 containerd[1796]: time="2025-01-13T20:53:03.686719867Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" returns successfully" Jan 13 
20:53:03.686874 containerd[1796]: time="2025-01-13T20:53:03.686864188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:53:03.695686 kubelet[3275]: I0113 20:53:03.695648 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-skxv7" podStartSLOduration=1.056921883 podStartE2EDuration="12.695635775s" podCreationTimestamp="2025-01-13 20:52:51 +0000 UTC" firstStartedPulling="2025-01-13 20:52:51.443124927 +0000 UTC m=+23.942431531" lastFinishedPulling="2025-01-13 20:53:03.08183882 +0000 UTC m=+35.581145423" observedRunningTime="2025-01-13 20:53:03.695434489 +0000 UTC m=+36.194741094" watchObservedRunningTime="2025-01-13 20:53:03.695635775 +0000 UTC m=+36.194942376" Jan 13 20:53:03.756461 systemd-networkd[1716]: cali111be27302f: Link UP Jan 13 20:53:03.756579 systemd-networkd[1716]: cali111be27302f: Gained carrier Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.704 [INFO][5731] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.712 [INFO][5731] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0 csi-node-driver- calico-system 041808ec-5de1-4583-be60-748668409e39 599 0 2025-01-13 20:52:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.1.0-a-3c6cffff8a csi-node-driver-h7b85 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali111be27302f [] []}} 
ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Namespace="calico-system" Pod="csi-node-driver-h7b85" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.712 [INFO][5731] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Namespace="calico-system" Pod="csi-node-driver-h7b85" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.731 [INFO][5860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" HandleID="k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.738 [INFO][5860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" HandleID="k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-3c6cffff8a", "pod":"csi-node-driver-h7b85", "timestamp":"2025-01-13 20:53:03.731715356 +0000 UTC"}, Hostname:"ci-4186.1.0-a-3c6cffff8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.738 [INFO][5860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.738 [INFO][5860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.738 [INFO][5860] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-3c6cffff8a' Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.739 [INFO][5860] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.741 [INFO][5860] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.744 [INFO][5860] ipam/ipam.go 489: Trying affinity for 192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.745 [INFO][5860] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.746 [INFO][5860] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.746 [INFO][5860] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.747 [INFO][5860] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.748 [INFO][5860] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" 
host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.751 [INFO][5860] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.193/26] block=192.168.12.192/26 handle="k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.751 [INFO][5860] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.193/26] handle="k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.751 [INFO][5860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:53:03.763357 containerd[1796]: 2025-01-13 20:53:03.751 [INFO][5860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.193/26] IPv6=[] ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" HandleID="k8s-pod-network.cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" Jan 13 20:53:03.763767 containerd[1796]: 2025-01-13 20:53:03.752 [INFO][5731] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Namespace="calico-system" Pod="csi-node-driver-h7b85" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"041808ec-5de1-4583-be60-748668409e39", ResourceVersion:"599", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"", Pod:"csi-node-driver-h7b85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali111be27302f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.763767 containerd[1796]: 2025-01-13 20:53:03.752 [INFO][5731] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.193/32] ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Namespace="calico-system" Pod="csi-node-driver-h7b85" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" Jan 13 20:53:03.763767 containerd[1796]: 2025-01-13 20:53:03.752 [INFO][5731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali111be27302f ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Namespace="calico-system" Pod="csi-node-driver-h7b85" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" Jan 13 20:53:03.763767 containerd[1796]: 2025-01-13 20:53:03.756 [INFO][5731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Namespace="calico-system" Pod="csi-node-driver-h7b85" 
WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" Jan 13 20:53:03.763767 containerd[1796]: 2025-01-13 20:53:03.756 [INFO][5731] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Namespace="calico-system" Pod="csi-node-driver-h7b85" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"041808ec-5de1-4583-be60-748668409e39", ResourceVersion:"599", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f", Pod:"csi-node-driver-h7b85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali111be27302f", MAC:"2a:b2:55:a3:1e:c6", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.763767 containerd[1796]: 2025-01-13 20:53:03.761 [INFO][5731] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f" Namespace="calico-system" Pod="csi-node-driver-h7b85" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-csi--node--driver--h7b85-eth0" Jan 13 20:53:03.764869 systemd-networkd[1716]: cali95fe0ed7402: Link UP Jan 13 20:53:03.764987 systemd-networkd[1716]: cali95fe0ed7402: Gained carrier Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.711 [INFO][5788] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.718 [INFO][5788] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0 calico-apiserver-7d4c785db6- calico-apiserver e3cc6f57-01fe-4853-a572-df94cbb24821 666 0 2025-01-13 20:52:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4c785db6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-3c6cffff8a calico-apiserver-7d4c785db6-799q5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali95fe0ed7402 [] []}} ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-799q5" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.718 [INFO][5788] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Namespace="calico-apiserver" 
Pod="calico-apiserver-7d4c785db6-799q5" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.735 [INFO][5871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" HandleID="k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.740 [INFO][5871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" HandleID="k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364d30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-3c6cffff8a", "pod":"calico-apiserver-7d4c785db6-799q5", "timestamp":"2025-01-13 20:53:03.735491126 +0000 UTC"}, Hostname:"ci-4186.1.0-a-3c6cffff8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.740 [INFO][5871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.751 [INFO][5871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.751 [INFO][5871] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-3c6cffff8a' Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.752 [INFO][5871] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.753 [INFO][5871] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.756 [INFO][5871] ipam/ipam.go 489: Trying affinity for 192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.757 [INFO][5871] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.757 [INFO][5871] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.758 [INFO][5871] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.758 [INFO][5871] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.760 [INFO][5871] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.763 [INFO][5871] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.12.194/26] block=192.168.12.192/26 handle="k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.763 [INFO][5871] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.194/26] handle="k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.763 [INFO][5871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:53:03.769389 containerd[1796]: 2025-01-13 20:53:03.763 [INFO][5871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.194/26] IPv6=[] ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" HandleID="k8s-pod-network.2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" Jan 13 20:53:03.769795 containerd[1796]: 2025-01-13 20:53:03.763 [INFO][5788] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-799q5" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0", GenerateName:"calico-apiserver-7d4c785db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3cc6f57-01fe-4853-a572-df94cbb24821", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4c785db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"", Pod:"calico-apiserver-7d4c785db6-799q5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95fe0ed7402", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.769795 containerd[1796]: 2025-01-13 20:53:03.764 [INFO][5788] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.194/32] ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-799q5" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" Jan 13 20:53:03.769795 containerd[1796]: 2025-01-13 20:53:03.764 [INFO][5788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95fe0ed7402 ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-799q5" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" Jan 13 20:53:03.769795 containerd[1796]: 2025-01-13 20:53:03.764 [INFO][5788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-799q5" 
WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" Jan 13 20:53:03.769795 containerd[1796]: 2025-01-13 20:53:03.765 [INFO][5788] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-799q5" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0", GenerateName:"calico-apiserver-7d4c785db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3cc6f57-01fe-4853-a572-df94cbb24821", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4c785db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef", Pod:"calico-apiserver-7d4c785db6-799q5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95fe0ed7402", MAC:"0e:28:0d:77:95:44", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.769795 containerd[1796]: 2025-01-13 20:53:03.768 [INFO][5788] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-799q5" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--799q5-eth0" Jan 13 20:53:03.773649 containerd[1796]: time="2025-01-13T20:53:03.773604896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:03.773649 containerd[1796]: time="2025-01-13T20:53:03.773638410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:03.773649 containerd[1796]: time="2025-01-13T20:53:03.773645590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.773768 containerd[1796]: time="2025-01-13T20:53:03.773689259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.779375 systemd-networkd[1716]: calibe5ec296e9d: Link UP Jan 13 20:53:03.779500 systemd-networkd[1716]: calibe5ec296e9d: Gained carrier Jan 13 20:53:03.780158 containerd[1796]: time="2025-01-13T20:53:03.780108276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:03.780204 containerd[1796]: time="2025-01-13T20:53:03.780158061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:03.780204 containerd[1796]: time="2025-01-13T20:53:03.780172357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.780275 containerd[1796]: time="2025-01-13T20:53:03.780229214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.711 [INFO][5755] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.717 [INFO][5755] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0 coredns-7db6d8ff4d- kube-system 1958f0eb-003b-4101-8885-8177a1e71a0d 672 0 2025-01-13 20:52:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-3c6cffff8a coredns-7db6d8ff4d-gnptx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibe5ec296e9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gnptx" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.717 [INFO][5755] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gnptx" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.736 [INFO][5869] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" HandleID="k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.741 [INFO][5869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" HandleID="k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364e90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-3c6cffff8a", "pod":"coredns-7db6d8ff4d-gnptx", "timestamp":"2025-01-13 20:53:03.7367676 +0000 UTC"}, Hostname:"ci-4186.1.0-a-3c6cffff8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.741 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.763 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.763 [INFO][5869] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-3c6cffff8a' Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.764 [INFO][5869] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.766 [INFO][5869] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.769 [INFO][5869] ipam/ipam.go 489: Trying affinity for 192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.770 [INFO][5869] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.771 [INFO][5869] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.771 [INFO][5869] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.772 [INFO][5869] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.774 [INFO][5869] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.777 [INFO][5869] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.12.195/26] block=192.168.12.192/26 handle="k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.777 [INFO][5869] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.195/26] handle="k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.777 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:53:03.784594 containerd[1796]: 2025-01-13 20:53:03.777 [INFO][5869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.195/26] IPv6=[] ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" HandleID="k8s-pod-network.de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" Jan 13 20:53:03.785182 containerd[1796]: 2025-01-13 20:53:03.778 [INFO][5755] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gnptx" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1958f0eb-003b-4101-8885-8177a1e71a0d", ResourceVersion:"672", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"", Pod:"coredns-7db6d8ff4d-gnptx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe5ec296e9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.785182 containerd[1796]: 2025-01-13 20:53:03.778 [INFO][5755] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.195/32] ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gnptx" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" Jan 13 20:53:03.785182 containerd[1796]: 2025-01-13 20:53:03.778 [INFO][5755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe5ec296e9d ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gnptx" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" Jan 13 20:53:03.785182 containerd[1796]: 2025-01-13 20:53:03.779 [INFO][5755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gnptx" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" Jan 13 20:53:03.785182 containerd[1796]: 2025-01-13 20:53:03.779 [INFO][5755] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gnptx" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1958f0eb-003b-4101-8885-8177a1e71a0d", ResourceVersion:"672", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee", Pod:"coredns-7db6d8ff4d-gnptx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe5ec296e9d", MAC:"4a:de:47:98:cc:ae", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.785182 containerd[1796]: 2025-01-13 20:53:03.783 [INFO][5755] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gnptx" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--gnptx-eth0" Jan 13 20:53:03.791358 systemd[1]: Started cri-containerd-cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f.scope - libcontainer container cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f. Jan 13 20:53:03.793472 systemd[1]: Started cri-containerd-2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef.scope - libcontainer container 2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef. Jan 13 20:53:03.794285 containerd[1796]: time="2025-01-13T20:53:03.794052440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:03.794285 containerd[1796]: time="2025-01-13T20:53:03.794273834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:03.794285 containerd[1796]: time="2025-01-13T20:53:03.794282148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.794399 containerd[1796]: time="2025-01-13T20:53:03.794329989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.800279 systemd[1]: Started cri-containerd-de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee.scope - libcontainer container de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee. Jan 13 20:53:03.802140 containerd[1796]: time="2025-01-13T20:53:03.802117171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7b85,Uid:041808ec-5de1-4583-be60-748668409e39,Namespace:calico-system,Attempt:5,} returns sandbox id \"cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f\"" Jan 13 20:53:03.802859 containerd[1796]: time="2025-01-13T20:53:03.802846119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:53:03.809635 systemd-networkd[1716]: califf555266521: Link UP Jan 13 20:53:03.809813 systemd-networkd[1716]: califf555266521: Gained carrier Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.712 [INFO][5775] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.718 [INFO][5775] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0 calico-apiserver-7d4c785db6- calico-apiserver bf4f0ae4-b718-4aef-9209-5a0f4e005958 669 0 2025-01-13 20:52:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4c785db6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-3c6cffff8a calico-apiserver-7d4c785db6-2rkkg eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] califf555266521 [] []}} ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-2rkkg" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.718 [INFO][5775] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-2rkkg" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.736 [INFO][5892] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" HandleID="k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.741 [INFO][5892] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" HandleID="k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021b7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-3c6cffff8a", "pod":"calico-apiserver-7d4c785db6-2rkkg", "timestamp":"2025-01-13 20:53:03.736520201 +0000 UTC"}, Hostname:"ci-4186.1.0-a-3c6cffff8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:53:03.814926 
containerd[1796]: 2025-01-13 20:53:03.741 [INFO][5892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.777 [INFO][5892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.777 [INFO][5892] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-3c6cffff8a' Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.778 [INFO][5892] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.781 [INFO][5892] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.783 [INFO][5892] ipam/ipam.go 489: Trying affinity for 192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.784 [INFO][5892] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.786 [INFO][5892] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.786 [INFO][5892] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.786 [INFO][5892] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.788 [INFO][5892] ipam/ipam.go 1203: Writing block in order to claim 
IPs block=192.168.12.192/26 handle="k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.807 [INFO][5892] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.196/26] block=192.168.12.192/26 handle="k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.807 [INFO][5892] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.196/26] handle="k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.808 [INFO][5892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:53:03.814926 containerd[1796]: 2025-01-13 20:53:03.808 [INFO][5892] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.196/26] IPv6=[] ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" HandleID="k8s-pod-network.0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" Jan 13 20:53:03.815367 containerd[1796]: 2025-01-13 20:53:03.808 [INFO][5775] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-2rkkg" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0", GenerateName:"calico-apiserver-7d4c785db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf4f0ae4-b718-4aef-9209-5a0f4e005958", 
ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4c785db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"", Pod:"calico-apiserver-7d4c785db6-2rkkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf555266521", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.815367 containerd[1796]: 2025-01-13 20:53:03.808 [INFO][5775] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.196/32] ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-2rkkg" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" Jan 13 20:53:03.815367 containerd[1796]: 2025-01-13 20:53:03.808 [INFO][5775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf555266521 ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-2rkkg" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" Jan 13 20:53:03.815367 containerd[1796]: 2025-01-13 20:53:03.809 
[INFO][5775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-2rkkg" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" Jan 13 20:53:03.815367 containerd[1796]: 2025-01-13 20:53:03.809 [INFO][5775] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-2rkkg" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0", GenerateName:"calico-apiserver-7d4c785db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf4f0ae4-b718-4aef-9209-5a0f4e005958", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4c785db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb", Pod:"calico-apiserver-7d4c785db6-2rkkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.12.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf555266521", MAC:"86:49:09:c4:13:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.815367 containerd[1796]: 2025-01-13 20:53:03.814 [INFO][5775] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c785db6-2rkkg" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--apiserver--7d4c785db6--2rkkg-eth0" Jan 13 20:53:03.817081 containerd[1796]: time="2025-01-13T20:53:03.817059214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-799q5,Uid:e3cc6f57-01fe-4853-a572-df94cbb24821,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef\"" Jan 13 20:53:03.824348 containerd[1796]: time="2025-01-13T20:53:03.824305906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnptx,Uid:1958f0eb-003b-4101-8885-8177a1e71a0d,Namespace:kube-system,Attempt:5,} returns sandbox id \"de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee\"" Jan 13 20:53:03.825019 containerd[1796]: time="2025-01-13T20:53:03.824758502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:03.825084 containerd[1796]: time="2025-01-13T20:53:03.825016333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:03.825084 containerd[1796]: time="2025-01-13T20:53:03.825025982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.825139 containerd[1796]: time="2025-01-13T20:53:03.825079730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.825231 systemd-networkd[1716]: cali2d48c5cb859: Link UP Jan 13 20:53:03.825354 systemd-networkd[1716]: cali2d48c5cb859: Gained carrier Jan 13 20:53:03.825996 containerd[1796]: time="2025-01-13T20:53:03.825977145Z" level=info msg="CreateContainer within sandbox \"de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.711 [INFO][5762] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.718 [INFO][5762] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0 coredns-7db6d8ff4d- kube-system d2521fa2-61ba-4676-843f-c14c317e5358 671 0 2025-01-13 20:52:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-3c6cffff8a coredns-7db6d8ff4d-nxnfb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d48c5cb859 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxnfb" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.718 [INFO][5762] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-nxnfb" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.738 [INFO][5868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" HandleID="k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.742 [INFO][5868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" HandleID="k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005020c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-3c6cffff8a", "pod":"coredns-7db6d8ff4d-nxnfb", "timestamp":"2025-01-13 20:53:03.738017 +0000 UTC"}, Hostname:"ci-4186.1.0-a-3c6cffff8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.742 [INFO][5868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.808 [INFO][5868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.808 [INFO][5868] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-3c6cffff8a' Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.809 [INFO][5868] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.812 [INFO][5868] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.815 [INFO][5868] ipam/ipam.go 489: Trying affinity for 192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.816 [INFO][5868] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.817 [INFO][5868] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.817 [INFO][5868] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.818 [INFO][5868] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.820 [INFO][5868] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.823 [INFO][5868] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.12.197/26] block=192.168.12.192/26 handle="k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.823 [INFO][5868] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.197/26] handle="k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.823 [INFO][5868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:53:03.830087 containerd[1796]: 2025-01-13 20:53:03.823 [INFO][5868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.197/26] IPv6=[] ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" HandleID="k8s-pod-network.90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" Jan 13 20:53:03.830637 containerd[1796]: 2025-01-13 20:53:03.824 [INFO][5762] cni-plugin/k8s.go 386: Populated endpoint ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxnfb" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d2521fa2-61ba-4676-843f-c14c317e5358", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"", Pod:"coredns-7db6d8ff4d-nxnfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d48c5cb859", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.830637 containerd[1796]: 2025-01-13 20:53:03.824 [INFO][5762] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.197/32] ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxnfb" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" Jan 13 20:53:03.830637 containerd[1796]: 2025-01-13 20:53:03.824 [INFO][5762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d48c5cb859 ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxnfb" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" Jan 13 20:53:03.830637 containerd[1796]: 2025-01-13 20:53:03.825 [INFO][5762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxnfb" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" Jan 13 20:53:03.830637 containerd[1796]: 2025-01-13 20:53:03.825 [INFO][5762] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxnfb" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d2521fa2-61ba-4676-843f-c14c317e5358", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c", Pod:"coredns-7db6d8ff4d-nxnfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d48c5cb859", MAC:"fa:ad:77:61:8c:bf", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.830637 containerd[1796]: 2025-01-13 20:53:03.829 [INFO][5762] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxnfb" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-coredns--7db6d8ff4d--nxnfb-eth0" Jan 13 20:53:03.831733 containerd[1796]: time="2025-01-13T20:53:03.831712176Z" level=info msg="CreateContainer within sandbox \"de6761271d58af0d1526c1c872d2e2920f689a2a8f97e864b6613eda030521ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4b411aa8a7cb3eb6057f56138243891894026766e3ccb4a73b458203b7e5293\"" Jan 13 20:53:03.832017 containerd[1796]: time="2025-01-13T20:53:03.832002653Z" level=info msg="StartContainer for \"f4b411aa8a7cb3eb6057f56138243891894026766e3ccb4a73b458203b7e5293\"" Jan 13 20:53:03.839803 containerd[1796]: time="2025-01-13T20:53:03.839753163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:03.839803 containerd[1796]: time="2025-01-13T20:53:03.839792853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:03.839803 containerd[1796]: time="2025-01-13T20:53:03.839804334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.839915 containerd[1796]: time="2025-01-13T20:53:03.839850459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.840558 systemd-networkd[1716]: cali9f5e42529c0: Link UP Jan 13 20:53:03.840970 systemd-networkd[1716]: cali9f5e42529c0: Gained carrier Jan 13 20:53:03.845263 systemd[1]: Started cri-containerd-0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb.scope - libcontainer container 0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb. Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.712 [INFO][5783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.718 [INFO][5783] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0 calico-kube-controllers-68d58fcc5b- calico-system ce9b51cb-e99a-4abc-aeff-27a2e094f2d6 670 0 2025-01-13 20:52:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68d58fcc5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.1.0-a-3c6cffff8a calico-kube-controllers-68d58fcc5b-n4w6q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9f5e42529c0 [] []}} ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Namespace="calico-system" Pod="calico-kube-controllers-68d58fcc5b-n4w6q" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.718 [INFO][5783] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Namespace="calico-system" Pod="calico-kube-controllers-68d58fcc5b-n4w6q" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.738 [INFO][5870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" HandleID="k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.742 [INFO][5870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" HandleID="k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-3c6cffff8a", "pod":"calico-kube-controllers-68d58fcc5b-n4w6q", "timestamp":"2025-01-13 20:53:03.738780027 +0000 UTC"}, Hostname:"ci-4186.1.0-a-3c6cffff8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.742 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.823 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.823 [INFO][5870] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-3c6cffff8a' Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.824 [INFO][5870] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.827 [INFO][5870] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.829 [INFO][5870] ipam/ipam.go 489: Trying affinity for 192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.830 [INFO][5870] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.832 [INFO][5870] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.832 [INFO][5870] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.832 [INFO][5870] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065 Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.835 [INFO][5870] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.838 [INFO][5870] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.12.198/26] block=192.168.12.192/26 handle="k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.838 [INFO][5870] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.198/26] handle="k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" host="ci-4186.1.0-a-3c6cffff8a" Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.838 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:53:03.846968 containerd[1796]: 2025-01-13 20:53:03.838 [INFO][5870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.198/26] IPv6=[] ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" HandleID="k8s-pod-network.70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Workload="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" Jan 13 20:53:03.847418 containerd[1796]: 2025-01-13 20:53:03.839 [INFO][5783] cni-plugin/k8s.go 386: Populated endpoint ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Namespace="calico-system" Pod="calico-kube-controllers-68d58fcc5b-n4w6q" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0", GenerateName:"calico-kube-controllers-68d58fcc5b-", Namespace:"calico-system", SelfLink:"", UID:"ce9b51cb-e99a-4abc-aeff-27a2e094f2d6", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d58fcc5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"", Pod:"calico-kube-controllers-68d58fcc5b-n4w6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f5e42529c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.847418 containerd[1796]: 2025-01-13 20:53:03.839 [INFO][5783] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.198/32] ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Namespace="calico-system" Pod="calico-kube-controllers-68d58fcc5b-n4w6q" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" Jan 13 20:53:03.847418 containerd[1796]: 2025-01-13 20:53:03.839 [INFO][5783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f5e42529c0 ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Namespace="calico-system" Pod="calico-kube-controllers-68d58fcc5b-n4w6q" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" Jan 13 20:53:03.847418 containerd[1796]: 2025-01-13 20:53:03.841 [INFO][5783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Namespace="calico-system" Pod="calico-kube-controllers-68d58fcc5b-n4w6q" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" Jan 13 20:53:03.847418 containerd[1796]: 2025-01-13 20:53:03.841 [INFO][5783] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Namespace="calico-system" Pod="calico-kube-controllers-68d58fcc5b-n4w6q" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0", GenerateName:"calico-kube-controllers-68d58fcc5b-", Namespace:"calico-system", SelfLink:"", UID:"ce9b51cb-e99a-4abc-aeff-27a2e094f2d6", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d58fcc5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-3c6cffff8a", ContainerID:"70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065", Pod:"calico-kube-controllers-68d58fcc5b-n4w6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.198/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f5e42529c0", MAC:"52:b7:1d:ef:59:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:03.847418 containerd[1796]: 2025-01-13 20:53:03.846 [INFO][5783] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065" Namespace="calico-system" Pod="calico-kube-controllers-68d58fcc5b-n4w6q" WorkloadEndpoint="ci--4186.1.0--a--3c6cffff8a-k8s-calico--kube--controllers--68d58fcc5b--n4w6q-eth0" Jan 13 20:53:03.849308 systemd[1]: Started cri-containerd-90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c.scope - libcontainer container 90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c. Jan 13 20:53:03.850119 systemd[1]: Started cri-containerd-f4b411aa8a7cb3eb6057f56138243891894026766e3ccb4a73b458203b7e5293.scope - libcontainer container f4b411aa8a7cb3eb6057f56138243891894026766e3ccb4a73b458203b7e5293. Jan 13 20:53:03.856943 containerd[1796]: time="2025-01-13T20:53:03.856710196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:03.856943 containerd[1796]: time="2025-01-13T20:53:03.856936785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:03.857014 containerd[1796]: time="2025-01-13T20:53:03.856944631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.857014 containerd[1796]: time="2025-01-13T20:53:03.856982035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:03.863301 containerd[1796]: time="2025-01-13T20:53:03.863277686Z" level=info msg="StartContainer for \"f4b411aa8a7cb3eb6057f56138243891894026766e3ccb4a73b458203b7e5293\" returns successfully" Jan 13 20:53:03.872381 systemd[1]: Started cri-containerd-70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065.scope - libcontainer container 70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065. Jan 13 20:53:03.877174 containerd[1796]: time="2025-01-13T20:53:03.877144353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxnfb,Uid:d2521fa2-61ba-4676-843f-c14c317e5358,Namespace:kube-system,Attempt:5,} returns sandbox id \"90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c\"" Jan 13 20:53:03.877264 containerd[1796]: time="2025-01-13T20:53:03.877196365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c785db6-2rkkg,Uid:bf4f0ae4-b718-4aef-9209-5a0f4e005958,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb\"" Jan 13 20:53:03.878462 containerd[1796]: time="2025-01-13T20:53:03.878445993Z" level=info msg="CreateContainer within sandbox \"90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:53:03.883059 containerd[1796]: time="2025-01-13T20:53:03.883039890Z" level=info msg="CreateContainer within sandbox \"90b14324155afe4fcd59f4ca17ab26a1ea0057aa8614c2eaff7352078ca58e1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe891cbd59086e7e5f4c15130aeb951ecebbdf4068f2eef96eaaa74093832330\"" Jan 13 20:53:03.883324 containerd[1796]: time="2025-01-13T20:53:03.883310464Z" level=info msg="StartContainer for \"fe891cbd59086e7e5f4c15130aeb951ecebbdf4068f2eef96eaaa74093832330\"" Jan 13 20:53:03.907301 systemd[1]: Started 
cri-containerd-fe891cbd59086e7e5f4c15130aeb951ecebbdf4068f2eef96eaaa74093832330.scope - libcontainer container fe891cbd59086e7e5f4c15130aeb951ecebbdf4068f2eef96eaaa74093832330. Jan 13 20:53:03.907513 containerd[1796]: time="2025-01-13T20:53:03.907497833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d58fcc5b-n4w6q,Uid:ce9b51cb-e99a-4abc-aeff-27a2e094f2d6,Namespace:calico-system,Attempt:5,} returns sandbox id \"70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065\"" Jan 13 20:53:03.918886 containerd[1796]: time="2025-01-13T20:53:03.918865621Z" level=info msg="StartContainer for \"fe891cbd59086e7e5f4c15130aeb951ecebbdf4068f2eef96eaaa74093832330\" returns successfully" Jan 13 20:53:03.942487 systemd[1]: run-netns-cni\x2dee71970b\x2d368d\x2d56af\x2d592e\x2df8919393a806.mount: Deactivated successfully. Jan 13 20:53:03.942540 systemd[1]: run-netns-cni\x2d0999edeb\x2d91a3\x2dc72d\x2d2819\x2d1c14ddcf3040.mount: Deactivated successfully. Jan 13 20:53:03.942574 systemd[1]: run-netns-cni\x2dbbc83b01\x2df141\x2d333a\x2d1d41\x2d24e6232bf611.mount: Deactivated successfully. Jan 13 20:53:03.942606 systemd[1]: run-netns-cni\x2d1baf7eeb\x2d4459\x2d55b5\x2d8d73\x2d69dfe8661bf8.mount: Deactivated successfully. Jan 13 20:53:03.942641 systemd[1]: run-netns-cni\x2d60c9981c\x2d82ff\x2d31ab\x2d46c5\x2df09cedb7f176.mount: Deactivated successfully. Jan 13 20:53:03.942672 systemd[1]: run-netns-cni\x2d43cb411b\x2de6db\x2ddfaf\x2dbf73\x2d3cb3669a446e.mount: Deactivated successfully. 
Jan 13 20:53:04.699295 kubelet[3275]: I0113 20:53:04.699253 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nxnfb" podStartSLOduration=23.699235794 podStartE2EDuration="23.699235794s" podCreationTimestamp="2025-01-13 20:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:53:04.698981962 +0000 UTC m=+37.198288578" watchObservedRunningTime="2025-01-13 20:53:04.699235794 +0000 UTC m=+37.198542405" Jan 13 20:53:04.707609 kubelet[3275]: I0113 20:53:04.707567 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gnptx" podStartSLOduration=23.707550050000002 podStartE2EDuration="23.70755005s" podCreationTimestamp="2025-01-13 20:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:53:04.707472606 +0000 UTC m=+37.206779213" watchObservedRunningTime="2025-01-13 20:53:04.70755005 +0000 UTC m=+37.206856659" Jan 13 20:53:04.937439 systemd-networkd[1716]: calibe5ec296e9d: Gained IPv6LL Jan 13 20:53:04.938139 systemd-networkd[1716]: cali9f5e42529c0: Gained IPv6LL Jan 13 20:53:05.065306 systemd-networkd[1716]: cali95fe0ed7402: Gained IPv6LL Jan 13 20:53:05.129299 systemd-networkd[1716]: cali111be27302f: Gained IPv6LL Jan 13 20:53:05.153929 containerd[1796]: time="2025-01-13T20:53:05.153904879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:05.154204 containerd[1796]: time="2025-01-13T20:53:05.154114236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:53:05.154542 containerd[1796]: time="2025-01-13T20:53:05.154526940Z" level=info msg="ImageCreate event 
name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:05.155451 containerd[1796]: time="2025-01-13T20:53:05.155436174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:05.155841 containerd[1796]: time="2025-01-13T20:53:05.155825690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.352964149s" Jan 13 20:53:05.155872 containerd[1796]: time="2025-01-13T20:53:05.155842725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:53:05.156329 containerd[1796]: time="2025-01-13T20:53:05.156319852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:53:05.156871 containerd[1796]: time="2025-01-13T20:53:05.156856815Z" level=info msg="CreateContainer within sandbox \"cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:53:05.162166 containerd[1796]: time="2025-01-13T20:53:05.162112000Z" level=info msg="CreateContainer within sandbox \"cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2052b49b390f158f868b932eb9a9d61d17ca9bfa7fec3585891be2e5815ff0ca\"" Jan 13 20:53:05.162350 containerd[1796]: time="2025-01-13T20:53:05.162337131Z" level=info msg="StartContainer for 
\"2052b49b390f158f868b932eb9a9d61d17ca9bfa7fec3585891be2e5815ff0ca\"" Jan 13 20:53:05.184465 systemd[1]: Started cri-containerd-2052b49b390f158f868b932eb9a9d61d17ca9bfa7fec3585891be2e5815ff0ca.scope - libcontainer container 2052b49b390f158f868b932eb9a9d61d17ca9bfa7fec3585891be2e5815ff0ca. Jan 13 20:53:05.202740 containerd[1796]: time="2025-01-13T20:53:05.202718158Z" level=info msg="StartContainer for \"2052b49b390f158f868b932eb9a9d61d17ca9bfa7fec3585891be2e5815ff0ca\" returns successfully" Jan 13 20:53:05.321625 systemd-networkd[1716]: cali2d48c5cb859: Gained IPv6LL Jan 13 20:53:05.577448 systemd-networkd[1716]: califf555266521: Gained IPv6LL Jan 13 20:53:06.939568 containerd[1796]: time="2025-01-13T20:53:06.939544503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:06.939835 containerd[1796]: time="2025-01-13T20:53:06.939819458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 20:53:06.940594 containerd[1796]: time="2025-01-13T20:53:06.940522808Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:06.942031 containerd[1796]: time="2025-01-13T20:53:06.941987297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:06.942484 containerd[1796]: time="2025-01-13T20:53:06.942431251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.786095494s" Jan 13 20:53:06.942484 containerd[1796]: time="2025-01-13T20:53:06.942452653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:53:06.942939 containerd[1796]: time="2025-01-13T20:53:06.942929929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:53:06.943515 containerd[1796]: time="2025-01-13T20:53:06.943474717Z" level=info msg="CreateContainer within sandbox \"2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:53:06.947450 containerd[1796]: time="2025-01-13T20:53:06.947437328Z" level=info msg="CreateContainer within sandbox \"2697e809f52fb39ec9238f2ebcabdf86de06eb575e2159d4b4e9a296557b6aef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b64d64444114f510f4d0f7d393e5c33becfa2237c85c2707757bcf9c63ea1d69\"" Jan 13 20:53:06.947687 containerd[1796]: time="2025-01-13T20:53:06.947671486Z" level=info msg="StartContainer for \"b64d64444114f510f4d0f7d393e5c33becfa2237c85c2707757bcf9c63ea1d69\"" Jan 13 20:53:06.977505 systemd[1]: Started cri-containerd-b64d64444114f510f4d0f7d393e5c33becfa2237c85c2707757bcf9c63ea1d69.scope - libcontainer container b64d64444114f510f4d0f7d393e5c33becfa2237c85c2707757bcf9c63ea1d69. 
Jan 13 20:53:07.007287 containerd[1796]: time="2025-01-13T20:53:07.007235389Z" level=info msg="StartContainer for \"b64d64444114f510f4d0f7d393e5c33becfa2237c85c2707757bcf9c63ea1d69\" returns successfully" Jan 13 20:53:07.285427 containerd[1796]: time="2025-01-13T20:53:07.285344454Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:07.285580 containerd[1796]: time="2025-01-13T20:53:07.285521467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:53:07.286902 containerd[1796]: time="2025-01-13T20:53:07.286860633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 343.917144ms" Jan 13 20:53:07.286902 containerd[1796]: time="2025-01-13T20:53:07.286875760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:53:07.287463 containerd[1796]: time="2025-01-13T20:53:07.287425530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:53:07.288229 containerd[1796]: time="2025-01-13T20:53:07.288161218Z" level=info msg="CreateContainer within sandbox \"0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:53:07.292828 containerd[1796]: time="2025-01-13T20:53:07.292787600Z" level=info msg="CreateContainer within sandbox \"0a94b10d8212a5ff60ed2dde2ba6e1c47964a1c36756cd098382aaf5ac408acb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"2b15da9864a1e0a58cbc82d994a63b26cebe86e5b2cbcabb241311d2ffc49ea1\"" Jan 13 20:53:07.293100 containerd[1796]: time="2025-01-13T20:53:07.293056572Z" level=info msg="StartContainer for \"2b15da9864a1e0a58cbc82d994a63b26cebe86e5b2cbcabb241311d2ffc49ea1\"" Jan 13 20:53:07.311366 systemd[1]: Started cri-containerd-2b15da9864a1e0a58cbc82d994a63b26cebe86e5b2cbcabb241311d2ffc49ea1.scope - libcontainer container 2b15da9864a1e0a58cbc82d994a63b26cebe86e5b2cbcabb241311d2ffc49ea1. Jan 13 20:53:07.336737 containerd[1796]: time="2025-01-13T20:53:07.336689804Z" level=info msg="StartContainer for \"2b15da9864a1e0a58cbc82d994a63b26cebe86e5b2cbcabb241311d2ffc49ea1\" returns successfully" Jan 13 20:53:07.717030 kubelet[3275]: I0113 20:53:07.716999 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4c785db6-2rkkg" podStartSLOduration=13.307421917 podStartE2EDuration="16.716985887s" podCreationTimestamp="2025-01-13 20:52:51 +0000 UTC" firstStartedPulling="2025-01-13 20:53:03.87779382 +0000 UTC m=+36.377100424" lastFinishedPulling="2025-01-13 20:53:07.287357779 +0000 UTC m=+39.786664394" observedRunningTime="2025-01-13 20:53:07.716706388 +0000 UTC m=+40.216012993" watchObservedRunningTime="2025-01-13 20:53:07.716985887 +0000 UTC m=+40.216292487" Jan 13 20:53:07.721396 kubelet[3275]: I0113 20:53:07.721365 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4c785db6-799q5" podStartSLOduration=13.596174545 podStartE2EDuration="16.72135512s" podCreationTimestamp="2025-01-13 20:52:51 +0000 UTC" firstStartedPulling="2025-01-13 20:53:03.817678644 +0000 UTC m=+36.316985253" lastFinishedPulling="2025-01-13 20:53:06.942859221 +0000 UTC m=+39.442165828" observedRunningTime="2025-01-13 20:53:07.721078143 +0000 UTC m=+40.220384747" watchObservedRunningTime="2025-01-13 20:53:07.72135512 +0000 UTC m=+40.220661720" Jan 13 20:53:08.714991 kubelet[3275]: I0113 
20:53:08.714970 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:53:08.715112 kubelet[3275]: I0113 20:53:08.714970 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:53:08.902024 containerd[1796]: time="2025-01-13T20:53:08.901976609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:08.902253 containerd[1796]: time="2025-01-13T20:53:08.902134263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 20:53:08.902569 containerd[1796]: time="2025-01-13T20:53:08.902556720Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:08.903517 containerd[1796]: time="2025-01-13T20:53:08.903477222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:08.904222 containerd[1796]: time="2025-01-13T20:53:08.904189858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.616750819s" Jan 13 20:53:08.904222 containerd[1796]: time="2025-01-13T20:53:08.904204560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 20:53:08.904719 containerd[1796]: 
time="2025-01-13T20:53:08.904680230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:53:08.907650 containerd[1796]: time="2025-01-13T20:53:08.907632577Z" level=info msg="CreateContainer within sandbox \"70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 20:53:08.912685 containerd[1796]: time="2025-01-13T20:53:08.912633570Z" level=info msg="CreateContainer within sandbox \"70ebd4ff280de1a1730bd274d2be4a48ff1109ad3cd99aa5c815898788274065\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8fdbfe55517faa6e9bff3ab3eac9cbcef64634768592fbd26ae1860b7f4050a6\"" Jan 13 20:53:08.912907 containerd[1796]: time="2025-01-13T20:53:08.912894583Z" level=info msg="StartContainer for \"8fdbfe55517faa6e9bff3ab3eac9cbcef64634768592fbd26ae1860b7f4050a6\"" Jan 13 20:53:08.943463 systemd[1]: Started cri-containerd-8fdbfe55517faa6e9bff3ab3eac9cbcef64634768592fbd26ae1860b7f4050a6.scope - libcontainer container 8fdbfe55517faa6e9bff3ab3eac9cbcef64634768592fbd26ae1860b7f4050a6. 
Jan 13 20:53:08.970136 containerd[1796]: time="2025-01-13T20:53:08.970076069Z" level=info msg="StartContainer for \"8fdbfe55517faa6e9bff3ab3eac9cbcef64634768592fbd26ae1860b7f4050a6\" returns successfully" Jan 13 20:53:09.255788 kubelet[3275]: I0113 20:53:09.255577 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:53:09.573196 kernel: bpftool[6963]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:53:09.719248 systemd-networkd[1716]: vxlan.calico: Link UP Jan 13 20:53:09.719252 systemd-networkd[1716]: vxlan.calico: Gained carrier Jan 13 20:53:09.724835 kubelet[3275]: I0113 20:53:09.724796 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68d58fcc5b-n4w6q" podStartSLOduration=13.728295236 podStartE2EDuration="18.724779333s" podCreationTimestamp="2025-01-13 20:52:51 +0000 UTC" firstStartedPulling="2025-01-13 20:53:03.908138425 +0000 UTC m=+36.407445036" lastFinishedPulling="2025-01-13 20:53:08.904622529 +0000 UTC m=+41.403929133" observedRunningTime="2025-01-13 20:53:09.724484556 +0000 UTC m=+42.223791160" watchObservedRunningTime="2025-01-13 20:53:09.724779333 +0000 UTC m=+42.224085935" Jan 13 20:53:10.365980 containerd[1796]: time="2025-01-13T20:53:10.365957589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:10.366198 containerd[1796]: time="2025-01-13T20:53:10.366163432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:53:10.366569 containerd[1796]: time="2025-01-13T20:53:10.366529152Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:10.367491 containerd[1796]: 
time="2025-01-13T20:53:10.367451668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:10.368205 containerd[1796]: time="2025-01-13T20:53:10.368183123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.463487533s" Jan 13 20:53:10.368205 containerd[1796]: time="2025-01-13T20:53:10.368200355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:53:10.369093 containerd[1796]: time="2025-01-13T20:53:10.369081515Z" level=info msg="CreateContainer within sandbox \"cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:53:10.373889 containerd[1796]: time="2025-01-13T20:53:10.373839763Z" level=info msg="CreateContainer within sandbox \"cfbe9409969b4e083f3f044a754478e1b0404329813783118e95763654dd633f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47532903f744189622b4eb63442947a002f6227bfdbeb5e64a31a2f781523895\"" Jan 13 20:53:10.374055 containerd[1796]: time="2025-01-13T20:53:10.374020030Z" level=info msg="StartContainer for \"47532903f744189622b4eb63442947a002f6227bfdbeb5e64a31a2f781523895\"" Jan 13 20:53:10.404460 systemd[1]: Started cri-containerd-47532903f744189622b4eb63442947a002f6227bfdbeb5e64a31a2f781523895.scope - libcontainer container 
47532903f744189622b4eb63442947a002f6227bfdbeb5e64a31a2f781523895. Jan 13 20:53:10.420823 containerd[1796]: time="2025-01-13T20:53:10.420799337Z" level=info msg="StartContainer for \"47532903f744189622b4eb63442947a002f6227bfdbeb5e64a31a2f781523895\" returns successfully" Jan 13 20:53:10.588755 kubelet[3275]: I0113 20:53:10.588691 3275 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:53:10.589746 kubelet[3275]: I0113 20:53:10.588801 3275 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:53:10.751663 kubelet[3275]: I0113 20:53:10.751491 3275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h7b85" podStartSLOduration=13.185675485 podStartE2EDuration="19.751438717s" podCreationTimestamp="2025-01-13 20:52:51 +0000 UTC" firstStartedPulling="2025-01-13 20:53:03.802716897 +0000 UTC m=+36.302023501" lastFinishedPulling="2025-01-13 20:53:10.36848013 +0000 UTC m=+42.867786733" observedRunningTime="2025-01-13 20:53:10.750213119 +0000 UTC m=+43.249519810" watchObservedRunningTime="2025-01-13 20:53:10.751438717 +0000 UTC m=+43.250745379" Jan 13 20:53:11.465456 systemd-networkd[1716]: vxlan.calico: Gained IPv6LL Jan 13 20:53:27.547575 containerd[1796]: time="2025-01-13T20:53:27.547527721Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" Jan 13 20:53:27.547826 containerd[1796]: time="2025-01-13T20:53:27.547597391Z" level=info msg="TearDown network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" successfully" Jan 13 20:53:27.547826 containerd[1796]: time="2025-01-13T20:53:27.547625664Z" level=info msg="StopPodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" returns 
successfully" Jan 13 20:53:27.547826 containerd[1796]: time="2025-01-13T20:53:27.547799550Z" level=info msg="RemovePodSandbox for \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" Jan 13 20:53:27.547826 containerd[1796]: time="2025-01-13T20:53:27.547815490Z" level=info msg="Forcibly stopping sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\"" Jan 13 20:53:27.547900 containerd[1796]: time="2025-01-13T20:53:27.547853405Z" level=info msg="TearDown network for sandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" successfully" Jan 13 20:53:27.549347 containerd[1796]: time="2025-01-13T20:53:27.549335417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:53:27.549374 containerd[1796]: time="2025-01-13T20:53:27.549356974Z" level=info msg="RemovePodSandbox \"1fd3f15800d1c117e0bb75b7bd2633696ff5013f5fdfce3448414aebd683fd00\" returns successfully" Jan 13 20:53:27.549639 containerd[1796]: time="2025-01-13T20:53:27.549603526Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\"" Jan 13 20:53:27.549760 containerd[1796]: time="2025-01-13T20:53:27.549711396Z" level=info msg="TearDown network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" successfully" Jan 13 20:53:27.549760 containerd[1796]: time="2025-01-13T20:53:27.549736759Z" level=info msg="StopPodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" returns successfully" Jan 13 20:53:27.549982 containerd[1796]: time="2025-01-13T20:53:27.549949941Z" level=info msg="RemovePodSandbox for \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\"" Jan 13 20:53:27.550027 containerd[1796]: time="2025-01-13T20:53:27.549981209Z" 
level=info msg="Forcibly stopping sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\"" Jan 13 20:53:27.550046 containerd[1796]: time="2025-01-13T20:53:27.550030392Z" level=info msg="TearDown network for sandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" successfully" Jan 13 20:53:27.551397 containerd[1796]: time="2025-01-13T20:53:27.551380781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:53:27.551438 containerd[1796]: time="2025-01-13T20:53:27.551407559Z" level=info msg="RemovePodSandbox \"456e6343ee0f57bcb8dcd9404bcdf2a4c885fd2fd17130c9b0edfda25a692b9c\" returns successfully" Jan 13 20:53:27.551577 containerd[1796]: time="2025-01-13T20:53:27.551551592Z" level=info msg="StopPodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\"" Jan 13 20:53:27.551640 containerd[1796]: time="2025-01-13T20:53:27.551630288Z" level=info msg="TearDown network for sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" successfully" Jan 13 20:53:27.551711 containerd[1796]: time="2025-01-13T20:53:27.551640122Z" level=info msg="StopPodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" returns successfully" Jan 13 20:53:27.551792 containerd[1796]: time="2025-01-13T20:53:27.551780985Z" level=info msg="RemovePodSandbox for \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\"" Jan 13 20:53:27.551815 containerd[1796]: time="2025-01-13T20:53:27.551795250Z" level=info msg="Forcibly stopping sandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\"" Jan 13 20:53:27.551853 containerd[1796]: time="2025-01-13T20:53:27.551835934Z" level=info msg="TearDown network for sandbox 
\"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" successfully" Jan 13 20:53:27.553058 containerd[1796]: time="2025-01-13T20:53:27.553046357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:53:27.553087 containerd[1796]: time="2025-01-13T20:53:27.553065110Z" level=info msg="RemovePodSandbox \"8e84fc2c98de3c57d9293d78ddd5f8dc67449ab21cb4315a819585e3ab6855f0\" returns successfully" Jan 13 20:53:27.553241 containerd[1796]: time="2025-01-13T20:53:27.553214014Z" level=info msg="StopPodSandbox for \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\"" Jan 13 20:53:27.553275 containerd[1796]: time="2025-01-13T20:53:27.553268609Z" level=info msg="TearDown network for sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\" successfully" Jan 13 20:53:27.553295 containerd[1796]: time="2025-01-13T20:53:27.553275309Z" level=info msg="StopPodSandbox for \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\" returns successfully" Jan 13 20:53:27.553426 containerd[1796]: time="2025-01-13T20:53:27.553402036Z" level=info msg="RemovePodSandbox for \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\"" Jan 13 20:53:27.553458 containerd[1796]: time="2025-01-13T20:53:27.553428101Z" level=info msg="Forcibly stopping sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\"" Jan 13 20:53:27.553493 containerd[1796]: time="2025-01-13T20:53:27.553471678Z" level=info msg="TearDown network for sandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\" successfully" Jan 13 20:53:27.554602 containerd[1796]: time="2025-01-13T20:53:27.554589902Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:53:27.554650 containerd[1796]: time="2025-01-13T20:53:27.554610871Z" level=info msg="RemovePodSandbox \"85d4f68b49e6b6618f8f77479f00c1e15c4237019feac121537907a5f77e1b37\" returns successfully" Jan 13 20:53:27.554804 containerd[1796]: time="2025-01-13T20:53:27.554780785Z" level=info msg="StopPodSandbox for \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\"" Jan 13 20:53:27.554875 containerd[1796]: time="2025-01-13T20:53:27.554868350Z" level=info msg="TearDown network for sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\" successfully" Jan 13 20:53:27.554912 containerd[1796]: time="2025-01-13T20:53:27.554875389Z" level=info msg="StopPodSandbox for \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\" returns successfully" Jan 13 20:53:27.554986 containerd[1796]: time="2025-01-13T20:53:27.554977928Z" level=info msg="RemovePodSandbox for \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\"" Jan 13 20:53:27.555006 containerd[1796]: time="2025-01-13T20:53:27.554989064Z" level=info msg="Forcibly stopping sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\"" Jan 13 20:53:27.555041 containerd[1796]: time="2025-01-13T20:53:27.555026124Z" level=info msg="TearDown network for sandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\" successfully" Jan 13 20:53:27.556133 containerd[1796]: time="2025-01-13T20:53:27.556122379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.556195 containerd[1796]: time="2025-01-13T20:53:27.556151585Z" level=info msg="RemovePodSandbox \"0ce5f45924b9e06d91af147bed525b856e7423dbb61ace896410f734e6c8b005\" returns successfully" Jan 13 20:53:27.556397 containerd[1796]: time="2025-01-13T20:53:27.556365064Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" Jan 13 20:53:27.556516 containerd[1796]: time="2025-01-13T20:53:27.556467048Z" level=info msg="TearDown network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" successfully" Jan 13 20:53:27.556516 containerd[1796]: time="2025-01-13T20:53:27.556473934Z" level=info msg="StopPodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" returns successfully" Jan 13 20:53:27.556700 containerd[1796]: time="2025-01-13T20:53:27.556685996Z" level=info msg="RemovePodSandbox for \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" Jan 13 20:53:27.556739 containerd[1796]: time="2025-01-13T20:53:27.556702948Z" level=info msg="Forcibly stopping sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\"" Jan 13 20:53:27.556772 containerd[1796]: time="2025-01-13T20:53:27.556751068Z" level=info msg="TearDown network for sandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" successfully" Jan 13 20:53:27.557928 containerd[1796]: time="2025-01-13T20:53:27.557914730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.558043 containerd[1796]: time="2025-01-13T20:53:27.557949923Z" level=info msg="RemovePodSandbox \"bc0d9cc2b9b8e93debedb3944b0e8b37375a66770da30a4aa85ea30a3e80e9f1\" returns successfully" Jan 13 20:53:27.558084 containerd[1796]: time="2025-01-13T20:53:27.558074700Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\"" Jan 13 20:53:27.558167 containerd[1796]: time="2025-01-13T20:53:27.558137005Z" level=info msg="TearDown network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" successfully" Jan 13 20:53:27.558167 containerd[1796]: time="2025-01-13T20:53:27.558162641Z" level=info msg="StopPodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" returns successfully" Jan 13 20:53:27.558325 containerd[1796]: time="2025-01-13T20:53:27.558315677Z" level=info msg="RemovePodSandbox for \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\"" Jan 13 20:53:27.558353 containerd[1796]: time="2025-01-13T20:53:27.558326699Z" level=info msg="Forcibly stopping sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\"" Jan 13 20:53:27.558381 containerd[1796]: time="2025-01-13T20:53:27.558364131Z" level=info msg="TearDown network for sandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" successfully" Jan 13 20:53:27.559452 containerd[1796]: time="2025-01-13T20:53:27.559442402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.559505 containerd[1796]: time="2025-01-13T20:53:27.559459561Z" level=info msg="RemovePodSandbox \"c72566025eb6bf20e7a12141ba8d334f96ef61a7fe37ad141a1776f33dbbb2db\" returns successfully" Jan 13 20:53:27.559577 containerd[1796]: time="2025-01-13T20:53:27.559565922Z" level=info msg="StopPodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\"" Jan 13 20:53:27.559664 containerd[1796]: time="2025-01-13T20:53:27.559655300Z" level=info msg="TearDown network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" successfully" Jan 13 20:53:27.559698 containerd[1796]: time="2025-01-13T20:53:27.559664176Z" level=info msg="StopPodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" returns successfully" Jan 13 20:53:27.559787 containerd[1796]: time="2025-01-13T20:53:27.559779291Z" level=info msg="RemovePodSandbox for \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\"" Jan 13 20:53:27.559809 containerd[1796]: time="2025-01-13T20:53:27.559789226Z" level=info msg="Forcibly stopping sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\"" Jan 13 20:53:27.559836 containerd[1796]: time="2025-01-13T20:53:27.559817605Z" level=info msg="TearDown network for sandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" successfully" Jan 13 20:53:27.560984 containerd[1796]: time="2025-01-13T20:53:27.560972438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.561013 containerd[1796]: time="2025-01-13T20:53:27.560991140Z" level=info msg="RemovePodSandbox \"24c732b38f370468236a72ab32f80d6878d64538399391f045d0acac2b8b6fb1\" returns successfully" Jan 13 20:53:27.561152 containerd[1796]: time="2025-01-13T20:53:27.561143022Z" level=info msg="StopPodSandbox for \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\"" Jan 13 20:53:27.561247 containerd[1796]: time="2025-01-13T20:53:27.561212988Z" level=info msg="TearDown network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\" successfully" Jan 13 20:53:27.561247 containerd[1796]: time="2025-01-13T20:53:27.561219526Z" level=info msg="StopPodSandbox for \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\" returns successfully" Jan 13 20:53:27.561492 containerd[1796]: time="2025-01-13T20:53:27.561467201Z" level=info msg="RemovePodSandbox for \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\"" Jan 13 20:53:27.561562 containerd[1796]: time="2025-01-13T20:53:27.561493477Z" level=info msg="Forcibly stopping sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\"" Jan 13 20:53:27.561594 containerd[1796]: time="2025-01-13T20:53:27.561567456Z" level=info msg="TearDown network for sandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\" successfully" Jan 13 20:53:27.562684 containerd[1796]: time="2025-01-13T20:53:27.562671425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.562744 containerd[1796]: time="2025-01-13T20:53:27.562694098Z" level=info msg="RemovePodSandbox \"d3d05756677299e199887f1cef0224a1ce4957a5cd37be0a13530a13e2555265\" returns successfully" Jan 13 20:53:27.562905 containerd[1796]: time="2025-01-13T20:53:27.562896323Z" level=info msg="StopPodSandbox for \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\"" Jan 13 20:53:27.562983 containerd[1796]: time="2025-01-13T20:53:27.562976181Z" level=info msg="TearDown network for sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\" successfully" Jan 13 20:53:27.563006 containerd[1796]: time="2025-01-13T20:53:27.562983105Z" level=info msg="StopPodSandbox for \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\" returns successfully" Jan 13 20:53:27.563094 containerd[1796]: time="2025-01-13T20:53:27.563084781Z" level=info msg="RemovePodSandbox for \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\"" Jan 13 20:53:27.563115 containerd[1796]: time="2025-01-13T20:53:27.563096737Z" level=info msg="Forcibly stopping sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\"" Jan 13 20:53:27.563147 containerd[1796]: time="2025-01-13T20:53:27.563131474Z" level=info msg="TearDown network for sandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\" successfully" Jan 13 20:53:27.564405 containerd[1796]: time="2025-01-13T20:53:27.564364526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.564405 containerd[1796]: time="2025-01-13T20:53:27.564388971Z" level=info msg="RemovePodSandbox \"7d89662bcfc6315ce75a7989c9fcff2544e25856cc29723aa4c2a8615c036df4\" returns successfully" Jan 13 20:53:27.564634 containerd[1796]: time="2025-01-13T20:53:27.564623708Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" Jan 13 20:53:27.564736 containerd[1796]: time="2025-01-13T20:53:27.564678904Z" level=info msg="TearDown network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" successfully" Jan 13 20:53:27.564736 containerd[1796]: time="2025-01-13T20:53:27.564703708Z" level=info msg="StopPodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" returns successfully" Jan 13 20:53:27.564862 containerd[1796]: time="2025-01-13T20:53:27.564853799Z" level=info msg="RemovePodSandbox for \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" Jan 13 20:53:27.564887 containerd[1796]: time="2025-01-13T20:53:27.564862895Z" level=info msg="Forcibly stopping sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\"" Jan 13 20:53:27.564920 containerd[1796]: time="2025-01-13T20:53:27.564891271Z" level=info msg="TearDown network for sandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" successfully" Jan 13 20:53:27.566028 containerd[1796]: time="2025-01-13T20:53:27.565986514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.566028 containerd[1796]: time="2025-01-13T20:53:27.566003847Z" level=info msg="RemovePodSandbox \"ef5f207624ba5c43d680c6587e4fc03354f47110d4ee73746c6408d8b13554e0\" returns successfully" Jan 13 20:53:27.566137 containerd[1796]: time="2025-01-13T20:53:27.566129317Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\"" Jan 13 20:53:27.566237 containerd[1796]: time="2025-01-13T20:53:27.566171096Z" level=info msg="TearDown network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" successfully" Jan 13 20:53:27.566237 containerd[1796]: time="2025-01-13T20:53:27.566177531Z" level=info msg="StopPodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" returns successfully" Jan 13 20:53:27.566398 containerd[1796]: time="2025-01-13T20:53:27.566377046Z" level=info msg="RemovePodSandbox for \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\"" Jan 13 20:53:27.566398 containerd[1796]: time="2025-01-13T20:53:27.566386747Z" level=info msg="Forcibly stopping sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\"" Jan 13 20:53:27.566463 containerd[1796]: time="2025-01-13T20:53:27.566415918Z" level=info msg="TearDown network for sandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" successfully" Jan 13 20:53:27.567604 containerd[1796]: time="2025-01-13T20:53:27.567559679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.567642 containerd[1796]: time="2025-01-13T20:53:27.567602600Z" level=info msg="RemovePodSandbox \"b4c4a6c10d20b2a33196606ebc3cbcd87a2cf342ce15bb27f9b1bfa2e4be7b59\" returns successfully" Jan 13 20:53:27.567783 containerd[1796]: time="2025-01-13T20:53:27.567757604Z" level=info msg="StopPodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\"" Jan 13 20:53:27.567857 containerd[1796]: time="2025-01-13T20:53:27.567811975Z" level=info msg="TearDown network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" successfully" Jan 13 20:53:27.567857 containerd[1796]: time="2025-01-13T20:53:27.567832343Z" level=info msg="StopPodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" returns successfully" Jan 13 20:53:27.567943 containerd[1796]: time="2025-01-13T20:53:27.567934658Z" level=info msg="RemovePodSandbox for \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\"" Jan 13 20:53:27.567966 containerd[1796]: time="2025-01-13T20:53:27.567946191Z" level=info msg="Forcibly stopping sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\"" Jan 13 20:53:27.567991 containerd[1796]: time="2025-01-13T20:53:27.567976670Z" level=info msg="TearDown network for sandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" successfully" Jan 13 20:53:27.569088 containerd[1796]: time="2025-01-13T20:53:27.569045658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.569088 containerd[1796]: time="2025-01-13T20:53:27.569085125Z" level=info msg="RemovePodSandbox \"6d430c351838389f0ab1ccbec2e3f4dc91faeb821ac26da3ce257312d149e0a3\" returns successfully" Jan 13 20:53:27.569261 containerd[1796]: time="2025-01-13T20:53:27.569250670Z" level=info msg="StopPodSandbox for \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\"" Jan 13 20:53:27.569326 containerd[1796]: time="2025-01-13T20:53:27.569319928Z" level=info msg="TearDown network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\" successfully" Jan 13 20:53:27.569424 containerd[1796]: time="2025-01-13T20:53:27.569325588Z" level=info msg="StopPodSandbox for \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\" returns successfully" Jan 13 20:53:27.569511 containerd[1796]: time="2025-01-13T20:53:27.569499969Z" level=info msg="RemovePodSandbox for \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\"" Jan 13 20:53:27.569567 containerd[1796]: time="2025-01-13T20:53:27.569512596Z" level=info msg="Forcibly stopping sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\"" Jan 13 20:53:27.569600 containerd[1796]: time="2025-01-13T20:53:27.569577487Z" level=info msg="TearDown network for sandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\" successfully" Jan 13 20:53:27.570713 containerd[1796]: time="2025-01-13T20:53:27.570699735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.570762 containerd[1796]: time="2025-01-13T20:53:27.570722659Z" level=info msg="RemovePodSandbox \"f92eaaeec95925ddfa10fb81b22e967777c08ea032a350ccfbacd2402b579795\" returns successfully" Jan 13 20:53:27.570839 containerd[1796]: time="2025-01-13T20:53:27.570828634Z" level=info msg="StopPodSandbox for \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\"" Jan 13 20:53:27.570890 containerd[1796]: time="2025-01-13T20:53:27.570880492Z" level=info msg="TearDown network for sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\" successfully" Jan 13 20:53:27.570929 containerd[1796]: time="2025-01-13T20:53:27.570888916Z" level=info msg="StopPodSandbox for \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\" returns successfully" Jan 13 20:53:27.571039 containerd[1796]: time="2025-01-13T20:53:27.571028815Z" level=info msg="RemovePodSandbox for \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\"" Jan 13 20:53:27.571074 containerd[1796]: time="2025-01-13T20:53:27.571041510Z" level=info msg="Forcibly stopping sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\"" Jan 13 20:53:27.571109 containerd[1796]: time="2025-01-13T20:53:27.571085185Z" level=info msg="TearDown network for sandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\" successfully" Jan 13 20:53:27.572257 containerd[1796]: time="2025-01-13T20:53:27.572243050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.572296 containerd[1796]: time="2025-01-13T20:53:27.572269058Z" level=info msg="RemovePodSandbox \"a9d6f41974eb9dabe1cf88f294b457bd53658ea8127e0f87575120f54cb81670\" returns successfully" Jan 13 20:53:27.572480 containerd[1796]: time="2025-01-13T20:53:27.572454840Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" Jan 13 20:53:27.572558 containerd[1796]: time="2025-01-13T20:53:27.572548831Z" level=info msg="TearDown network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" successfully" Jan 13 20:53:27.572607 containerd[1796]: time="2025-01-13T20:53:27.572556919Z" level=info msg="StopPodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" returns successfully" Jan 13 20:53:27.572715 containerd[1796]: time="2025-01-13T20:53:27.572703913Z" level=info msg="RemovePodSandbox for \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" Jan 13 20:53:27.572748 containerd[1796]: time="2025-01-13T20:53:27.572714572Z" level=info msg="Forcibly stopping sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\"" Jan 13 20:53:27.572769 containerd[1796]: time="2025-01-13T20:53:27.572745991Z" level=info msg="TearDown network for sandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" successfully" Jan 13 20:53:27.573930 containerd[1796]: time="2025-01-13T20:53:27.573894434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.573930 containerd[1796]: time="2025-01-13T20:53:27.573914905Z" level=info msg="RemovePodSandbox \"8ea3cd1e0823c9840b04ba6e04a90f50370ef737a9e00d5acbefd7978bcd15c3\" returns successfully" Jan 13 20:53:27.574041 containerd[1796]: time="2025-01-13T20:53:27.574030857Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\"" Jan 13 20:53:27.574091 containerd[1796]: time="2025-01-13T20:53:27.574082681Z" level=info msg="TearDown network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" successfully" Jan 13 20:53:27.574091 containerd[1796]: time="2025-01-13T20:53:27.574089960Z" level=info msg="StopPodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" returns successfully" Jan 13 20:53:27.574221 containerd[1796]: time="2025-01-13T20:53:27.574194295Z" level=info msg="RemovePodSandbox for \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\"" Jan 13 20:53:27.574296 containerd[1796]: time="2025-01-13T20:53:27.574221698Z" level=info msg="Forcibly stopping sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\"" Jan 13 20:53:27.574340 containerd[1796]: time="2025-01-13T20:53:27.574297774Z" level=info msg="TearDown network for sandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" successfully" Jan 13 20:53:27.575385 containerd[1796]: time="2025-01-13T20:53:27.575372169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.575432 containerd[1796]: time="2025-01-13T20:53:27.575393563Z" level=info msg="RemovePodSandbox \"0127c52613930b78e5e66edfcc01dcf3f62cba9d857bf91f828a4557bcfbb1d3\" returns successfully" Jan 13 20:53:27.575610 containerd[1796]: time="2025-01-13T20:53:27.575596730Z" level=info msg="StopPodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\"" Jan 13 20:53:27.575693 containerd[1796]: time="2025-01-13T20:53:27.575652949Z" level=info msg="TearDown network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" successfully" Jan 13 20:53:27.575693 containerd[1796]: time="2025-01-13T20:53:27.575659554Z" level=info msg="StopPodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" returns successfully" Jan 13 20:53:27.575888 containerd[1796]: time="2025-01-13T20:53:27.575854336Z" level=info msg="RemovePodSandbox for \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\"" Jan 13 20:53:27.575888 containerd[1796]: time="2025-01-13T20:53:27.575865171Z" level=info msg="Forcibly stopping sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\"" Jan 13 20:53:27.575928 containerd[1796]: time="2025-01-13T20:53:27.575910401Z" level=info msg="TearDown network for sandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" successfully" Jan 13 20:53:27.577097 containerd[1796]: time="2025-01-13T20:53:27.577085408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.577122 containerd[1796]: time="2025-01-13T20:53:27.577105307Z" level=info msg="RemovePodSandbox \"1aad0586da378838df74df5e9d655a3fc7187bde4f483a1c989f64fa61a8f116\" returns successfully" Jan 13 20:53:27.577311 containerd[1796]: time="2025-01-13T20:53:27.577272043Z" level=info msg="StopPodSandbox for \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\"" Jan 13 20:53:27.577373 containerd[1796]: time="2025-01-13T20:53:27.577332745Z" level=info msg="TearDown network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\" successfully" Jan 13 20:53:27.577373 containerd[1796]: time="2025-01-13T20:53:27.577356459Z" level=info msg="StopPodSandbox for \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\" returns successfully" Jan 13 20:53:27.577494 containerd[1796]: time="2025-01-13T20:53:27.577484231Z" level=info msg="RemovePodSandbox for \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\"" Jan 13 20:53:27.577523 containerd[1796]: time="2025-01-13T20:53:27.577495623Z" level=info msg="Forcibly stopping sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\"" Jan 13 20:53:27.577560 containerd[1796]: time="2025-01-13T20:53:27.577526340Z" level=info msg="TearDown network for sandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\" successfully" Jan 13 20:53:27.579398 containerd[1796]: time="2025-01-13T20:53:27.579357851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.579439 containerd[1796]: time="2025-01-13T20:53:27.579400962Z" level=info msg="RemovePodSandbox \"d181c18b3b9b760a94b9c0757331178506d162626304588b2d3668a8c6afdaa1\" returns successfully" Jan 13 20:53:27.579614 containerd[1796]: time="2025-01-13T20:53:27.579568691Z" level=info msg="StopPodSandbox for \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\"" Jan 13 20:53:27.579646 containerd[1796]: time="2025-01-13T20:53:27.579625191Z" level=info msg="TearDown network for sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\" successfully" Jan 13 20:53:27.579671 containerd[1796]: time="2025-01-13T20:53:27.579646081Z" level=info msg="StopPodSandbox for \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\" returns successfully" Jan 13 20:53:27.579888 containerd[1796]: time="2025-01-13T20:53:27.579832267Z" level=info msg="RemovePodSandbox for \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\"" Jan 13 20:53:27.579888 containerd[1796]: time="2025-01-13T20:53:27.579860866Z" level=info msg="Forcibly stopping sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\"" Jan 13 20:53:27.579954 containerd[1796]: time="2025-01-13T20:53:27.579924398Z" level=info msg="TearDown network for sandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\" successfully" Jan 13 20:53:27.581113 containerd[1796]: time="2025-01-13T20:53:27.581072856Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.581113 containerd[1796]: time="2025-01-13T20:53:27.581091302Z" level=info msg="RemovePodSandbox \"fa58dd889033e2e9bc7de6022862af87daf1e36fed64e7f7a5092e31f397bcf9\" returns successfully" Jan 13 20:53:27.581312 containerd[1796]: time="2025-01-13T20:53:27.581273414Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" Jan 13 20:53:27.581361 containerd[1796]: time="2025-01-13T20:53:27.581333259Z" level=info msg="TearDown network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" successfully" Jan 13 20:53:27.581361 containerd[1796]: time="2025-01-13T20:53:27.581339019Z" level=info msg="StopPodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" returns successfully" Jan 13 20:53:27.581557 containerd[1796]: time="2025-01-13T20:53:27.581522426Z" level=info msg="RemovePodSandbox for \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" Jan 13 20:53:27.581557 containerd[1796]: time="2025-01-13T20:53:27.581550955Z" level=info msg="Forcibly stopping sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\"" Jan 13 20:53:27.581622 containerd[1796]: time="2025-01-13T20:53:27.581600886Z" level=info msg="TearDown network for sandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" successfully" Jan 13 20:53:27.582889 containerd[1796]: time="2025-01-13T20:53:27.582845586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.582889 containerd[1796]: time="2025-01-13T20:53:27.582864745Z" level=info msg="RemovePodSandbox \"e1c4d74fea366482420e27e4d2319ef8fa6392820f0f49a85b79676210aee06f\" returns successfully" Jan 13 20:53:27.583041 containerd[1796]: time="2025-01-13T20:53:27.583028314Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\"" Jan 13 20:53:27.583076 containerd[1796]: time="2025-01-13T20:53:27.583070183Z" level=info msg="TearDown network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" successfully" Jan 13 20:53:27.583114 containerd[1796]: time="2025-01-13T20:53:27.583076515Z" level=info msg="StopPodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" returns successfully" Jan 13 20:53:27.583289 containerd[1796]: time="2025-01-13T20:53:27.583232900Z" level=info msg="RemovePodSandbox for \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\"" Jan 13 20:53:27.583289 containerd[1796]: time="2025-01-13T20:53:27.583244786Z" level=info msg="Forcibly stopping sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\"" Jan 13 20:53:27.583390 containerd[1796]: time="2025-01-13T20:53:27.583318945Z" level=info msg="TearDown network for sandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" successfully" Jan 13 20:53:27.584466 containerd[1796]: time="2025-01-13T20:53:27.584426658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.584466 containerd[1796]: time="2025-01-13T20:53:27.584443710Z" level=info msg="RemovePodSandbox \"fccc10c98a708ccfd7b3a1ab99162466d79ed142c90b93c1fa738a919e05a5a2\" returns successfully" Jan 13 20:53:27.584710 containerd[1796]: time="2025-01-13T20:53:27.584669589Z" level=info msg="StopPodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\"" Jan 13 20:53:27.584760 containerd[1796]: time="2025-01-13T20:53:27.584743262Z" level=info msg="TearDown network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" successfully" Jan 13 20:53:27.584760 containerd[1796]: time="2025-01-13T20:53:27.584752759Z" level=info msg="StopPodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" returns successfully" Jan 13 20:53:27.585045 containerd[1796]: time="2025-01-13T20:53:27.584993891Z" level=info msg="RemovePodSandbox for \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\"" Jan 13 20:53:27.585045 containerd[1796]: time="2025-01-13T20:53:27.585020429Z" level=info msg="Forcibly stopping sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\"" Jan 13 20:53:27.585095 containerd[1796]: time="2025-01-13T20:53:27.585071765Z" level=info msg="TearDown network for sandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" successfully" Jan 13 20:53:27.586182 containerd[1796]: time="2025-01-13T20:53:27.586126821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.586228 containerd[1796]: time="2025-01-13T20:53:27.586187000Z" level=info msg="RemovePodSandbox \"0468e4bf08413113b31ecb582a34c0bafbe9d8746c97c8566d561f3d560fab33\" returns successfully" Jan 13 20:53:27.586421 containerd[1796]: time="2025-01-13T20:53:27.586388377Z" level=info msg="StopPodSandbox for \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\"" Jan 13 20:53:27.586482 containerd[1796]: time="2025-01-13T20:53:27.586473340Z" level=info msg="TearDown network for sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\" successfully" Jan 13 20:53:27.586482 containerd[1796]: time="2025-01-13T20:53:27.586479502Z" level=info msg="StopPodSandbox for \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\" returns successfully" Jan 13 20:53:27.586689 containerd[1796]: time="2025-01-13T20:53:27.586656876Z" level=info msg="RemovePodSandbox for \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\"" Jan 13 20:53:27.586689 containerd[1796]: time="2025-01-13T20:53:27.586666422Z" level=info msg="Forcibly stopping sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\"" Jan 13 20:53:27.586751 containerd[1796]: time="2025-01-13T20:53:27.586733928Z" level=info msg="TearDown network for sandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\" successfully" Jan 13 20:53:27.588041 containerd[1796]: time="2025-01-13T20:53:27.588000809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.588041 containerd[1796]: time="2025-01-13T20:53:27.588017418Z" level=info msg="RemovePodSandbox \"e2bd9901d93239de7de72a43ec0a8cf69dae7a9d44c98375eba4abd642185933\" returns successfully" Jan 13 20:53:27.588223 containerd[1796]: time="2025-01-13T20:53:27.588163042Z" level=info msg="StopPodSandbox for \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\"" Jan 13 20:53:27.588253 containerd[1796]: time="2025-01-13T20:53:27.588231261Z" level=info msg="TearDown network for sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\" successfully" Jan 13 20:53:27.588253 containerd[1796]: time="2025-01-13T20:53:27.588238362Z" level=info msg="StopPodSandbox for \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\" returns successfully" Jan 13 20:53:27.588432 containerd[1796]: time="2025-01-13T20:53:27.588403673Z" level=info msg="RemovePodSandbox for \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\"" Jan 13 20:53:27.588432 containerd[1796]: time="2025-01-13T20:53:27.588414652Z" level=info msg="Forcibly stopping sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\"" Jan 13 20:53:27.588485 containerd[1796]: time="2025-01-13T20:53:27.588445215Z" level=info msg="TearDown network for sandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\" successfully" Jan 13 20:53:27.589548 containerd[1796]: time="2025-01-13T20:53:27.589507940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.589548 containerd[1796]: time="2025-01-13T20:53:27.589524210Z" level=info msg="RemovePodSandbox \"a42031632e47f03510b85f23f6e159c9b3c2b84567be9c0492165f91acc0e677\" returns successfully" Jan 13 20:53:27.589812 containerd[1796]: time="2025-01-13T20:53:27.589760953Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" Jan 13 20:53:27.589904 containerd[1796]: time="2025-01-13T20:53:27.589884269Z" level=info msg="TearDown network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" successfully" Jan 13 20:53:27.589938 containerd[1796]: time="2025-01-13T20:53:27.589906854Z" level=info msg="StopPodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" returns successfully" Jan 13 20:53:27.590109 containerd[1796]: time="2025-01-13T20:53:27.590077436Z" level=info msg="RemovePodSandbox for \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" Jan 13 20:53:27.590109 containerd[1796]: time="2025-01-13T20:53:27.590106574Z" level=info msg="Forcibly stopping sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\"" Jan 13 20:53:27.590157 containerd[1796]: time="2025-01-13T20:53:27.590138970Z" level=info msg="TearDown network for sandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" successfully" Jan 13 20:53:27.591432 containerd[1796]: time="2025-01-13T20:53:27.591393114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.591432 containerd[1796]: time="2025-01-13T20:53:27.591410111Z" level=info msg="RemovePodSandbox \"d15156abea73ec1e9b5ab33e4bbf79cb7b5b53fdcd536741876459cfab2afcae\" returns successfully" Jan 13 20:53:27.591628 containerd[1796]: time="2025-01-13T20:53:27.591588540Z" level=info msg="StopPodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\"" Jan 13 20:53:27.591628 containerd[1796]: time="2025-01-13T20:53:27.591626921Z" level=info msg="TearDown network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" successfully" Jan 13 20:53:27.591710 containerd[1796]: time="2025-01-13T20:53:27.591633693Z" level=info msg="StopPodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" returns successfully" Jan 13 20:53:27.591796 containerd[1796]: time="2025-01-13T20:53:27.591758248Z" level=info msg="RemovePodSandbox for \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\"" Jan 13 20:53:27.591796 containerd[1796]: time="2025-01-13T20:53:27.591789406Z" level=info msg="Forcibly stopping sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\"" Jan 13 20:53:27.591846 containerd[1796]: time="2025-01-13T20:53:27.591824280Z" level=info msg="TearDown network for sandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" successfully" Jan 13 20:53:27.592945 containerd[1796]: time="2025-01-13T20:53:27.592906041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.592945 containerd[1796]: time="2025-01-13T20:53:27.592923048Z" level=info msg="RemovePodSandbox \"4cb413ded463b1acfb7b7fe1be41346d4c6a2ee23968bb4bbe29ce68e2883e00\" returns successfully" Jan 13 20:53:27.593061 containerd[1796]: time="2025-01-13T20:53:27.593050001Z" level=info msg="StopPodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\"" Jan 13 20:53:27.593097 containerd[1796]: time="2025-01-13T20:53:27.593090302Z" level=info msg="TearDown network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" successfully" Jan 13 20:53:27.593118 containerd[1796]: time="2025-01-13T20:53:27.593097036Z" level=info msg="StopPodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" returns successfully" Jan 13 20:53:27.593294 containerd[1796]: time="2025-01-13T20:53:27.593252769Z" level=info msg="RemovePodSandbox for \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\"" Jan 13 20:53:27.593294 containerd[1796]: time="2025-01-13T20:53:27.593262850Z" level=info msg="Forcibly stopping sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\"" Jan 13 20:53:27.593402 containerd[1796]: time="2025-01-13T20:53:27.593294182Z" level=info msg="TearDown network for sandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" successfully" Jan 13 20:53:27.594545 containerd[1796]: time="2025-01-13T20:53:27.594500822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.594545 containerd[1796]: time="2025-01-13T20:53:27.594519492Z" level=info msg="RemovePodSandbox \"2a5e6ccf3ede85835692172123dc1d34adf651b783f40d77483af428d5b7aacd\" returns successfully" Jan 13 20:53:27.594696 containerd[1796]: time="2025-01-13T20:53:27.594663203Z" level=info msg="StopPodSandbox for \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\"" Jan 13 20:53:27.594758 containerd[1796]: time="2025-01-13T20:53:27.594731989Z" level=info msg="TearDown network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\" successfully" Jan 13 20:53:27.594758 containerd[1796]: time="2025-01-13T20:53:27.594738308Z" level=info msg="StopPodSandbox for \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\" returns successfully" Jan 13 20:53:27.594905 containerd[1796]: time="2025-01-13T20:53:27.594864810Z" level=info msg="RemovePodSandbox for \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\"" Jan 13 20:53:27.594905 containerd[1796]: time="2025-01-13T20:53:27.594874453Z" level=info msg="Forcibly stopping sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\"" Jan 13 20:53:27.594994 containerd[1796]: time="2025-01-13T20:53:27.594906369Z" level=info msg="TearDown network for sandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\" successfully" Jan 13 20:53:27.596067 containerd[1796]: time="2025-01-13T20:53:27.596028332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.596067 containerd[1796]: time="2025-01-13T20:53:27.596053783Z" level=info msg="RemovePodSandbox \"8c1dd8510bdfc4a9640e53dc0a2fd91fe1c138913a09aeab75084a3af1247e6b\" returns successfully" Jan 13 20:53:27.596294 containerd[1796]: time="2025-01-13T20:53:27.596247065Z" level=info msg="StopPodSandbox for \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\"" Jan 13 20:53:27.596356 containerd[1796]: time="2025-01-13T20:53:27.596311015Z" level=info msg="TearDown network for sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\" successfully" Jan 13 20:53:27.596356 containerd[1796]: time="2025-01-13T20:53:27.596334853Z" level=info msg="StopPodSandbox for \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\" returns successfully" Jan 13 20:53:27.596556 containerd[1796]: time="2025-01-13T20:53:27.596521532Z" level=info msg="RemovePodSandbox for \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\"" Jan 13 20:53:27.596556 containerd[1796]: time="2025-01-13T20:53:27.596551876Z" level=info msg="Forcibly stopping sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\"" Jan 13 20:53:27.596605 containerd[1796]: time="2025-01-13T20:53:27.596583995Z" level=info msg="TearDown network for sandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\" successfully" Jan 13 20:53:27.597786 containerd[1796]: time="2025-01-13T20:53:27.597741577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:53:27.597786 containerd[1796]: time="2025-01-13T20:53:27.597759436Z" level=info msg="RemovePodSandbox \"a69420d9ba2ab6d95a92d6b72a8223726624071e1ee839bf966541793cfb6160\" returns successfully" Jan 13 20:53:28.171945 kubelet[3275]: I0113 20:53:28.171840 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:53:32.639306 kubelet[3275]: I0113 20:53:32.639221 3275 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:58:54.641060 systemd[1]: Started sshd@9-147.28.180.137:22-147.75.109.163:39694.service - OpenSSH per-connection server daemon (147.75.109.163:39694). Jan 13 20:58:54.691181 sshd[7919]: Accepted publickey for core from 147.75.109.163 port 39694 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:58:54.692294 sshd-session[7919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:54.696550 systemd-logind[1785]: New session 12 of user core. Jan 13 20:58:54.708340 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:58:54.844391 sshd[7921]: Connection closed by 147.75.109.163 port 39694 Jan 13 20:58:54.844559 sshd-session[7919]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:54.846073 systemd[1]: sshd@9-147.28.180.137:22-147.75.109.163:39694.service: Deactivated successfully. Jan 13 20:58:54.847109 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:58:54.847841 systemd-logind[1785]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:58:54.848566 systemd-logind[1785]: Removed session 12. Jan 13 20:58:59.866644 systemd[1]: Started sshd@10-147.28.180.137:22-147.75.109.163:38956.service - OpenSSH per-connection server daemon (147.75.109.163:38956). 
Jan 13 20:58:59.900555 sshd[7987]: Accepted publickey for core from 147.75.109.163 port 38956 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:58:59.901203 sshd-session[7987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:59.903724 systemd-logind[1785]: New session 13 of user core. Jan 13 20:58:59.921473 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:59:00.011464 sshd[7989]: Connection closed by 147.75.109.163 port 38956 Jan 13 20:59:00.011671 sshd-session[7987]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:00.013038 systemd[1]: sshd@10-147.28.180.137:22-147.75.109.163:38956.service: Deactivated successfully. Jan 13 20:59:00.013957 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:59:00.014687 systemd-logind[1785]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:59:00.015256 systemd-logind[1785]: Removed session 13. Jan 13 20:59:05.032778 systemd[1]: Started sshd@11-147.28.180.137:22-147.75.109.163:38966.service - OpenSSH per-connection server daemon (147.75.109.163:38966). Jan 13 20:59:05.066644 sshd[8042]: Accepted publickey for core from 147.75.109.163 port 38966 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:05.067282 sshd-session[8042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:05.069916 systemd-logind[1785]: New session 14 of user core. Jan 13 20:59:05.083483 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:59:05.166987 sshd[8044]: Connection closed by 147.75.109.163 port 38966 Jan 13 20:59:05.167164 sshd-session[8042]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:05.185055 systemd[1]: sshd@11-147.28.180.137:22-147.75.109.163:38966.service: Deactivated successfully. Jan 13 20:59:05.185934 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 13 20:59:05.186717 systemd-logind[1785]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:59:05.187402 systemd[1]: Started sshd@12-147.28.180.137:22-147.75.109.163:38972.service - OpenSSH per-connection server daemon (147.75.109.163:38972). Jan 13 20:59:05.187945 systemd-logind[1785]: Removed session 14. Jan 13 20:59:05.223812 sshd[8065]: Accepted publickey for core from 147.75.109.163 port 38972 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:05.224539 sshd-session[8065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:05.227557 systemd-logind[1785]: New session 15 of user core. Jan 13 20:59:05.244424 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:59:05.367744 sshd[8068]: Connection closed by 147.75.109.163 port 38972 Jan 13 20:59:05.367921 sshd-session[8065]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:05.380017 systemd[1]: sshd@12-147.28.180.137:22-147.75.109.163:38972.service: Deactivated successfully. Jan 13 20:59:05.380918 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:59:05.381628 systemd-logind[1785]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:59:05.382246 systemd[1]: Started sshd@13-147.28.180.137:22-147.75.109.163:38976.service - OpenSSH per-connection server daemon (147.75.109.163:38976). Jan 13 20:59:05.382725 systemd-logind[1785]: Removed session 15. Jan 13 20:59:05.412053 sshd[8090]: Accepted publickey for core from 147.75.109.163 port 38976 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:05.412674 sshd-session[8090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:05.415065 systemd-logind[1785]: New session 16 of user core. Jan 13 20:59:05.439327 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 13 20:59:05.567123 sshd[8093]: Connection closed by 147.75.109.163 port 38976 Jan 13 20:59:05.567330 sshd-session[8090]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:05.568898 systemd[1]: sshd@13-147.28.180.137:22-147.75.109.163:38976.service: Deactivated successfully. Jan 13 20:59:05.569862 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:59:05.570617 systemd-logind[1785]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:59:05.571142 systemd-logind[1785]: Removed session 16. Jan 13 20:59:10.591455 systemd[1]: Started sshd@14-147.28.180.137:22-147.75.109.163:58616.service - OpenSSH per-connection server daemon (147.75.109.163:58616). Jan 13 20:59:10.668884 sshd[8123]: Accepted publickey for core from 147.75.109.163 port 58616 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:10.670805 sshd-session[8123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:10.676900 systemd-logind[1785]: New session 17 of user core. Jan 13 20:59:10.697721 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:59:10.791145 sshd[8125]: Connection closed by 147.75.109.163 port 58616 Jan 13 20:59:10.791332 sshd-session[8123]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:10.792851 systemd[1]: sshd@14-147.28.180.137:22-147.75.109.163:58616.service: Deactivated successfully. Jan 13 20:59:10.793777 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:59:10.794484 systemd-logind[1785]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:59:10.795007 systemd-logind[1785]: Removed session 17. Jan 13 20:59:15.829343 systemd[1]: Started sshd@15-147.28.180.137:22-147.75.109.163:58624.service - OpenSSH per-connection server daemon (147.75.109.163:58624). 
Jan 13 20:59:15.862936 sshd[8150]: Accepted publickey for core from 147.75.109.163 port 58624 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:15.863657 sshd-session[8150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:15.866535 systemd-logind[1785]: New session 18 of user core. Jan 13 20:59:15.886374 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:59:15.972329 sshd[8152]: Connection closed by 147.75.109.163 port 58624 Jan 13 20:59:15.972508 sshd-session[8150]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:15.993659 systemd[1]: sshd@15-147.28.180.137:22-147.75.109.163:58624.service: Deactivated successfully. Jan 13 20:59:15.997782 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:59:16.001225 systemd-logind[1785]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:59:16.004539 systemd[1]: Started sshd@16-147.28.180.137:22-147.75.109.163:58638.service - OpenSSH per-connection server daemon (147.75.109.163:58638). Jan 13 20:59:16.007168 systemd-logind[1785]: Removed session 18. Jan 13 20:59:16.074181 sshd[8176]: Accepted publickey for core from 147.75.109.163 port 58638 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:16.074811 sshd-session[8176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:16.077500 systemd-logind[1785]: New session 19 of user core. Jan 13 20:59:16.093465 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:59:16.215813 sshd[8179]: Connection closed by 147.75.109.163 port 58638 Jan 13 20:59:16.216019 sshd-session[8176]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:16.226899 systemd[1]: sshd@16-147.28.180.137:22-147.75.109.163:58638.service: Deactivated successfully. Jan 13 20:59:16.227765 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 13 20:59:16.228442 systemd-logind[1785]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:59:16.229207 systemd[1]: Started sshd@17-147.28.180.137:22-147.75.109.163:58644.service - OpenSSH per-connection server daemon (147.75.109.163:58644). Jan 13 20:59:16.229793 systemd-logind[1785]: Removed session 19. Jan 13 20:59:16.273646 sshd[8201]: Accepted publickey for core from 147.75.109.163 port 58644 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:16.274445 sshd-session[8201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:16.277498 systemd-logind[1785]: New session 20 of user core. Jan 13 20:59:16.293353 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:59:17.531785 sshd[8203]: Connection closed by 147.75.109.163 port 58644 Jan 13 20:59:17.531968 sshd-session[8201]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:17.540006 systemd[1]: sshd@17-147.28.180.137:22-147.75.109.163:58644.service: Deactivated successfully. Jan 13 20:59:17.540934 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:59:17.541598 systemd-logind[1785]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:59:17.542202 systemd[1]: Started sshd@18-147.28.180.137:22-147.75.109.163:55184.service - OpenSSH per-connection server daemon (147.75.109.163:55184). Jan 13 20:59:17.542627 systemd-logind[1785]: Removed session 20. Jan 13 20:59:17.571815 sshd[8232]: Accepted publickey for core from 147.75.109.163 port 55184 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:17.572538 sshd-session[8232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:17.575213 systemd-logind[1785]: New session 21 of user core. Jan 13 20:59:17.597441 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 13 20:59:17.797112 sshd[8237]: Connection closed by 147.75.109.163 port 55184 Jan 13 20:59:17.797336 sshd-session[8232]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:17.810443 systemd[1]: sshd@18-147.28.180.137:22-147.75.109.163:55184.service: Deactivated successfully. Jan 13 20:59:17.812195 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:59:17.813631 systemd-logind[1785]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:59:17.815020 systemd[1]: Started sshd@19-147.28.180.137:22-147.75.109.163:55188.service - OpenSSH per-connection server daemon (147.75.109.163:55188). Jan 13 20:59:17.816005 systemd-logind[1785]: Removed session 21. Jan 13 20:59:17.887273 sshd[8259]: Accepted publickey for core from 147.75.109.163 port 55188 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:17.888492 sshd-session[8259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:17.892755 systemd-logind[1785]: New session 22 of user core. Jan 13 20:59:17.907382 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:59:18.036642 sshd[8262]: Connection closed by 147.75.109.163 port 55188 Jan 13 20:59:18.036852 sshd-session[8259]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:18.038408 systemd[1]: sshd@19-147.28.180.137:22-147.75.109.163:55188.service: Deactivated successfully. Jan 13 20:59:18.039317 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:59:18.039967 systemd-logind[1785]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:59:18.040533 systemd-logind[1785]: Removed session 22. Jan 13 20:59:23.087480 systemd[1]: Started sshd@20-147.28.180.137:22-147.75.109.163:55196.service - OpenSSH per-connection server daemon (147.75.109.163:55196). 
Jan 13 20:59:23.115776 sshd[8290]: Accepted publickey for core from 147.75.109.163 port 55196 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:23.116420 sshd-session[8290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:23.118874 systemd-logind[1785]: New session 23 of user core. Jan 13 20:59:23.119466 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:59:23.208947 sshd[8292]: Connection closed by 147.75.109.163 port 55196 Jan 13 20:59:23.209112 sshd-session[8290]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:23.211107 systemd[1]: sshd@20-147.28.180.137:22-147.75.109.163:55196.service: Deactivated successfully. Jan 13 20:59:23.211991 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:59:23.212448 systemd-logind[1785]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:59:23.213069 systemd-logind[1785]: Removed session 23. Jan 13 20:59:28.219849 systemd[1]: Started sshd@21-147.28.180.137:22-147.75.109.163:34212.service - OpenSSH per-connection server daemon (147.75.109.163:34212). Jan 13 20:59:28.252875 sshd[8319]: Accepted publickey for core from 147.75.109.163 port 34212 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:28.253508 sshd-session[8319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:28.255866 systemd-logind[1785]: New session 24 of user core. Jan 13 20:59:28.280399 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:59:28.366745 sshd[8321]: Connection closed by 147.75.109.163 port 34212 Jan 13 20:59:28.366921 sshd-session[8319]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:28.368576 systemd[1]: sshd@21-147.28.180.137:22-147.75.109.163:34212.service: Deactivated successfully. Jan 13 20:59:28.369563 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 13 20:59:28.370332 systemd-logind[1785]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:59:28.370974 systemd-logind[1785]: Removed session 24. Jan 13 20:59:33.396479 systemd[1]: Started sshd@22-147.28.180.137:22-147.75.109.163:34222.service - OpenSSH per-connection server daemon (147.75.109.163:34222). Jan 13 20:59:33.429547 sshd[8398]: Accepted publickey for core from 147.75.109.163 port 34222 ssh2: RSA SHA256:e+MgUisS161AGA1r9VNVQu08608PiJAdYjzTmlTIRrQ Jan 13 20:59:33.430295 sshd-session[8398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:59:33.432871 systemd-logind[1785]: New session 25 of user core. Jan 13 20:59:33.444357 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:59:33.527130 sshd[8400]: Connection closed by 147.75.109.163 port 34222 Jan 13 20:59:33.527341 sshd-session[8398]: pam_unix(sshd:session): session closed for user core Jan 13 20:59:33.529024 systemd[1]: sshd@22-147.28.180.137:22-147.75.109.163:34222.service: Deactivated successfully. Jan 13 20:59:33.530071 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:59:33.530870 systemd-logind[1785]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:59:33.531629 systemd-logind[1785]: Removed session 25.