Feb 13 19:44:46.471674 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05
Feb 13 19:44:46.471688 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:44:46.471694 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:44:46.471700 kernel: BIOS-provided physical RAM map:
Feb 13 19:44:46.471704 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 19:44:46.471708 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 19:44:46.471713 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 19:44:46.471717 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 19:44:46.471721 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 19:44:46.471725 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2afff] usable
Feb 13 19:44:46.471730 kernel: BIOS-e820: [mem 0x0000000081b2b000-0x0000000081b2bfff] ACPI NVS
Feb 13 19:44:46.471734 kernel: BIOS-e820: [mem 0x0000000081b2c000-0x0000000081b2cfff] reserved
Feb 13 19:44:46.471739 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x000000008afccfff] usable
Feb 13 19:44:46.471743 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 19:44:46.471749 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 19:44:46.471753 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 19:44:46.471759 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 19:44:46.471764 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 19:44:46.471768 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 19:44:46.471773 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:44:46.471778 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 19:44:46.471782 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 19:44:46.471790 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 19:44:46.471813 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 19:44:46.471818 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 19:44:46.471823 kernel: NX (Execute Disable) protection: active
Feb 13 19:44:46.471827 kernel: APIC: Static calls initialized
Feb 13 19:44:46.471846 kernel: SMBIOS 3.2.1 present.
Feb 13 19:44:46.471852 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 19:44:46.471856 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 19:44:46.471861 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 19:44:46.471866 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:44:46.471871 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:44:46.471876 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 19:44:46.471881 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Feb 13 19:44:46.471886 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:44:46.471891 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 19:44:46.471896 kernel: Using GB pages for direct mapping
Feb 13 19:44:46.471901 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:44:46.471907 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 19:44:46.471913 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 19:44:46.471918 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 19:44:46.471924 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 19:44:46.471929 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 19:44:46.471935 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 19:44:46.471940 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 19:44:46.471945 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 19:44:46.471950 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 19:44:46.471955 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 19:44:46.471960 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 19:44:46.471965 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 19:44:46.471970 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 19:44:46.471976 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 19:44:46.471981 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 19:44:46.471986 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 19:44:46.471991 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 19:44:46.471997 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 19:44:46.472002 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 19:44:46.472007 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 19:44:46.472012 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 19:44:46.472018 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 19:44:46.472023 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 19:44:46.472028 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:44:46.472033 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 19:44:46.472038 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 19:44:46.472044 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 19:44:46.472049 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 19:44:46.472054 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 19:44:46.472059 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 19:44:46.472065 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 19:44:46.472070 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 19:44:46.472075 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 19:44:46.472080 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 19:44:46.472085 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 19:44:46.472091 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 19:44:46.472096 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 19:44:46.472101 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 19:44:46.472107 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 19:44:46.472112 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 19:44:46.472117 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 19:44:46.472122 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 19:44:46.472127 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 19:44:46.472132 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 19:44:46.472137 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 19:44:46.472142 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 19:44:46.472147 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 19:44:46.472152 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 19:44:46.472158 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 19:44:46.472163 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 19:44:46.472168 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 19:44:46.472174 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 19:44:46.472179 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 19:44:46.472184 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 19:44:46.472189 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 19:44:46.472194 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 19:44:46.472199 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 19:44:46.472205 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 19:44:46.472211 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 19:44:46.472216 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 19:44:46.472221 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 19:44:46.472226 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 19:44:46.472231 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 19:44:46.472236 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 19:44:46.472241 kernel: No NUMA configuration found
Feb 13 19:44:46.472246 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 19:44:46.472252 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 19:44:46.472257 kernel: Zone ranges:
Feb 13 19:44:46.472263 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:44:46.472268 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 19:44:46.472273 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Feb 13 19:44:46.472278 kernel: Movable zone start for each node
Feb 13 19:44:46.472283 kernel: Early memory node ranges
Feb 13 19:44:46.472288 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 19:44:46.472293 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 19:44:46.472298 kernel:   node   0: [mem 0x0000000040400000-0x0000000081b2afff]
Feb 13 19:44:46.472304 kernel:   node   0: [mem 0x0000000081b2d000-0x000000008afccfff]
Feb 13 19:44:46.472309 kernel:   node   0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 19:44:46.472315 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 19:44:46.472323 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 19:44:46.472329 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 19:44:46.472335 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:44:46.472340 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 19:44:46.472346 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 19:44:46.472352 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 19:44:46.472357 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 19:44:46.472362 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 19:44:46.472368 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 19:44:46.472373 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 19:44:46.472379 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 19:44:46.472384 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 19:44:46.472390 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 19:44:46.472396 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 19:44:46.472401 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 19:44:46.472407 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 19:44:46.472412 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 19:44:46.472417 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 19:44:46.472423 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 19:44:46.472428 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 19:44:46.472433 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 19:44:46.472439 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 19:44:46.472444 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 19:44:46.472450 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 19:44:46.472455 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 19:44:46.472461 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 19:44:46.472466 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 19:44:46.472472 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 19:44:46.472477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:44:46.472483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:44:46.472488 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:44:46.472493 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:44:46.472500 kernel: TSC deadline timer available
Feb 13 19:44:46.472505 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 19:44:46.472511 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 19:44:46.472516 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 19:44:46.472522 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:44:46.472528 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 19:44:46.472533 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 19:44:46.472538 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 19:44:46.472544 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 19:44:46.472550 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:44:46.472556 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:44:46.472562 kernel: random: crng init done
Feb 13 19:44:46.472567 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 19:44:46.472572 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 19:44:46.472578 kernel: Fallback order for Node 0: 0
Feb 13 19:44:46.472583 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 19:44:46.472588 kernel: Policy zone: Normal
Feb 13 19:44:46.472595 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:44:46.472600 kernel: software IO TLB: area num 16.
Feb 13 19:44:46.472606 kernel: Memory: 32718252K/33452980K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 734468K reserved, 0K cma-reserved)
Feb 13 19:44:46.472612 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 19:44:46.472617 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:44:46.472622 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:44:46.472628 kernel: Dynamic Preempt: voluntary
Feb 13 19:44:46.472633 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:44:46.472639 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:44:46.472646 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 19:44:46.472651 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:44:46.472657 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:44:46.472662 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:44:46.472667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:44:46.472673 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 19:44:46.472678 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 19:44:46.472684 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:44:46.472689 kernel: Console: colour VGA+ 80x25
Feb 13 19:44:46.472696 kernel: printk: console [tty0] enabled
Feb 13 19:44:46.472701 kernel: printk: console [ttyS1] enabled
Feb 13 19:44:46.472706 kernel: ACPI: Core revision 20230628
Feb 13 19:44:46.472712 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 19:44:46.472717 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:44:46.472723 kernel: DMAR: Host address width 39
Feb 13 19:44:46.472728 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 19:44:46.472734 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 19:44:46.472739 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 19:44:46.472746 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 19:44:46.472751 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 19:44:46.472757 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 19:44:46.472762 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 19:44:46.472768 kernel: x2apic enabled
Feb 13 19:44:46.472773 kernel: APIC: Switched APIC routing to: cluster x2apic
Feb 13 19:44:46.472779 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 19:44:46.472784 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 19:44:46.472808 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 19:44:46.472815 kernel: process: using mwait in idle threads
Feb 13 19:44:46.472835 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:44:46.472840 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:44:46.472846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:44:46.472851 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 19:44:46.472856 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 19:44:46.472862 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Feb 13 19:44:46.472867 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:44:46.472872 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 19:44:46.472878 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 19:44:46.472883 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:44:46.472889 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:44:46.472895 kernel: TAA: Mitigation: TSX disabled
Feb 13 19:44:46.472900 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 19:44:46.472905 kernel: SRBDS: Mitigation: Microcode
Feb 13 19:44:46.472911 kernel: GDS: Mitigation: Microcode
Feb 13 19:44:46.472916 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:44:46.472921 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:44:46.472927 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:44:46.472932 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 19:44:46.472937 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 19:44:46.472943 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:44:46.472949 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 19:44:46.472955 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 19:44:46.472960 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 19:44:46.472965 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:44:46.472971 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:44:46.472976 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:44:46.472981 kernel: landlock: Up and running.
Feb 13 19:44:46.472987 kernel: SELinux: Initializing.
Feb 13 19:44:46.472992 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:44:46.472997 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:44:46.473003 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 19:44:46.473008 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 19:44:46.473015 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 19:44:46.473020 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 19:44:46.473026 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 19:44:46.473031 kernel: ... version:                4
Feb 13 19:44:46.473037 kernel: ... bit width:              48
Feb 13 19:44:46.473042 kernel: ... generic registers:      4
Feb 13 19:44:46.473047 kernel: ... value mask:             0000ffffffffffff
Feb 13 19:44:46.473053 kernel: ... max period:             00007fffffffffff
Feb 13 19:44:46.473058 kernel: ... fixed-purpose events:   3
Feb 13 19:44:46.473064 kernel: ... event mask:             000000070000000f
Feb 13 19:44:46.473070 kernel: signal: max sigframe size: 2032
Feb 13 19:44:46.473075 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 19:44:46.473081 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:44:46.473086 kernel: rcu: 	Max phase no-delay instances is 400.
Feb 13 19:44:46.473092 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 19:44:46.473097 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:44:46.473103 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:44:46.473108 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Feb 13 19:44:46.473115 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:44:46.473120 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 19:44:46.473126 kernel: smpboot: Max logical packages: 1
Feb 13 19:44:46.473131 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 19:44:46.473137 kernel: devtmpfs: initialized
Feb 13 19:44:46.473142 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:44:46.473148 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2b000-0x81b2bfff] (4096 bytes)
Feb 13 19:44:46.473153 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 19:44:46.473159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:44:46.473165 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 19:44:46.473170 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:44:46.473176 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:44:46.473181 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:44:46.473187 kernel: audit: type=2000 audit(1739475881.042:1): state=initialized audit_enabled=0 res=1
Feb 13 19:44:46.473192 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:44:46.473198 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:44:46.473203 kernel: cpuidle: using governor menu
Feb 13 19:44:46.473209 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:44:46.473215 kernel: dca service started, version 1.12.1
Feb 13 19:44:46.473220 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 19:44:46.473226 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:44:46.473231 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 19:44:46.473236 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:44:46.473242 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:44:46.473247 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:44:46.473253 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:44:46.473259 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:44:46.473264 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:44:46.473270 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:44:46.473275 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:44:46.473281 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:44:46.473286 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 19:44:46.473292 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 19:44:46.473297 kernel: ACPI: SSDT 0xFFFF8986816BB400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 19:44:46.473303 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 19:44:46.473309 kernel: ACPI: SSDT 0xFFFF8986816B7000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 19:44:46.473314 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 19:44:46.473320 kernel: ACPI: SSDT 0xFFFF89868169A000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 19:44:46.473325 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 19:44:46.473331 kernel: ACPI: SSDT 0xFFFF8986816B4800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 19:44:46.473336 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 19:44:46.473341 kernel: ACPI: SSDT 0xFFFF8986816C7000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 19:44:46.473347 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 19:44:46.473352 kernel: ACPI: SSDT 0xFFFF898680FA5C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 19:44:46.473358 kernel: ACPI: _OSC evaluated successfully for all CPUs
Feb 13 19:44:46.473364 kernel: ACPI: Interpreter enabled
Feb 13 19:44:46.473369 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:44:46.473375 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:44:46.473380 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 19:44:46.473385 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 19:44:46.473391 kernel: HEST: Table parsing has been initialized.
Feb 13 19:44:46.473396 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 19:44:46.473402 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:44:46.473407 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:44:46.473413 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 19:44:46.473419 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Feb 13 19:44:46.473425 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Feb 13 19:44:46.473431 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Feb 13 19:44:46.473436 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Feb 13 19:44:46.473442 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Feb 13 19:44:46.473447 kernel: ACPI: \_TZ_.FN00: New power resource
Feb 13 19:44:46.473452 kernel: ACPI: \_TZ_.FN01: New power resource
Feb 13 19:44:46.473458 kernel: ACPI: \_TZ_.FN02: New power resource
Feb 13 19:44:46.473464 kernel: ACPI: \_TZ_.FN03: New power resource
Feb 13 19:44:46.473470 kernel: ACPI: \_TZ_.FN04: New power resource
Feb 13 19:44:46.473475 kernel: ACPI: \PIN_: New power resource
Feb 13 19:44:46.473481 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 19:44:46.473555 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:44:46.473609 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 19:44:46.473657 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 19:44:46.473667 kernel: PCI host bridge to bus 0000:00
Feb 13 19:44:46.473717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:44:46.473762 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:44:46.473828 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:44:46.473884 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 19:44:46.473926 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 19:44:46.473969 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 19:44:46.474033 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 19:44:46.474091 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 19:44:46.474141 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 19:44:46.474196 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 19:44:46.474244 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 19:44:46.474297 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 19:44:46.474348 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 19:44:46.474402 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 19:44:46.474450 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 19:44:46.474498 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 19:44:46.474551 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 19:44:46.474600 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 19:44:46.474651 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 19:44:46.474703 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 19:44:46.474752 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 19:44:46.474827 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 19:44:46.474892 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 19:44:46.474945 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 19:44:46.474996 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 19:44:46.475046 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 19:44:46.475106 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 19:44:46.475156 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 19:44:46.475205 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 19:44:46.475258 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 19:44:46.475306 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 19:44:46.475356 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 19:44:46.475408 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 19:44:46.475458 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 19:44:46.475505 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 19:44:46.475554 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 19:44:46.475601 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 19:44:46.475650 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 19:44:46.475700 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 19:44:46.475748 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 19:44:46.475825 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 19:44:46.475892 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 19:44:46.475948 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 19:44:46.476002 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 19:44:46.476055 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 19:44:46.476105 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 19:44:46.476157 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 19:44:46.476207 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 19:44:46.476261 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 19:44:46.476311 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 19:44:46.476364 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 19:44:46.476412 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 19:44:46.476465 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 19:44:46.476520 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 19:44:46.476571 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 19:44:46.476619 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 19:44:46.476673 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 19:44:46.476721 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 19:44:46.476779 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 19:44:46.476868 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 19:44:46.476918 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 19:44:46.476971 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 19:44:46.477020 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 19:44:46.477071 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 19:44:46.477127 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 19:44:46.477178 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 19:44:46.477227 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 19:44:46.477276 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 19:44:46.477327 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 19:44:46.477377 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 19:44:46.477428 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 19:44:46.477477 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 19:44:46.477526 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 19:44:46.477574 
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 19:44:46.477628 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Feb 13 19:44:46.477681 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 19:44:46.477731 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 19:44:46.477780 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 19:44:46.477866 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 19:44:46.477915 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.477964 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 19:44:46.478014 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 19:44:46.478065 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 19:44:46.478121 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Feb 13 19:44:46.478172 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 19:44:46.478224 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 19:44:46.478274 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 19:44:46.478323 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 19:44:46.478373 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.478423 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 19:44:46.478475 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 19:44:46.478524 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 19:44:46.478574 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 19:44:46.478630 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 19:44:46.478681 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 19:44:46.478731 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 19:44:46.478898 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:44:46.478950 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Feb 13 19:44:46.479000 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 19:44:46.479078 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 19:44:46.479132 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 19:44:46.479189 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 19:44:46.479241 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 19:44:46.479294 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 19:44:46.479348 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 19:44:46.479400 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 19:44:46.479452 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 19:44:46.479503 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:44:46.479555 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 19:44:46.479604 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 19:44:46.479654 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 19:44:46.479663 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 19:44:46.479671 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 19:44:46.479677 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 19:44:46.479683 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 19:44:46.479689 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 19:44:46.479695 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 19:44:46.479701 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 19:44:46.479707 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 19:44:46.479713 kernel: iommu: Default domain type: Translated Feb 13 19:44:46.479719 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:44:46.479725 kernel: PCI: Using ACPI for IRQ 
routing Feb 13 19:44:46.479731 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:44:46.479737 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 19:44:46.479743 kernel: e820: reserve RAM buffer [mem 0x81b2b000-0x83ffffff] Feb 13 19:44:46.479749 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 19:44:46.479754 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 19:44:46.479760 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 19:44:46.479765 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 19:44:46.479839 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 19:44:46.479908 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 19:44:46.479963 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 19:44:46.479971 kernel: vgaarb: loaded Feb 13 19:44:46.479977 kernel: clocksource: Switched to clocksource tsc-early Feb 13 19:44:46.479983 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:44:46.479989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:44:46.479995 kernel: pnp: PnP ACPI init Feb 13 19:44:46.480113 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 19:44:46.480164 kernel: pnp 00:02: [dma 0 disabled] Feb 13 19:44:46.480212 kernel: pnp 00:03: [dma 0 disabled] Feb 13 19:44:46.480263 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 19:44:46.480308 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 19:44:46.480357 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 19:44:46.480405 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 19:44:46.480452 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 19:44:46.480497 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 19:44:46.480543 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Feb 13 19:44:46.480589 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 19:44:46.480635 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 19:44:46.480680 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 19:44:46.480726 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 19:44:46.480777 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 19:44:46.480879 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 19:44:46.480924 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 19:44:46.480968 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 19:44:46.481012 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 19:44:46.481056 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 19:44:46.481100 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 19:44:46.481151 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 19:44:46.481160 kernel: pnp: PnP ACPI: found 10 devices Feb 13 19:44:46.481166 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:44:46.481172 kernel: NET: Registered PF_INET protocol family Feb 13 19:44:46.481178 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:44:46.481184 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 19:44:46.481190 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:44:46.481196 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:44:46.481203 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 19:44:46.481210 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 19:44:46.481216 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 19:44:46.481221 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 19:44:46.481227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:44:46.481233 kernel: NET: Registered PF_XDP protocol family Feb 13 19:44:46.481282 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 19:44:46.481332 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 19:44:46.481381 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 19:44:46.481506 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 19:44:46.481555 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 19:44:46.481606 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 19:44:46.481657 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 19:44:46.481706 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 19:44:46.481756 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 19:44:46.481828 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 19:44:46.481891 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 19:44:46.481943 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 19:44:46.481991 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 19:44:46.482039 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 19:44:46.482088 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 19:44:46.482140 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 19:44:46.482189 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 19:44:46.482237 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 19:44:46.482287 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Feb 13 19:44:46.482337 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 19:44:46.482386 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 19:44:46.482435 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 19:44:46.482482 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 19:44:46.482531 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 19:44:46.482578 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 19:44:46.482622 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:44:46.482667 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:44:46.482711 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:44:46.482754 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 19:44:46.482913 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 19:44:46.482964 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 19:44:46.483010 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 19:44:46.483063 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 19:44:46.483109 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 19:44:46.483158 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 19:44:46.483203 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 19:44:46.483252 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 19:44:46.483297 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 19:44:46.483348 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 19:44:46.483395 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 19:44:46.483403 kernel: PCI: CLS 64 bytes, default 64 Feb 13 19:44:46.483409 kernel: DMAR: No ATSR found Feb 13 19:44:46.483415 kernel: DMAR: No SATC 
found Feb 13 19:44:46.483421 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 19:44:46.483470 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 19:44:46.483519 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 19:44:46.483571 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 19:44:46.483619 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 19:44:46.483668 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 19:44:46.483716 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 19:44:46.483764 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 19:44:46.483836 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 19:44:46.483899 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 19:44:46.483947 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 19:44:46.483999 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 19:44:46.484048 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 19:44:46.484097 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 19:44:46.484145 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 19:44:46.484262 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 19:44:46.484311 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 19:44:46.484358 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 19:44:46.484407 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 19:44:46.484457 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 19:44:46.484506 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 19:44:46.484554 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 19:44:46.484604 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 19:44:46.484653 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 19:44:46.484703 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 19:44:46.484752 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 19:44:46.484826 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 
19:44:46.484894 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 19:44:46.484903 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 19:44:46.484910 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 19:44:46.484915 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 19:44:46.484921 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 19:44:46.484927 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 19:44:46.484933 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 19:44:46.484939 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 19:44:46.484989 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 19:44:46.485000 kernel: Initialise system trusted keyrings Feb 13 19:44:46.485005 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 19:44:46.485011 kernel: Key type asymmetric registered Feb 13 19:44:46.485017 kernel: Asymmetric key parser 'x509' registered Feb 13 19:44:46.485023 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:44:46.485029 kernel: io scheduler mq-deadline registered Feb 13 19:44:46.485035 kernel: io scheduler kyber registered Feb 13 19:44:46.485040 kernel: io scheduler bfq registered Feb 13 19:44:46.485089 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 19:44:46.485139 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 19:44:46.485188 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 19:44:46.485235 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 19:44:46.485283 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 19:44:46.485331 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 19:44:46.485387 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 19:44:46.485397 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Feb 13 19:44:46.485404 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 19:44:46.485411 kernel: pstore: Using crash dump compression: deflate Feb 13 19:44:46.485416 kernel: pstore: Registered erst as persistent store backend Feb 13 19:44:46.485422 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:44:46.485428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:44:46.485434 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:44:46.485440 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 19:44:46.485446 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 19:44:46.485495 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 19:44:46.485505 kernel: i8042: PNP: No PS/2 controller found. Feb 13 19:44:46.485551 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 19:44:46.485639 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 19:44:46.485683 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T19:44:45 UTC (1739475885) Feb 13 19:44:46.485728 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 19:44:46.485737 kernel: intel_pstate: Intel P-state driver initializing Feb 13 19:44:46.485743 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 19:44:46.485751 kernel: intel_pstate: HWP enabled Feb 13 19:44:46.485757 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:44:46.485763 kernel: Segment Routing with IPv6 Feb 13 19:44:46.485768 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:44:46.485774 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:44:46.485780 kernel: Key type dns_resolver registered Feb 13 19:44:46.485788 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 13 19:44:46.485794 kernel: IPI shorthand broadcast: enabled Feb 13 19:44:46.485818 kernel: sched_clock: Marking stable (2490190132, 1448772286)->(4502213957, -563251539) Feb 13 19:44:46.485826 kernel: registered taskstats version 1 Feb 13 19:44:46.485832 kernel: Loading compiled-in X.509 certificates Feb 13 19:44:46.485851 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 19:44:46.485857 kernel: Key type .fscrypt registered Feb 13 19:44:46.485863 kernel: Key type fscrypt-provisioning registered Feb 13 19:44:46.485869 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:44:46.485875 kernel: ima: No architecture policies found Feb 13 19:44:46.485881 kernel: clk: Disabling unused clocks Feb 13 19:44:46.485886 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 19:44:46.485893 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:44:46.485899 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 19:44:46.485905 kernel: Run /init as init process Feb 13 19:44:46.485911 kernel: with arguments: Feb 13 19:44:46.485917 kernel: /init Feb 13 19:44:46.485922 kernel: with environment: Feb 13 19:44:46.485928 kernel: HOME=/ Feb 13 19:44:46.485934 kernel: TERM=linux Feb 13 19:44:46.485940 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:44:46.485948 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:44:46.485955 systemd[1]: Detected architecture x86-64. Feb 13 19:44:46.485961 systemd[1]: Running in initrd. Feb 13 19:44:46.485967 systemd[1]: No hostname configured, using default hostname. Feb 13 19:44:46.485973 systemd[1]: Hostname set to . 
Feb 13 19:44:46.485979 systemd[1]: Initializing machine ID from random generator. Feb 13 19:44:46.485985 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:44:46.485992 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:44:46.485998 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:44:46.486004 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:44:46.486011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:44:46.486017 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:44:46.486023 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:44:46.486029 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:44:46.486037 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:44:46.486043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:44:46.486049 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:44:46.486055 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:44:46.486061 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:44:46.486068 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:44:46.486074 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:44:46.486080 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:44:46.486087 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:44:46.486093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Feb 13 19:44:46.486099 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:44:46.486106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:44:46.486112 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:44:46.486118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:44:46.486124 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:44:46.486130 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:44:46.486136 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Feb 13 19:44:46.486143 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Feb 13 19:44:46.486149 kernel: clocksource: Switched to clocksource tsc Feb 13 19:44:46.486155 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:44:46.486161 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:44:46.486167 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:44:46.486173 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:44:46.486193 systemd-journald[269]: Collecting audit messages is disabled. Feb 13 19:44:46.486243 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:44:46.486266 systemd-journald[269]: Journal started Feb 13 19:44:46.486280 systemd-journald[269]: Runtime Journal (/run/log/journal/5ce4f4ccc06444bb90dac19f1ef9d6a0) is 8.0M, max 639.9M, 631.9M free. Feb 13 19:44:46.488463 systemd-modules-load[270]: Inserted module 'overlay' Feb 13 19:44:46.505907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:44:46.527832 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 19:44:46.527849 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:44:46.534996 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:44:46.535139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:44:46.535234 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:44:46.536222 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:44:46.540642 systemd-modules-load[270]: Inserted module 'br_netfilter' Feb 13 19:44:46.540790 kernel: Bridge firewalling registered Feb 13 19:44:46.541099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:44:46.561553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:44:46.640920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:44:46.669604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:44:46.693451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:44:46.729226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:44:46.758037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:44:46.768436 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:44:46.768617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:44:46.769633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:44:46.774571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:44:46.779051 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:44:46.790561 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:44:46.793247 systemd-resolved[298]: Positive Trust Anchors: Feb 13 19:44:46.793253 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:44:46.793291 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:44:46.795553 systemd-resolved[298]: Defaulting to hostname 'linux'. Feb 13 19:44:46.801091 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:44:46.818144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:44:46.942915 dracut-cmdline[311]: dracut-dracut-053 Feb 13 19:44:46.942915 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:44:46.998866 kernel: SCSI subsystem initialized Feb 13 19:44:47.005792 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 19:44:47.018831 kernel: iscsi: registered transport (tcp) Feb 13 19:44:47.040515 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:44:47.040534 kernel: QLogic iSCSI HBA Driver Feb 13 19:44:47.063626 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:44:47.075091 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:44:47.166644 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:44:47.166664 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:44:47.175460 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:44:47.210824 kernel: raid6: avx2x4 gen() 46473 MB/s Feb 13 19:44:47.231856 kernel: raid6: avx2x2 gen() 53056 MB/s Feb 13 19:44:47.257938 kernel: raid6: avx2x1 gen() 44793 MB/s Feb 13 19:44:47.257956 kernel: raid6: using algorithm avx2x2 gen() 53056 MB/s Feb 13 19:44:47.285041 kernel: raid6: .... xor() 32197 MB/s, rmw enabled Feb 13 19:44:47.285059 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:44:47.304814 kernel: xor: automatically using best checksumming function avx Feb 13 19:44:47.402822 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:44:47.408379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:44:47.442104 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:44:47.448824 systemd-udevd[499]: Using default interface naming scheme 'v255'. Feb 13 19:44:47.451255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:44:47.488187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:44:47.550858 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Feb 13 19:44:47.622510 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 19:44:47.650226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:44:47.752240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:44:47.776023 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 19:44:47.776073 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 19:44:47.782792 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:44:47.796736 kernel: ACPI: bus type USB registered
Feb 13 19:44:47.796880 kernel: usbcore: registered new interface driver usbfs
Feb 13 19:44:47.803426 kernel: usbcore: registered new interface driver hub
Feb 13 19:44:47.808799 kernel: usbcore: registered new device driver usb
Feb 13 19:44:47.814792 kernel: PTP clock support registered
Feb 13 19:44:47.814808 kernel: libata version 3.00 loaded.
Feb 13 19:44:47.816968 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:44:47.975677 kernel: ahci 0000:00:17.0: version 3.0
Feb 13 19:44:47.975784 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:44:47.975797 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Feb 13 19:44:47.975867 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Feb 13 19:44:47.975932 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:44:47.975941 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 13 19:44:47.975948 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Feb 13 19:44:47.975956 kernel: scsi host0: ahci
Feb 13 19:44:47.976020 kernel: scsi host1: ahci
Feb 13 19:44:47.976083 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 19:44:47.995911 kernel: scsi host2: ahci
Feb 13 19:44:47.995985 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Feb 13 19:44:47.996053 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Feb 13 19:44:47.996119 kernel: scsi host3: ahci
Feb 13 19:44:47.996184 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 19:44:47.996247 kernel: pps pps0: new PPS source ptp0
Feb 13 19:44:47.996314 kernel: scsi host4: ahci
Feb 13 19:44:47.996376 kernel: scsi host5: ahci
Feb 13 19:44:47.996435 kernel: scsi host6: ahci
Feb 13 19:44:47.996494 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Feb 13 19:44:47.996503 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Feb 13 19:44:47.996511 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Feb 13 19:44:47.996518 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Feb 13 19:44:47.996528 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Feb 13 19:44:47.996535 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Feb 13 19:44:47.996542 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Feb 13 19:44:47.996550 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Feb 13 19:44:47.996614 kernel: igb 0000:03:00.0: added PHC on eth0
Feb 13 19:44:47.996682 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Feb 13 19:44:47.996744 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 19:44:47.996814 kernel: hub 1-0:1.0: USB hub found
Feb 13 19:44:47.996888 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:44
Feb 13 19:44:47.996953 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Feb 13 19:44:47.997016 kernel: hub 1-0:1.0: 16 ports detected
Feb 13 19:44:47.997076 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 19:44:47.997138 kernel: hub 2-0:1.0: USB hub found
Feb 13 19:44:47.997206 kernel: pps pps1: new PPS source ptp1
Feb 13 19:44:47.997264 kernel: hub 2-0:1.0: 10 ports detected
Feb 13 19:44:47.997326 kernel: igb 0000:04:00.0: added PHC on eth1
Feb 13 19:44:48.085632 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 19:44:48.085740 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:45
Feb 13 19:44:48.085852 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Feb 13 19:44:48.085951 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 19:44:47.853028 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:44:48.114832 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Feb 13 19:44:48.581528 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 19:44:48.581611 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Feb 13 19:44:48.757595 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:48.757607 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:48.757615 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:48.757622 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 19:44:48.757630 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 19:44:48.757641 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:48.757648 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:48.757656 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 13 19:44:48.757663 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 13 19:44:48.757671 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 19:44:48.757678 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 19:44:48.757686 kernel: ata1.00: Features: NCQ-prio
Feb 13 19:44:48.757693 kernel: ata2.00: Features: NCQ-prio
Feb 13 19:44:48.757700 kernel: ata1.00: configured for UDMA/133
Feb 13 19:44:48.757708 kernel: ata2.00: configured for UDMA/133
Feb 13 19:44:48.757716 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 13 19:44:48.757814 kernel: hub 1-14:1.0: USB hub found
Feb 13 19:44:48.757901 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 13 19:44:48.757983 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 13 19:44:48.758095 kernel: hub 1-14:1.0: 4 ports detected
Feb 13 19:44:48.758176 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Feb 13 19:44:48.758249 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Feb 13 19:44:48.758329 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 19:44:48.758338 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 19:44:48.758345 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 19:44:48.758411 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 19:44:48.758485 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 19:44:48.758549 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Feb 13 19:44:48.758611 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Feb 13 19:44:48.758686 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 19:44:48.758749 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Feb 13 19:44:48.758816 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Feb 13 19:44:48.758885 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 19:44:48.758953 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 19:44:48.759015 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Feb 13 19:44:48.759074 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Feb 13 19:44:48.759149 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 19:44:48.759158 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 19:44:48.759165 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Feb 13 19:44:48.759227 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:44:48.759236 kernel: GPT:9289727 != 937703087
Feb 13 19:44:48.759243 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:44:48.759250 kernel: GPT:9289727 != 937703087
Feb 13 19:44:48.759257 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:44:48.759267 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:44:48.759280 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 19:44:48.759348 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Feb 13 19:44:48.759426 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (673)
Feb 13 19:44:48.759436 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (563)
Feb 13 19:44:48.759444 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 19:44:48.759513 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Feb 13 19:44:49.123552 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 19:44:49.124130 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Feb 13 19:44:49.124929 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 19:44:49.125000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:44:49.125062 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 19:44:49.125122 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:44:49.125181 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:44:49.125239 kernel: usbcore: registered new interface driver usbhid
Feb 13 19:44:49.125302 kernel: usbhid: USB HID core driver
Feb 13 19:44:49.125362 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Feb 13 19:44:49.125423 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Feb 13 19:44:49.126018 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Feb 13 19:44:49.126081 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Feb 13 19:44:49.126460 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 13 19:44:49.126824 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Feb 13 19:44:49.127170 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 19:44:48.096602 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:44:49.150252 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Feb 13 19:44:49.150334 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Feb 13 19:44:48.126945 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:44:48.138957 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:44:48.164993 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:44:48.175076 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:44:48.185150 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:44:48.185211 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:49.211957 disk-uuid[705]: Primary Header is updated.
Feb 13 19:44:49.211957 disk-uuid[705]: Secondary Entries is updated.
Feb 13 19:44:49.211957 disk-uuid[705]: Secondary Header is updated.
Feb 13 19:44:48.195939 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:44:48.212510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:44:48.212583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:48.279878 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:48.312939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:48.440922 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Feb 13 19:44:48.569996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:48.608775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Feb 13 19:44:48.625584 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Feb 13 19:44:48.636825 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Feb 13 19:44:48.651606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Feb 13 19:44:48.687918 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:44:48.709301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:44:48.736589 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:49.714687 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 19:44:49.722695 disk-uuid[706]: The operation has completed successfully.
Feb 13 19:44:49.730910 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:44:49.760362 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:44:49.760408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:44:49.802135 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:44:49.828849 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:44:49.828865 sh[744]: Success
Feb 13 19:44:49.864765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:44:49.874709 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:44:49.884425 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:44:49.929702 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9
Feb 13 19:44:49.929722 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:49.939261 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:44:49.946284 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:44:49.952135 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:44:49.964836 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:44:49.966281 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:44:49.975223 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:44:49.987924 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:44:50.018464 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:44:50.087001 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:44:50.087022 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:50.087035 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:44:50.087048 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:44:50.087060 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:44:50.087072 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:44:50.072620 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:44:50.085280 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:44:50.092975 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:44:50.145113 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:44:50.161950 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:44:50.177662 unknown[804]: fetched base config from "system"
Feb 13 19:44:50.175408 ignition[804]: Ignition 2.20.0
Feb 13 19:44:50.177666 unknown[804]: fetched user config from "system"
Feb 13 19:44:50.175412 ignition[804]: Stage: fetch-offline
Feb 13 19:44:50.178508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:44:50.175431 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:50.189203 systemd-networkd[928]: lo: Link UP
Feb 13 19:44:50.175436 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:50.189205 systemd-networkd[928]: lo: Gained carrier
Feb 13 19:44:50.175488 ignition[804]: parsed url from cmdline: ""
Feb 13 19:44:50.191577 systemd-networkd[928]: Enumeration completed
Feb 13 19:44:50.175490 ignition[804]: no config URL provided
Feb 13 19:44:50.192386 systemd-networkd[928]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:50.175492 ignition[804]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:44:50.197958 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:44:50.175514 ignition[804]: parsing config with SHA512: afd9dbabe7e3aa0c235b5a2d30f9c892de9430cb254c3807c21ab4aa66412bdb9d9b1316b130c4b1d91b3c97f9d0ef45973c9c95961b69a809be86005ddb4f65
Feb 13 19:44:50.206300 systemd[1]: Reached target network.target - Network.
Feb 13 19:44:50.177866 ignition[804]: fetch-offline: fetch-offline passed
Feb 13 19:44:50.220678 systemd-networkd[928]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:50.177868 ignition[804]: POST message to Packet Timeline
Feb 13 19:44:50.220959 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:44:50.177870 ignition[804]: POST Status error: resource requires networking
Feb 13 19:44:50.235041 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:44:50.417961 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Feb 13 19:44:50.177910 ignition[804]: Ignition finished successfully
Feb 13 19:44:50.248509 systemd-networkd[928]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:50.244181 ignition[941]: Ignition 2.20.0
Feb 13 19:44:50.414379 systemd-networkd[928]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:50.244185 ignition[941]: Stage: kargs
Feb 13 19:44:50.244296 ignition[941]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:50.244302 ignition[941]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:50.244818 ignition[941]: kargs: kargs passed
Feb 13 19:44:50.244821 ignition[941]: POST message to Packet Timeline
Feb 13 19:44:50.244833 ignition[941]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 19:44:50.245310 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53287->[::1]:53: read: connection refused
Feb 13 19:44:50.446468 ignition[941]: GET https://metadata.packet.net/metadata: attempt #2
Feb 13 19:44:50.447442 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45273->[::1]:53: read: connection refused
Feb 13 19:44:50.630834 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 19:44:50.631558 systemd-networkd[928]: eno1: Link UP
Feb 13 19:44:50.631712 systemd-networkd[928]: eno2: Link UP
Feb 13 19:44:50.631862 systemd-networkd[928]: enp1s0f0np0: Link UP
Feb 13 19:44:50.632032 systemd-networkd[928]: enp1s0f0np0: Gained carrier
Feb 13 19:44:50.645134 systemd-networkd[928]: enp1s0f1np1: Link UP
Feb 13 19:44:50.678147 systemd-networkd[928]: enp1s0f0np0: DHCPv4 address 147.28.180.89/31, gateway 147.28.180.88 acquired from 145.40.83.140
Feb 13 19:44:50.847735 ignition[941]: GET https://metadata.packet.net/metadata: attempt #3
Feb 13 19:44:50.849021 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57142->[::1]:53: read: connection refused
Feb 13 19:44:51.419583 systemd-networkd[928]: enp1s0f1np1: Gained carrier
Feb 13 19:44:51.649452 ignition[941]: GET https://metadata.packet.net/metadata: attempt #4
Feb 13 19:44:51.651124 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55100->[::1]:53: read: connection refused
Feb 13 19:44:51.675311 systemd-networkd[928]: enp1s0f0np0: Gained IPv6LL
Feb 13 19:44:53.251927 ignition[941]: GET https://metadata.packet.net/metadata: attempt #5
Feb 13 19:44:53.253012 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54027->[::1]:53: read: connection refused
Feb 13 19:44:53.275317 systemd-networkd[928]: enp1s0f1np1: Gained IPv6LL
Feb 13 19:44:56.455814 ignition[941]: GET https://metadata.packet.net/metadata: attempt #6
Feb 13 19:44:56.620861 ignition[941]: GET result: OK
Feb 13 19:44:56.962935 ignition[941]: Ignition finished successfully
Feb 13 19:44:56.968288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:44:56.996077 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:44:57.002361 ignition[960]: Ignition 2.20.0
Feb 13 19:44:57.002365 ignition[960]: Stage: disks
Feb 13 19:44:57.002468 ignition[960]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:57.002475 ignition[960]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:57.002982 ignition[960]: disks: disks passed
Feb 13 19:44:57.002985 ignition[960]: POST message to Packet Timeline
Feb 13 19:44:57.002998 ignition[960]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 19:44:57.332095 ignition[960]: GET result: OK
Feb 13 19:44:57.671530 ignition[960]: Ignition finished successfully
Feb 13 19:44:57.674353 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:44:57.690290 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:44:57.698106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:44:57.726960 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:44:57.727097 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:44:57.753360 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:44:57.789171 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:44:57.849737 systemd-fsck[976]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:44:57.861255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:44:57.879012 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:44:57.959506 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:44:57.975041 kernel: EXT4-fs (sda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none.
Feb 13 19:44:57.959749 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:44:57.996995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:44:58.049833 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (986)
Feb 13 19:44:58.049850 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:44:58.049858 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:58.049866 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:44:58.049873 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:44:58.005970 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:44:58.065069 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:44:58.072028 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 19:44:58.094298 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Feb 13 19:44:58.105837 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:44:58.105858 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:44:58.161985 coreos-metadata[1003]: Feb 13 19:44:58.123 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 19:44:58.130988 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:44:58.189961 coreos-metadata[1004]: Feb 13 19:44:58.141 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 19:44:58.151033 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:44:58.185125 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:44:58.229917 initrd-setup-root[1018]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:44:58.241032 initrd-setup-root[1025]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:44:58.251002 initrd-setup-root[1032]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:44:58.260902 initrd-setup-root[1039]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:44:58.275934 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:44:58.295905 coreos-metadata[1003]: Feb 13 19:44:58.277 INFO Fetch successful
Feb 13 19:44:58.300927 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:44:58.334019 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:44:58.334125 coreos-metadata[1003]: Feb 13 19:44:58.314 INFO wrote hostname ci-4186.1.1-a-a8b3a25f31 to /sysroot/etc/hostname
Feb 13 19:44:58.323385 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:44:58.342587 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:44:58.342917 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:44:58.404968 ignition[1110]: INFO : Ignition 2.20.0
Feb 13 19:44:58.404968 ignition[1110]: INFO : Stage: mount
Feb 13 19:44:58.404968 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:58.404968 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:58.404968 ignition[1110]: INFO : mount: mount passed
Feb 13 19:44:58.404968 ignition[1110]: INFO : POST message to Packet Timeline
Feb 13 19:44:58.404968 ignition[1110]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 19:44:58.382058 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:44:58.473874 coreos-metadata[1004]: Feb 13 19:44:58.415 INFO Fetch successful
Feb 13 19:44:58.488943 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 13 19:44:58.489002 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Feb 13 19:44:58.890688 ignition[1110]: INFO : GET result: OK
Feb 13 19:44:59.173281 ignition[1110]: INFO : Ignition finished successfully
Feb 13 19:44:59.174494 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:44:59.204005 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:44:59.214193 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:44:59.269721 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1129)
Feb 13 19:44:59.269744 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:44:59.277903 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:59.283801 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:44:59.298953 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:44:59.298969 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:44:59.300866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:44:59.332873 ignition[1146]: INFO : Ignition 2.20.0
Feb 13 19:44:59.332873 ignition[1146]: INFO : Stage: files
Feb 13 19:44:59.348009 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:59.348009 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:59.348009 ignition[1146]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:44:59.348009 ignition[1146]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:44:59.337072 unknown[1146]: wrote ssh authorized keys file for user: core
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:44:59.733120 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 19:44:59.849383 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:45:00.085243 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:45:00.085243 ignition[1146]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:45:00.116012 ignition[1146]: INFO : files: files passed Feb 13 19:45:00.116012 ignition[1146]: INFO : POST message to Packet Timeline Feb 13 19:45:00.116012 ignition[1146]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 19:45:00.607739 ignition[1146]: INFO : GET result: OK Feb 13 19:45:01.486213 ignition[1146]: INFO : Ignition finished successfully Feb 13 19:45:01.489555 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:45:01.524058 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:45:01.524586 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:45:01.553255 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:45:01.553326 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:45:01.605064 initrd-setup-root-after-ignition[1185]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:45:01.605064 initrd-setup-root-after-ignition[1185]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:45:01.576412 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:45:01.643072 initrd-setup-root-after-ignition[1189]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:45:01.597088 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:45:01.634042 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:45:01.699191 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:45:01.699244 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Feb 13 19:45:01.718191 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:45:01.739018 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:45:01.759204 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:45:01.774904 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:45:01.825973 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:45:01.855257 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:45:01.873759 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:45:01.899082 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:45:01.911119 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:45:01.929160 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:45:01.929325 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:45:01.958528 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:45:01.980406 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:45:01.999397 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:45:02.018511 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:45:02.039412 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:45:02.060418 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:45:02.080403 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:45:02.101457 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:45:02.122435 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Feb 13 19:45:02.142403 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:45:02.160309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:45:02.160702 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:45:02.186618 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:45:02.206434 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:45:02.227280 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:45:02.227728 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:45:02.249296 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:45:02.249693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:45:02.281414 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:45:02.281886 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:45:02.301603 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:45:02.319279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:45:02.319709 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:45:02.340415 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:45:02.359398 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:45:02.378487 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:45:02.378815 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:45:02.398436 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:45:02.398735 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:45:02.421526 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 13 19:45:02.421953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:45:02.533073 ignition[1210]: INFO : Ignition 2.20.0 Feb 13 19:45:02.533073 ignition[1210]: INFO : Stage: umount Feb 13 19:45:02.533073 ignition[1210]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:45:02.533073 ignition[1210]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 19:45:02.533073 ignition[1210]: INFO : umount: umount passed Feb 13 19:45:02.533073 ignition[1210]: INFO : POST message to Packet Timeline Feb 13 19:45:02.533073 ignition[1210]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 19:45:02.441499 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:45:02.441903 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:45:02.459501 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 19:45:02.459908 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 19:45:02.670019 ignition[1210]: INFO : GET result: OK Feb 13 19:45:02.498054 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:45:02.514599 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:45:02.532999 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:45:02.533216 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:45:02.544266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:45:02.544424 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:45:02.604226 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:45:02.606046 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:45:02.606133 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 19:45:02.719689 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:45:02.720029 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:45:03.596599 ignition[1210]: INFO : Ignition finished successfully Feb 13 19:45:03.599784 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:45:03.600207 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:45:03.617318 systemd[1]: Stopped target network.target - Network. Feb 13 19:45:03.625300 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:45:03.625487 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:45:03.650223 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:45:03.650369 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:45:03.668293 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:45:03.668450 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:45:03.676558 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:45:03.676721 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:45:03.704266 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:45:03.704436 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:45:03.712964 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:45:03.727936 systemd-networkd[928]: enp1s0f0np0: DHCPv6 lease lost Feb 13 19:45:03.735003 systemd-networkd[928]: enp1s0f1np1: DHCPv6 lease lost Feb 13 19:45:03.739387 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:45:03.758037 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:45:03.758417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:45:03.777301 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 13 19:45:03.777676 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:45:03.797855 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:45:03.797973 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:45:03.831972 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:45:03.853934 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:45:03.853976 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:45:03.874187 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:45:03.874266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:45:03.893189 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:45:03.893356 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:45:03.913188 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:45:03.913354 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:45:03.934410 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:45:03.956161 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:45:03.956536 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:45:03.985526 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:45:03.985564 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:45:03.993087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:45:03.993109 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:45:04.021104 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 13 19:45:04.021175 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:45:04.051295 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:45:04.051462 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:45:04.080253 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:45:04.080418 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:45:04.139145 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:45:04.379998 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). Feb 13 19:45:04.142219 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:45:04.142369 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:45:04.173880 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:45:04.173988 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:45:04.193943 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:45:04.194048 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:45:04.217955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:45:04.218061 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:45:04.239436 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:45:04.239510 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:45:04.259296 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:45:04.259366 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:45:04.280537 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Feb 13 19:45:04.298050 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:45:04.327469 systemd[1]: Switching root. Feb 13 19:45:04.490958 systemd-journald[269]: Journal stopped Feb 13 19:44:46.471913 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 13 19:44:46.471918 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Feb 13 19:44:46.471924 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 13 19:44:46.471929 kernel: ACPI: FACS 0x000000008C66CF80 000040 Feb 13 19:44:46.471935 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Feb 13 19:44:46.471940 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Feb 13 19:44:46.471945 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 13 19:44:46.471950 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 13 19:44:46.471955 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI.
00000000) Feb 13 19:44:46.471960 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 13 19:44:46.471965 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 13 19:44:46.471970 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 13 19:44:46.471976 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 19:44:46.471981 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 13 19:44:46.471986 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 13 19:44:46.471991 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 19:44:46.471997 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 19:44:46.472002 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 13 19:44:46.472007 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 13 19:44:46.472012 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 19:44:46.472018 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 13 19:44:46.472023 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 13 19:44:46.472028 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Feb 13 19:44:46.472033 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 13 19:44:46.472038 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 13 19:44:46.472044 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 13 19:44:46.472049 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Feb 13 
19:44:46.472054 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 13 19:44:46.472059 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 13 19:44:46.472065 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 13 19:44:46.472070 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 13 19:44:46.472075 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 19:44:46.472080 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Feb 13 19:44:46.472085 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Feb 13 19:44:46.472091 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Feb 13 19:44:46.472096 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Feb 13 19:44:46.472101 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Feb 13 19:44:46.472107 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Feb 13 19:44:46.472112 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Feb 13 19:44:46.472117 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Feb 13 19:44:46.472122 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Feb 13 19:44:46.472127 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Feb 13 19:44:46.472132 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Feb 13 19:44:46.472137 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Feb 13 19:44:46.472142 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Feb 13 19:44:46.472147 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Feb 13 19:44:46.472152 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Feb 13 
19:44:46.472158 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Feb 13 19:44:46.472163 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Feb 13 19:44:46.472168 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Feb 13 19:44:46.472174 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Feb 13 19:44:46.472179 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Feb 13 19:44:46.472184 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Feb 13 19:44:46.472189 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Feb 13 19:44:46.472194 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Feb 13 19:44:46.472199 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Feb 13 19:44:46.472205 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Feb 13 19:44:46.472211 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Feb 13 19:44:46.472216 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Feb 13 19:44:46.472221 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Feb 13 19:44:46.472226 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Feb 13 19:44:46.472231 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Feb 13 19:44:46.472236 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Feb 13 19:44:46.472241 kernel: No NUMA configuration found Feb 13 19:44:46.472246 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Feb 13 19:44:46.472252 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Feb 13 19:44:46.472257 kernel: Zone ranges: Feb 13 19:44:46.472263 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:44:46.472268 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 19:44:46.472273 kernel: 
Normal [mem 0x0000000100000000-0x000000086effffff] Feb 13 19:44:46.472278 kernel: Movable zone start for each node Feb 13 19:44:46.472283 kernel: Early memory node ranges Feb 13 19:44:46.472288 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 19:44:46.472293 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 19:44:46.472298 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2afff] Feb 13 19:44:46.472304 kernel: node 0: [mem 0x0000000081b2d000-0x000000008afccfff] Feb 13 19:44:46.472309 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Feb 13 19:44:46.472315 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Feb 13 19:44:46.472323 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Feb 13 19:44:46.472329 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Feb 13 19:44:46.472335 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:44:46.472340 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 19:44:46.472346 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 19:44:46.472352 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 19:44:46.472357 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Feb 13 19:44:46.472362 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Feb 13 19:44:46.472368 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Feb 13 19:44:46.472373 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Feb 13 19:44:46.472379 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 19:44:46.472384 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 19:44:46.472390 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 19:44:46.472396 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 19:44:46.472401 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 19:44:46.472407 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high 
edge lint[0x1]) Feb 13 19:44:46.472412 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 19:44:46.472417 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 19:44:46.472423 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 19:44:46.472428 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 19:44:46.472433 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 19:44:46.472439 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 19:44:46.472444 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 19:44:46.472450 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 19:44:46.472455 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 19:44:46.472461 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 19:44:46.472466 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 13 19:44:46.472472 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 19:44:46.472477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 19:44:46.472483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:44:46.472488 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:44:46.472493 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 19:44:46.472500 kernel: TSC deadline timer available Feb 13 19:44:46.472505 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 19:44:46.472511 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Feb 13 19:44:46.472516 kernel: Booting paravirtualized kernel on bare hardware Feb 13 19:44:46.472522 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:44:46.472528 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Feb 13 19:44:46.472533 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 
19:44:46.472538 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 13 19:44:46.472544 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 19:44:46.472550 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:44:46.472556 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:44:46.472562 kernel: random: crng init done Feb 13 19:44:46.472567 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 13 19:44:46.472572 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 13 19:44:46.472578 kernel: Fallback order for Node 0: 0 Feb 13 19:44:46.472583 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Feb 13 19:44:46.472588 kernel: Policy zone: Normal Feb 13 19:44:46.472595 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:44:46.472600 kernel: software IO TLB: area num 16. Feb 13 19:44:46.472606 kernel: Memory: 32718252K/33452980K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 734468K reserved, 0K cma-reserved) Feb 13 19:44:46.472612 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 19:44:46.472617 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 19:44:46.472622 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:44:46.472628 kernel: Dynamic Preempt: voluntary Feb 13 19:44:46.472633 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:44:46.472639 kernel: rcu: RCU event tracing is enabled. 
Feb 13 19:44:46.472646 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 19:44:46.472651 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:44:46.472657 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:44:46.472662 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:44:46.472667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:44:46.472673 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 19:44:46.472678 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 13 19:44:46.472684 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:44:46.472689 kernel: Console: colour VGA+ 80x25 Feb 13 19:44:46.472696 kernel: printk: console [tty0] enabled Feb 13 19:44:46.472701 kernel: printk: console [ttyS1] enabled Feb 13 19:44:46.472706 kernel: ACPI: Core revision 20230628 Feb 13 19:44:46.472712 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Feb 13 19:44:46.472717 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:44:46.472723 kernel: DMAR: Host address width 39 Feb 13 19:44:46.472728 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 13 19:44:46.472734 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 13 19:44:46.472739 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Feb 13 19:44:46.472746 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Feb 13 19:44:46.472751 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 13 19:44:46.472757 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. 
Feb 13 19:44:46.472762 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 19:44:46.472768 kernel: x2apic enabled Feb 13 19:44:46.472773 kernel: APIC: Switched APIC routing to: cluster x2apic Feb 13 19:44:46.472779 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 19:44:46.472784 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 13 19:44:46.472808 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 19:44:46.472815 kernel: process: using mwait in idle threads Feb 13 19:44:46.472835 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 19:44:46.472840 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 19:44:46.472846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:44:46.472851 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 19:44:46.472856 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 19:44:46.472862 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Feb 13 19:44:46.472867 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:44:46.472872 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 19:44:46.472878 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 19:44:46.472883 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 19:44:46.472889 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 19:44:46.472895 kernel: TAA: Mitigation: TSX disabled Feb 13 19:44:46.472900 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 19:44:46.472905 kernel: SRBDS: Mitigation: Microcode Feb 13 19:44:46.472911 kernel: GDS: Mitigation: Microcode Feb 13 19:44:46.472916 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers' Feb 13 19:44:46.472921 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:44:46.472927 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:44:46.472932 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 19:44:46.472937 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 19:44:46.472943 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:44:46.472949 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 19:44:46.472955 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 19:44:46.472960 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 19:44:46.472965 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:44:46.472971 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:44:46.472976 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:44:46.472981 kernel: landlock: Up and running. Feb 13 19:44:46.472987 kernel: SELinux: Initializing. Feb 13 19:44:46.472992 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:44:46.472997 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:44:46.473003 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 19:44:46.473008 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 19:44:46.473015 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 19:44:46.473020 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 19:44:46.473026 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 19:44:46.473031 kernel: ... 
version: 4 Feb 13 19:44:46.473037 kernel: ... bit width: 48 Feb 13 19:44:46.473042 kernel: ... generic registers: 4 Feb 13 19:44:46.473047 kernel: ... value mask: 0000ffffffffffff Feb 13 19:44:46.473053 kernel: ... max period: 00007fffffffffff Feb 13 19:44:46.473058 kernel: ... fixed-purpose events: 3 Feb 13 19:44:46.473064 kernel: ... event mask: 000000070000000f Feb 13 19:44:46.473070 kernel: signal: max sigframe size: 2032 Feb 13 19:44:46.473075 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 19:44:46.473081 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:44:46.473086 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:44:46.473092 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 19:44:46.473097 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:44:46.473103 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:44:46.473108 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Feb 13 19:44:46.473115 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 19:44:46.473120 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 19:44:46.473126 kernel: smpboot: Max logical packages: 1 Feb 13 19:44:46.473131 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 19:44:46.473137 kernel: devtmpfs: initialized Feb 13 19:44:46.473142 kernel: x86/mm: Memory block size: 128MB Feb 13 19:44:46.473148 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2b000-0x81b2bfff] (4096 bytes) Feb 13 19:44:46.473153 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Feb 13 19:44:46.473159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:44:46.473165 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 19:44:46.473170 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:44:46.473176 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:44:46.473181 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:44:46.473187 kernel: audit: type=2000 audit(1739475881.042:1): state=initialized audit_enabled=0 res=1 Feb 13 19:44:46.473192 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:44:46.473198 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:44:46.473203 kernel: cpuidle: using governor menu Feb 13 19:44:46.473209 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:44:46.473215 kernel: dca service started, version 1.12.1 Feb 13 19:44:46.473220 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 19:44:46.473226 kernel: PCI: Using configuration type 1 for base access Feb 13 19:44:46.473231 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 19:44:46.473236 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 19:44:46.473242 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:44:46.473247 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:44:46.473253 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:44:46.473259 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:44:46.473264 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:44:46.473270 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:44:46.473275 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:44:46.473281 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:44:46.473286 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 19:44:46.473292 kernel: ACPI: Dynamic OEM Table Load: Feb 13 19:44:46.473297 kernel: ACPI: SSDT 0xFFFF8986816BB400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 19:44:46.473303 kernel: ACPI: Dynamic OEM Table Load: Feb 13 19:44:46.473309 kernel: ACPI: SSDT 0xFFFF8986816B7000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 13 19:44:46.473314 kernel: ACPI: Dynamic OEM Table Load: Feb 13 19:44:46.473320 kernel: ACPI: SSDT 0xFFFF89868169A000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 19:44:46.473325 kernel: ACPI: Dynamic OEM Table Load: Feb 13 19:44:46.473331 kernel: ACPI: SSDT 0xFFFF8986816B4800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 19:44:46.473336 kernel: ACPI: Dynamic OEM Table Load: Feb 13 19:44:46.473341 kernel: ACPI: SSDT 0xFFFF8986816C7000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 19:44:46.473347 kernel: ACPI: Dynamic OEM Table Load: Feb 13 19:44:46.473352 kernel: ACPI: SSDT 0xFFFF898680FA5C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 19:44:46.473358 kernel: ACPI: _OSC evaluated successfully for all CPUs Feb 13 19:44:46.473364 kernel: ACPI: Interpreter enabled Feb 13 19:44:46.473369 kernel: ACPI: PM: (supports S0 S5) Feb 13 19:44:46.473375 kernel: ACPI: Using IOAPIC 
for interrupt routing Feb 13 19:44:46.473380 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 19:44:46.473385 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 19:44:46.473391 kernel: HEST: Table parsing has been initialized. Feb 13 19:44:46.473396 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Feb 13 19:44:46.473402 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:44:46.473407 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 19:44:46.473413 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 19:44:46.473419 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Feb 13 19:44:46.473425 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Feb 13 19:44:46.473431 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Feb 13 19:44:46.473436 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Feb 13 19:44:46.473442 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Feb 13 19:44:46.473447 kernel: ACPI: \_TZ_.FN00: New power resource Feb 13 19:44:46.473452 kernel: ACPI: \_TZ_.FN01: New power resource Feb 13 19:44:46.473458 kernel: ACPI: \_TZ_.FN02: New power resource Feb 13 19:44:46.473464 kernel: ACPI: \_TZ_.FN03: New power resource Feb 13 19:44:46.473470 kernel: ACPI: \_TZ_.FN04: New power resource Feb 13 19:44:46.473475 kernel: ACPI: \PIN_: New power resource Feb 13 19:44:46.473481 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 19:44:46.473555 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:44:46.473609 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 19:44:46.473657 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 19:44:46.473667 kernel: PCI host bridge to bus 0000:00 Feb 13 19:44:46.473717 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Feb 13 19:44:46.473762 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:44:46.473828 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:44:46.473884 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Feb 13 19:44:46.473926 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 19:44:46.473969 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 19:44:46.474033 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 19:44:46.474091 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 19:44:46.474141 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.474196 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 19:44:46.474244 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Feb 13 19:44:46.474297 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 19:44:46.474348 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Feb 13 19:44:46.474402 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 19:44:46.474450 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Feb 13 19:44:46.474498 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 13 19:44:46.474551 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 19:44:46.474600 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Feb 13 19:44:46.474651 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Feb 13 19:44:46.474703 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 19:44:46.474752 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 19:44:46.474827 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 19:44:46.474892 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 
19:44:46.474945 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 19:44:46.474996 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Feb 13 19:44:46.475046 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 19:44:46.475106 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 19:44:46.475156 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Feb 13 19:44:46.475205 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 19:44:46.475258 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 19:44:46.475306 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Feb 13 19:44:46.475356 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 19:44:46.475408 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 19:44:46.475458 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Feb 13 19:44:46.475505 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Feb 13 19:44:46.475554 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Feb 13 19:44:46.475601 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Feb 13 19:44:46.475650 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Feb 13 19:44:46.475700 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Feb 13 19:44:46.475748 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 19:44:46.475825 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 19:44:46.475892 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.475948 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 19:44:46.476002 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.476055 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 19:44:46.476105 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.476157 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 
13 19:44:46.476207 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.476261 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Feb 13 19:44:46.476311 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.476364 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 19:44:46.476412 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 19:44:46.476465 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 19:44:46.476520 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 19:44:46.476571 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Feb 13 19:44:46.476619 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 19:44:46.476673 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 19:44:46.476721 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 19:44:46.476779 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 19:44:46.476868 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 19:44:46.476918 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Feb 13 19:44:46.476971 kernel: pci 0000:01:00.0: PME# supported from D3cold Feb 13 19:44:46.477020 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 19:44:46.477071 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 19:44:46.477127 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 19:44:46.477178 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Feb 13 19:44:46.477227 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Feb 13 19:44:46.477276 kernel: pci 0000:01:00.1: PME# supported from D3cold Feb 13 19:44:46.477327 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 19:44:46.477377 kernel: 
pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 19:44:46.477428 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 19:44:46.477477 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 19:44:46.477526 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 19:44:46.477574 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 19:44:46.477628 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Feb 13 19:44:46.477681 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 19:44:46.477731 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 19:44:46.477780 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 19:44:46.477866 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 19:44:46.477915 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.477964 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 19:44:46.478014 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 19:44:46.478065 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 19:44:46.478121 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Feb 13 19:44:46.478172 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 19:44:46.478224 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 19:44:46.478274 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 19:44:46.478323 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 19:44:46.478373 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 19:44:46.478423 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 19:44:46.478475 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 19:44:46.478524 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 19:44:46.478574 kernel: pci 0000:00:1c.0: PCI 
bridge to [bus 05] Feb 13 19:44:46.478630 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 19:44:46.478681 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 19:44:46.478731 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 19:44:46.478898 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:44:46.478950 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 19:44:46.479000 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 19:44:46.479078 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 19:44:46.479132 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 19:44:46.479189 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 19:44:46.479241 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 19:44:46.479294 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 19:44:46.479348 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 19:44:46.479400 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 19:44:46.479452 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 19:44:46.479503 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:44:46.479555 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 19:44:46.479604 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 19:44:46.479654 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 19:44:46.479663 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 19:44:46.479671 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 19:44:46.479677 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 19:44:46.479683 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 19:44:46.479689 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 19:44:46.479695 kernel: ACPI: PCI: Interrupt link LNKF configured 
for IRQ 0 Feb 13 19:44:46.479701 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 19:44:46.479707 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 19:44:46.479713 kernel: iommu: Default domain type: Translated Feb 13 19:44:46.479719 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:44:46.479725 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:44:46.479731 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:44:46.479737 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 19:44:46.479743 kernel: e820: reserve RAM buffer [mem 0x81b2b000-0x83ffffff] Feb 13 19:44:46.479749 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 19:44:46.479754 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 19:44:46.479760 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 19:44:46.479765 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 19:44:46.479839 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 19:44:46.479908 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 19:44:46.479963 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 19:44:46.479971 kernel: vgaarb: loaded Feb 13 19:44:46.479977 kernel: clocksource: Switched to clocksource tsc-early Feb 13 19:44:46.479983 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:44:46.479989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:44:46.479995 kernel: pnp: PnP ACPI init Feb 13 19:44:46.480113 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 19:44:46.480164 kernel: pnp 00:02: [dma 0 disabled] Feb 13 19:44:46.480212 kernel: pnp 00:03: [dma 0 disabled] Feb 13 19:44:46.480263 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 19:44:46.480308 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 19:44:46.480357 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Feb 13 19:44:46.480405 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 19:44:46.480452 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 19:44:46.480497 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 19:44:46.480543 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 19:44:46.480589 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 19:44:46.480635 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 19:44:46.480680 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 19:44:46.480726 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 19:44:46.480777 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 19:44:46.480879 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 19:44:46.480924 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 19:44:46.480968 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 19:44:46.481012 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 19:44:46.481056 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 19:44:46.481100 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 19:44:46.481151 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 19:44:46.481160 kernel: pnp: PnP ACPI: found 10 devices Feb 13 19:44:46.481166 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:44:46.481172 kernel: NET: Registered PF_INET protocol family Feb 13 19:44:46.481178 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:44:46.481184 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 19:44:46.481190 kernel: Table-perturb 
hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:44:46.481196 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:44:46.481203 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 19:44:46.481210 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 19:44:46.481216 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 19:44:46.481221 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 19:44:46.481227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:44:46.481233 kernel: NET: Registered PF_XDP protocol family Feb 13 19:44:46.481282 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 19:44:46.481332 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 19:44:46.481381 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 19:44:46.481506 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 19:44:46.481555 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 19:44:46.481606 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 19:44:46.481657 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 19:44:46.481706 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 19:44:46.481756 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 19:44:46.481828 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 19:44:46.481891 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 19:44:46.481943 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 19:44:46.481991 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 19:44:46.482039 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 
13 19:44:46.482088 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 19:44:46.482140 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 19:44:46.482189 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 19:44:46.482237 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 19:44:46.482287 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 19:44:46.482337 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 19:44:46.482386 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 19:44:46.482435 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 19:44:46.482482 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 19:44:46.482531 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 19:44:46.482578 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 19:44:46.482622 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:44:46.482667 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:44:46.482711 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:44:46.482754 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 19:44:46.482913 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 19:44:46.482964 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 19:44:46.483010 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 19:44:46.483063 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 19:44:46.483109 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 19:44:46.483158 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 19:44:46.483203 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 19:44:46.483252 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 19:44:46.483297 
kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 19:44:46.483348 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 19:44:46.483395 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 19:44:46.483403 kernel: PCI: CLS 64 bytes, default 64 Feb 13 19:44:46.483409 kernel: DMAR: No ATSR found Feb 13 19:44:46.483415 kernel: DMAR: No SATC found Feb 13 19:44:46.483421 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 19:44:46.483470 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 19:44:46.483519 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 19:44:46.483571 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 19:44:46.483619 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 19:44:46.483668 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 19:44:46.483716 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 19:44:46.483764 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 19:44:46.483836 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 19:44:46.483899 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 19:44:46.483947 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 19:44:46.483999 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 19:44:46.484048 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 19:44:46.484097 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 19:44:46.484145 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 19:44:46.484262 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 19:44:46.484311 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 19:44:46.484358 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 19:44:46.484407 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 19:44:46.484457 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 19:44:46.484506 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 19:44:46.484554 kernel: pci 0000:00:1f.5: Adding to iommu group 
14 Feb 13 19:44:46.484604 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 19:44:46.484653 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 19:44:46.484703 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 19:44:46.484752 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 19:44:46.484826 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 19:44:46.484894 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 19:44:46.484903 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 19:44:46.484910 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 19:44:46.484915 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 19:44:46.484921 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 19:44:46.484927 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 19:44:46.484933 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 19:44:46.484939 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 19:44:46.484989 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 19:44:46.485000 kernel: Initialise system trusted keyrings Feb 13 19:44:46.485005 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 19:44:46.485011 kernel: Key type asymmetric registered Feb 13 19:44:46.485017 kernel: Asymmetric key parser 'x509' registered Feb 13 19:44:46.485023 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:44:46.485029 kernel: io scheduler mq-deadline registered Feb 13 19:44:46.485035 kernel: io scheduler kyber registered Feb 13 19:44:46.485040 kernel: io scheduler bfq registered Feb 13 19:44:46.485089 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 19:44:46.485139 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 19:44:46.485188 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 
Feb 13 19:44:46.485235 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 19:44:46.485283 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 19:44:46.485331 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 19:44:46.485387 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 19:44:46.485397 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 19:44:46.485404 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 19:44:46.485411 kernel: pstore: Using crash dump compression: deflate Feb 13 19:44:46.485416 kernel: pstore: Registered erst as persistent store backend Feb 13 19:44:46.485422 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:44:46.485428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:44:46.485434 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:44:46.485440 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 19:44:46.485446 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 19:44:46.485495 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 19:44:46.485505 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 19:44:46.485551 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 19:44:46.485639 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 19:44:46.485683 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T19:44:45 UTC (1739475885) Feb 13 19:44:46.485728 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 19:44:46.485737 kernel: intel_pstate: Intel P-state driver initializing Feb 13 19:44:46.485743 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 19:44:46.485751 kernel: intel_pstate: HWP enabled Feb 13 19:44:46.485757 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:44:46.485763 kernel: Segment Routing with IPv6 Feb 13 19:44:46.485768 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:44:46.485774 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:44:46.485780 kernel: Key type dns_resolver registered Feb 13 19:44:46.485788 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 19:44:46.485794 kernel: IPI shorthand broadcast: enabled Feb 13 19:44:46.485818 kernel: sched_clock: Marking stable (2490190132, 1448772286)->(4502213957, -563251539) Feb 13 19:44:46.485826 kernel: registered taskstats version 1 Feb 13 19:44:46.485832 kernel: Loading compiled-in X.509 certificates Feb 13 19:44:46.485851 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 19:44:46.485857 kernel: Key type .fscrypt registered Feb 13 19:44:46.485863 kernel: Key type fscrypt-provisioning registered Feb 13 19:44:46.485869 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:44:46.485875 kernel: ima: No architecture policies found Feb 13 19:44:46.485881 kernel: clk: Disabling unused clocks Feb 13 19:44:46.485886 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 19:44:46.485893 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:44:46.485899 kernel: Freeing unused kernel image (rodata/data gap) memory: 
1776K Feb 13 19:44:46.485905 kernel: Run /init as init process Feb 13 19:44:46.485911 kernel: with arguments: Feb 13 19:44:46.485917 kernel: /init Feb 13 19:44:46.485922 kernel: with environment: Feb 13 19:44:46.485928 kernel: HOME=/ Feb 13 19:44:46.485934 kernel: TERM=linux Feb 13 19:44:46.485940 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:44:46.485948 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:44:46.485955 systemd[1]: Detected architecture x86-64. Feb 13 19:44:46.485961 systemd[1]: Running in initrd. Feb 13 19:44:46.485967 systemd[1]: No hostname configured, using default hostname. Feb 13 19:44:46.485973 systemd[1]: Hostname set to . Feb 13 19:44:46.485979 systemd[1]: Initializing machine ID from random generator. Feb 13 19:44:46.485985 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:44:46.485992 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:44:46.485998 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:44:46.486004 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:44:46.486011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:44:46.486017 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:44:46.486023 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Feb 13 19:44:46.486029 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:44:46.486037 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:44:46.486043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:44:46.486049 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:44:46.486055 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:44:46.486061 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:44:46.486068 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:44:46.486074 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:44:46.486080 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:44:46.486087 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:44:46.486093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:44:46.486099 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:44:46.486106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:44:46.486112 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:44:46.486118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:44:46.486124 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:44:46.486130 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Feb 13 19:44:46.486136 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Feb 13 19:44:46.486143 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Feb 13 19:44:46.486149 kernel: clocksource: Switched to clocksource tsc Feb 13 19:44:46.486155 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:44:46.486161 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:44:46.486167 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:44:46.486173 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:44:46.486193 systemd-journald[269]: Collecting audit messages is disabled. Feb 13 19:44:46.486243 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:44:46.486266 systemd-journald[269]: Journal started Feb 13 19:44:46.486280 systemd-journald[269]: Runtime Journal (/run/log/journal/5ce4f4ccc06444bb90dac19f1ef9d6a0) is 8.0M, max 639.9M, 631.9M free. Feb 13 19:44:46.488463 systemd-modules-load[270]: Inserted module 'overlay' Feb 13 19:44:46.505907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:44:46.527832 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:44:46.527849 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:44:46.534996 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:44:46.535139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:44:46.535234 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:44:46.536222 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Feb 13 19:44:46.540642 systemd-modules-load[270]: Inserted module 'br_netfilter' Feb 13 19:44:46.540790 kernel: Bridge firewalling registered Feb 13 19:44:46.541099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:44:46.561553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:44:46.640920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:44:46.669604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:44:46.693451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:44:46.729226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:44:46.758037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:44:46.768436 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:44:46.768617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:44:46.769633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:44:46.774571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:44:46.779051 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:44:46.790561 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:44:46.793247 systemd-resolved[298]: Positive Trust Anchors: Feb 13 19:44:46.793253 systemd-resolved[298]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:44:46.793291 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:44:46.795553 systemd-resolved[298]: Defaulting to hostname 'linux'. Feb 13 19:44:46.801091 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:44:46.818144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:44:46.942915 dracut-cmdline[311]: dracut-dracut-053 Feb 13 19:44:46.942915 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:44:46.998866 kernel: SCSI subsystem initialized Feb 13 19:44:47.005792 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:44:47.018831 kernel: iscsi: registered transport (tcp) Feb 13 19:44:47.040515 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:44:47.040534 kernel: QLogic iSCSI HBA Driver Feb 13 19:44:47.063626 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:44:47.075091 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Feb 13 19:44:47.166644 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:44:47.166664 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:44:47.175460 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:44:47.210824 kernel: raid6: avx2x4 gen() 46473 MB/s Feb 13 19:44:47.231856 kernel: raid6: avx2x2 gen() 53056 MB/s Feb 13 19:44:47.257938 kernel: raid6: avx2x1 gen() 44793 MB/s Feb 13 19:44:47.257956 kernel: raid6: using algorithm avx2x2 gen() 53056 MB/s Feb 13 19:44:47.285041 kernel: raid6: .... xor() 32197 MB/s, rmw enabled Feb 13 19:44:47.285059 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:44:47.304814 kernel: xor: automatically using best checksumming function avx Feb 13 19:44:47.402822 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:44:47.408379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:44:47.442104 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:44:47.448824 systemd-udevd[499]: Using default interface naming scheme 'v255'. Feb 13 19:44:47.451255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:44:47.488187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:44:47.550858 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Feb 13 19:44:47.622510 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:44:47.650226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:44:47.752240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:44:47.776023 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 19:44:47.776073 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 19:44:47.782792 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:44:47.796736 kernel: ACPI: bus type USB registered Feb 13 19:44:47.796880 kernel: usbcore: registered new interface driver usbfs Feb 13 19:44:47.803426 kernel: usbcore: registered new interface driver hub Feb 13 19:44:47.808799 kernel: usbcore: registered new device driver usb Feb 13 19:44:47.814792 kernel: PTP clock support registered Feb 13 19:44:47.814808 kernel: libata version 3.00 loaded. Feb 13 19:44:47.816968 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:44:47.975677 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 19:44:47.975784 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:44:47.975797 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 19:44:47.975867 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 19:44:47.975932 kernel: AES CTR mode by8 optimization enabled Feb 13 19:44:47.975941 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 19:44:47.975948 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Feb 13 19:44:47.975956 kernel: scsi host0: ahci Feb 13 19:44:47.976020 kernel: scsi host1: ahci Feb 13 19:44:47.976083 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 19:44:47.995911 kernel: scsi host2: ahci Feb 13 19:44:47.995985 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 19:44:47.996053 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 19:44:47.996119 kernel: scsi host3: ahci Feb 13 19:44:47.996184 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 19:44:47.996247 kernel: pps pps0: new PPS source ptp0 Feb 13 19:44:47.996314 kernel: scsi host4: ahci Feb 13 19:44:47.996376 kernel: scsi host5: ahci Feb 13 19:44:47.996435 kernel: scsi host6: ahci Feb 13 19:44:47.996494 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 13 19:44:47.996503 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 13 19:44:47.996511 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 13 19:44:47.996518 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 13 19:44:47.996528 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Feb 13 19:44:47.996535 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 13 19:44:47.996542 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 13 19:44:47.996550 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 19:44:47.996614 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 19:44:47.996682 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 19:44:47.996744 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 19:44:47.996814 kernel: hub 1-0:1.0: USB hub found Feb 13 19:44:47.996888 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:44 Feb 13 
19:44:47.996953 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 19:44:47.997016 kernel: hub 1-0:1.0: 16 ports detected Feb 13 19:44:47.997076 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 19:44:47.997138 kernel: hub 2-0:1.0: USB hub found Feb 13 19:44:47.997206 kernel: pps pps1: new PPS source ptp1 Feb 13 19:44:47.997264 kernel: hub 2-0:1.0: 10 ports detected Feb 13 19:44:47.997326 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 19:44:48.085632 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 19:44:48.085740 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:45 Feb 13 19:44:48.085852 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 19:44:48.085951 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 19:44:47.853028 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:44:48.114832 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 19:44:48.581528 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 19:44:48.581611 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 19:44:48.757595 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:44:48.757607 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:44:48.757615 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 19:44:48.757622 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 19:44:48.757630 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 19:44:48.757641 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 19:44:48.757648 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:44:48.757656 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 19:44:48.757663 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 
19:44:48.757671 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 19:44:48.757678 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 19:44:48.757686 kernel: ata1.00: Features: NCQ-prio Feb 13 19:44:48.757693 kernel: ata2.00: Features: NCQ-prio Feb 13 19:44:48.757700 kernel: ata1.00: configured for UDMA/133 Feb 13 19:44:48.757708 kernel: ata2.00: configured for UDMA/133 Feb 13 19:44:48.757716 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 19:44:48.757814 kernel: hub 1-14:1.0: USB hub found Feb 13 19:44:48.757901 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 19:44:48.757983 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 19:44:48.758095 kernel: hub 1-14:1.0: 4 ports detected Feb 13 19:44:48.758176 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Feb 13 19:44:48.758249 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 19:44:48.758329 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 19:44:48.758338 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 19:44:48.758345 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 19:44:48.758411 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 19:44:48.758485 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 19:44:48.758549 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 19:44:48.758611 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 19:44:48.758686 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 19:44:48.758749 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 19:44:48.758816 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 19:44:48.758885 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:44:48.758953 kernel: sd 0:0:0:0: [sda] Write cache: 
enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:44:48.759015 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Feb 13 19:44:48.759074 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Feb 13 19:44:48.759149 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 19:44:48.759158 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 19:44:48.759165 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 19:44:48.759227 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:44:48.759236 kernel: GPT:9289727 != 937703087 Feb 13 19:44:48.759243 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:44:48.759250 kernel: GPT:9289727 != 937703087 Feb 13 19:44:48.759257 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:44:48.759267 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:44:48.759280 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 19:44:48.759348 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 19:44:48.759426 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (673) Feb 13 19:44:48.759436 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (563) Feb 13 19:44:48.759444 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 19:44:48.759513 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 19:44:49.123552 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 19:44:49.124130 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 19:44:49.124929 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 19:44:49.125000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:44:49.125062 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 19:44:49.125122 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:44:49.125181 kernel: 
hid: raw HID events driver (C) Jiri Kosina Feb 13 19:44:49.125239 kernel: usbcore: registered new interface driver usbhid Feb 13 19:44:49.125302 kernel: usbhid: USB HID core driver Feb 13 19:44:49.125362 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 19:44:49.125423 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 19:44:49.126018 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 19:44:49.126081 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 19:44:49.126460 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 19:44:49.126824 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 19:44:49.127170 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 19:44:48.096602 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:44:49.150252 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Feb 13 19:44:49.150334 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Feb 13 19:44:48.126945 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:44:48.138957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:44:48.164993 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:44:48.175076 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:44:48.185150 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:44:48.185211 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:44:49.211957 disk-uuid[705]: Primary Header is updated. Feb 13 19:44:49.211957 disk-uuid[705]: Secondary Entries is updated. Feb 13 19:44:49.211957 disk-uuid[705]: Secondary Header is updated. Feb 13 19:44:48.195939 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:44:48.212510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:44:48.212583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:44:48.279878 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:44:48.312939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:44:48.440922 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Feb 13 19:44:48.569996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:44:48.608775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Feb 13 19:44:48.625584 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 19:44:48.636825 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 19:44:48.651606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 19:44:48.687918 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:44:48.709301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:44:48.736589 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:44:49.714687 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 19:44:49.722695 disk-uuid[706]: The operation has completed successfully. 
Feb 13 19:44:49.730910 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:44:49.760362 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:44:49.760408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:44:49.802135 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:44:49.828849 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 19:44:49.828865 sh[744]: Success Feb 13 19:44:49.864765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:44:49.874709 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:44:49.884425 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:44:49.929702 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:44:49.929722 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:44:49.939261 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:44:49.946284 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:44:49.952135 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:44:49.964836 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:44:49.966281 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:44:49.975223 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:44:49.987924 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:44:50.018464 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 19:44:50.087001 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:44:50.087022 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:44:50.087035 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:44:50.087048 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:44:50.087060 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:44:50.087072 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:44:50.072620 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:44:50.085280 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:44:50.092975 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:44:50.145113 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:44:50.161950 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:44:50.177662 unknown[804]: fetched base config from "system" Feb 13 19:44:50.175408 ignition[804]: Ignition 2.20.0 Feb 13 19:44:50.177666 unknown[804]: fetched user config from "system" Feb 13 19:44:50.175412 ignition[804]: Stage: fetch-offline Feb 13 19:44:50.178508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 19:44:50.175431 ignition[804]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:44:50.189203 systemd-networkd[928]: lo: Link UP Feb 13 19:44:50.175436 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 19:44:50.189205 systemd-networkd[928]: lo: Gained carrier Feb 13 19:44:50.175488 ignition[804]: parsed url from cmdline: "" Feb 13 19:44:50.191577 systemd-networkd[928]: Enumeration completed Feb 13 19:44:50.175490 ignition[804]: no config URL provided Feb 13 19:44:50.192386 systemd-networkd[928]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:44:50.175492 ignition[804]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:44:50.197958 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:44:50.175514 ignition[804]: parsing config with SHA512: afd9dbabe7e3aa0c235b5a2d30f9c892de9430cb254c3807c21ab4aa66412bdb9d9b1316b130c4b1d91b3c97f9d0ef45973c9c95961b69a809be86005ddb4f65 Feb 13 19:44:50.206300 systemd[1]: Reached target network.target - Network. Feb 13 19:44:50.177866 ignition[804]: fetch-offline: fetch-offline passed Feb 13 19:44:50.220678 systemd-networkd[928]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:44:50.177868 ignition[804]: POST message to Packet Timeline Feb 13 19:44:50.220959 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:44:50.177870 ignition[804]: POST Status error: resource requires networking Feb 13 19:44:50.235041 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:44:50.417961 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 19:44:50.177910 ignition[804]: Ignition finished successfully Feb 13 19:44:50.248509 systemd-networkd[928]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 19:44:50.244181 ignition[941]: Ignition 2.20.0
Feb 13 19:44:50.414379 systemd-networkd[928]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:50.244185 ignition[941]: Stage: kargs
Feb 13 19:44:50.244296 ignition[941]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:50.244302 ignition[941]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:50.244818 ignition[941]: kargs: kargs passed
Feb 13 19:44:50.244821 ignition[941]: POST message to Packet Timeline
Feb 13 19:44:50.244833 ignition[941]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 19:44:50.245310 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53287->[::1]:53: read: connection refused
Feb 13 19:44:50.446468 ignition[941]: GET https://metadata.packet.net/metadata: attempt #2
Feb 13 19:44:50.447442 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45273->[::1]:53: read: connection refused
Feb 13 19:44:50.630834 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 19:44:50.631558 systemd-networkd[928]: eno1: Link UP
Feb 13 19:44:50.631712 systemd-networkd[928]: eno2: Link UP
Feb 13 19:44:50.631862 systemd-networkd[928]: enp1s0f0np0: Link UP
Feb 13 19:44:50.632032 systemd-networkd[928]: enp1s0f0np0: Gained carrier
Feb 13 19:44:50.645134 systemd-networkd[928]: enp1s0f1np1: Link UP
Feb 13 19:44:50.678147 systemd-networkd[928]: enp1s0f0np0: DHCPv4 address 147.28.180.89/31, gateway 147.28.180.88 acquired from 145.40.83.140
Feb 13 19:44:50.847735 ignition[941]: GET https://metadata.packet.net/metadata: attempt #3
Feb 13 19:44:50.849021 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57142->[::1]:53: read: connection refused
Feb 13 19:44:51.419583 systemd-networkd[928]: enp1s0f1np1: Gained carrier
Feb 13 19:44:51.649452 ignition[941]: GET https://metadata.packet.net/metadata: attempt #4
Feb 13 19:44:51.651124 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55100->[::1]:53: read: connection refused
Feb 13 19:44:51.675311 systemd-networkd[928]: enp1s0f0np0: Gained IPv6LL
Feb 13 19:44:53.251927 ignition[941]: GET https://metadata.packet.net/metadata: attempt #5
Feb 13 19:44:53.253012 ignition[941]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54027->[::1]:53: read: connection refused
Feb 13 19:44:53.275317 systemd-networkd[928]: enp1s0f1np1: Gained IPv6LL
Feb 13 19:44:56.455814 ignition[941]: GET https://metadata.packet.net/metadata: attempt #6
Feb 13 19:44:56.620861 ignition[941]: GET result: OK
Feb 13 19:44:56.962935 ignition[941]: Ignition finished successfully
Feb 13 19:44:56.968288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:44:56.996077 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:44:57.002361 ignition[960]: Ignition 2.20.0
Feb 13 19:44:57.002365 ignition[960]: Stage: disks
Feb 13 19:44:57.002468 ignition[960]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:57.002475 ignition[960]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:57.002982 ignition[960]: disks: disks passed
Feb 13 19:44:57.002985 ignition[960]: POST message to Packet Timeline
Feb 13 19:44:57.002998 ignition[960]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 19:44:57.332095 ignition[960]: GET result: OK
Feb 13 19:44:57.671530 ignition[960]: Ignition finished successfully
Feb 13 19:44:57.674353 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:44:57.690290 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:44:57.698106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:44:57.726960 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:44:57.727097 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:44:57.753360 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:44:57.789171 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:44:57.849737 systemd-fsck[976]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:44:57.861255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:44:57.879012 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:44:57.959506 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:44:57.975041 kernel: EXT4-fs (sda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none.
Feb 13 19:44:57.959749 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:44:57.996995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:44:58.049833 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (986)
Feb 13 19:44:58.049850 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:44:58.049858 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:58.049866 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:44:58.049873 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:44:58.005970 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:44:58.065069 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:44:58.072028 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 19:44:58.094298 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Feb 13 19:44:58.105837 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:44:58.105858 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:44:58.161985 coreos-metadata[1003]: Feb 13 19:44:58.123 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 19:44:58.130988 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:44:58.189961 coreos-metadata[1004]: Feb 13 19:44:58.141 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 19:44:58.151033 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:44:58.185125 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:44:58.229917 initrd-setup-root[1018]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:44:58.241032 initrd-setup-root[1025]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:44:58.251002 initrd-setup-root[1032]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:44:58.260902 initrd-setup-root[1039]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:44:58.275934 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:44:58.295905 coreos-metadata[1003]: Feb 13 19:44:58.277 INFO Fetch successful
Feb 13 19:44:58.300927 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:44:58.334019 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:44:58.334125 coreos-metadata[1003]: Feb 13 19:44:58.314 INFO wrote hostname ci-4186.1.1-a-a8b3a25f31 to /sysroot/etc/hostname
Feb 13 19:44:58.323385 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:44:58.342587 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:44:58.342917 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:44:58.404968 ignition[1110]: INFO : Ignition 2.20.0
Feb 13 19:44:58.404968 ignition[1110]: INFO : Stage: mount
Feb 13 19:44:58.404968 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:58.404968 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:58.404968 ignition[1110]: INFO : mount: mount passed
Feb 13 19:44:58.404968 ignition[1110]: INFO : POST message to Packet Timeline
Feb 13 19:44:58.404968 ignition[1110]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 19:44:58.382058 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:44:58.473874 coreos-metadata[1004]: Feb 13 19:44:58.415 INFO Fetch successful
Feb 13 19:44:58.488943 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 13 19:44:58.489002 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Feb 13 19:44:58.890688 ignition[1110]: INFO : GET result: OK
Feb 13 19:44:59.173281 ignition[1110]: INFO : Ignition finished successfully
Feb 13 19:44:59.174494 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:44:59.204005 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:44:59.214193 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:44:59.269721 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1129)
Feb 13 19:44:59.269744 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:44:59.277903 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:59.283801 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:44:59.298953 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:44:59.298969 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:44:59.300866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:44:59.332873 ignition[1146]: INFO : Ignition 2.20.0
Feb 13 19:44:59.332873 ignition[1146]: INFO : Stage: files
Feb 13 19:44:59.348009 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:59.348009 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:44:59.348009 ignition[1146]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:44:59.348009 ignition[1146]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:44:59.348009 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:44:59.337072 unknown[1146]: wrote ssh authorized keys file for user: core
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:44:59.480028 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:44:59.733120 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 19:44:59.849383 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:45:00.085243 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:45:00.085243 ignition[1146]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:45:00.116012 ignition[1146]: INFO : files: files passed
Feb 13 19:45:00.116012 ignition[1146]: INFO : POST message to Packet Timeline
Feb 13 19:45:00.116012 ignition[1146]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 19:45:00.607739 ignition[1146]: INFO : GET result: OK
Feb 13 19:45:01.486213 ignition[1146]: INFO : Ignition finished successfully
Feb 13 19:45:01.489555 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:45:01.524058 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:45:01.524586 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:45:01.553255 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:45:01.553326 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:45:01.605064 initrd-setup-root-after-ignition[1185]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:45:01.605064 initrd-setup-root-after-ignition[1185]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:45:01.576412 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:45:01.643072 initrd-setup-root-after-ignition[1189]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:45:01.597088 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:45:01.634042 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:45:01.699191 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:45:01.699244 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:45:01.718191 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:45:01.739018 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:45:01.759204 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:45:01.774904 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:45:01.825973 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:45:01.855257 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:45:01.873759 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:45:01.899082 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:45:01.911119 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:45:01.929160 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:45:01.929325 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:45:01.958528 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:45:01.980406 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:45:01.999397 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:45:02.018511 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:45:02.039412 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:45:02.060418 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:45:02.080403 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:45:02.101457 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:45:02.122435 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:45:02.142403 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:45:02.160309 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:45:02.160702 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:45:02.186618 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:45:02.206434 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:45:02.227280 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:45:02.227728 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:45:02.249296 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:45:02.249693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:45:02.281414 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:45:02.281886 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:45:02.301603 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:45:02.319279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:45:02.319709 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:45:02.340415 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:45:02.359398 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:45:02.378487 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:45:02.378815 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:45:02.398436 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:45:02.398735 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:45:02.421526 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:45:02.421953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:45:02.533073 ignition[1210]: INFO : Ignition 2.20.0
Feb 13 19:45:02.533073 ignition[1210]: INFO : Stage: umount
Feb 13 19:45:02.533073 ignition[1210]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:45:02.533073 ignition[1210]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 19:45:02.533073 ignition[1210]: INFO : umount: umount passed
Feb 13 19:45:02.533073 ignition[1210]: INFO : POST message to Packet Timeline
Feb 13 19:45:02.533073 ignition[1210]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 19:45:02.441499 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:45:02.441903 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:45:02.459501 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 19:45:02.459908 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:45:02.670019 ignition[1210]: INFO : GET result: OK
Feb 13 19:45:02.498054 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:45:02.514599 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:45:02.532999 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:45:02.533216 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:45:02.544266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:45:02.544424 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:45:02.604226 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:45:02.606046 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:45:02.606133 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:45:02.719689 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:45:02.720029 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:45:03.596599 ignition[1210]: INFO : Ignition finished successfully
Feb 13 19:45:03.599784 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:45:03.600207 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:45:03.617318 systemd[1]: Stopped target network.target - Network.
Feb 13 19:45:03.625300 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:45:03.625487 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:45:03.650223 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:45:03.650369 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:45:03.668293 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:45:03.668450 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:45:03.676558 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:45:03.676721 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:45:03.704266 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:45:03.704436 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:45:03.712964 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:45:03.727936 systemd-networkd[928]: enp1s0f0np0: DHCPv6 lease lost
Feb 13 19:45:03.735003 systemd-networkd[928]: enp1s0f1np1: DHCPv6 lease lost
Feb 13 19:45:03.739387 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:45:03.758037 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:45:03.758417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:45:03.777301 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:45:03.777676 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:45:03.797855 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:45:03.797973 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:45:03.831972 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:45:03.853934 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:45:03.853976 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:45:03.874187 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:45:03.874266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:45:03.893189 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:45:03.893356 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:45:03.913188 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:45:03.913354 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:45:03.934410 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:45:03.956161 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:45:03.956536 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:45:03.985526 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:45:03.985564 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:45:03.993087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:45:03.993109 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:45:04.021104 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:45:04.021175 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:45:04.051295 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:45:04.051462 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:45:04.080253 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:45:04.080418 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:45:04.139145 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:45:04.379998 systemd-journald[269]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:45:04.142219 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:45:04.142369 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:45:04.173880 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:45:04.173988 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:45:04.193943 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:45:04.194048 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:45:04.217955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:45:04.218061 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:45:04.239436 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:45:04.239510 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:45:04.259296 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:45:04.259366 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:45:04.280537 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:45:04.298050 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:45:04.327469 systemd[1]: Switching root.
Feb 13 19:45:04.490958 systemd-journald[269]: Journal stopped
Feb 13 19:45:06.104304 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:45:06.104333 kernel: SELinux: policy capability open_perms=1
Feb 13 19:45:06.104351 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:45:06.104364 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:45:06.104378 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:45:06.104390 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:45:06.104404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:45:06.104413 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:45:06.104426 kernel: audit: type=1403 audit(1739475904.609:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:45:06.104440 systemd[1]: Successfully loaded SELinux policy in 79.540ms.
Feb 13 19:45:06.104457 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.048ms.
Feb 13 19:45:06.104475 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:45:06.104488 systemd[1]: Detected architecture x86-64.
Feb 13 19:45:06.104501 systemd[1]: Detected first boot.
Feb 13 19:45:06.104514 systemd[1]: Hostname set to .
Feb 13 19:45:06.104531 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:45:06.104545 zram_generator::config[1259]: No configuration found.
Feb 13 19:45:06.104559 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:45:06.104572 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:45:06.104585 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:45:06.104599 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:45:06.104615 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:45:06.104630 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:45:06.104642 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:45:06.104656 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:45:06.104669 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:45:06.104683 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:45:06.104696 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:45:06.104710 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:45:06.104726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:45:06.104740 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:45:06.104753 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:45:06.104767 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:45:06.104779 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:45:06.104797 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:45:06.104814 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Feb 13 19:45:06.104827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:45:06.104841 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:45:06.104854 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:45:06.104868 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:45:06.104886 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:45:06.104899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:45:06.104912 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:45:06.104926 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:45:06.104938 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:45:06.104956 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:45:06.104969 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:45:06.104982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:45:06.104996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:45:06.105009 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:45:06.105026 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:45:06.105040 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:45:06.105053 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:45:06.105067 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:45:06.105080 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:45:06.105093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:45:06.105106 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:45:06.105119 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:45:06.105138 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:45:06.105150 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:45:06.105164 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:45:06.105177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:45:06.105190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:45:06.105204 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:45:06.105217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:45:06.105229 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:45:06.105247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:45:06.105260 kernel: ACPI: bus type drm_connector registered
Feb 13 19:45:06.105273 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:45:06.105287 kernel: fuse: init (API version 7.39)
Feb 13 19:45:06.105301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:45:06.105314 kernel: loop: module loaded
Feb 13 19:45:06.105328 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:45:06.105341 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:45:06.105354 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:45:06.105367 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:45:06.105380 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:45:06.105394 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:45:06.105429 systemd-journald[1362]: Collecting audit messages is disabled.
Feb 13 19:45:06.105456 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:45:06.105469 systemd-journald[1362]: Journal started
Feb 13 19:45:06.105491 systemd-journald[1362]: Runtime Journal (/run/log/journal/66a8c746106349a1b1d68c4506a7fd29) is 8.0M, max 639.9M, 631.9M free.
Feb 13 19:45:05.008992 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:45:05.027260 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 19:45:05.027554 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:45:06.132792 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:45:06.143792 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:45:06.174792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:45:06.195908 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:45:06.196067 systemd[1]: Stopped verity-setup.service.
Feb 13 19:45:06.227720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:45:06.227881 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:45:06.238235 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:45:06.247949 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:45:06.259101 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:45:06.270084 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:45:06.281075 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:45:06.291028 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:45:06.301122 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:45:06.312037 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:45:06.323128 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:45:06.323225 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:45:06.334129 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:45:06.334225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:45:06.345125 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:45:06.345220 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:45:06.355123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:45:06.355214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:45:06.366127 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:45:06.366219 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:45:06.376122 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:45:06.376215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:45:06.386136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:45:06.396124 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:45:06.407125 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:45:06.418127 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:45:06.434468 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:45:06.455983 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:45:06.466684 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:45:06.476983 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:45:06.477006 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:45:06.487723 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:45:06.507995 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:45:06.519679 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:45:06.529015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:45:06.542758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:45:06.552502 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:45:06.563902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:45:06.564635 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:45:06.571589 systemd-journald[1362]: Time spent on flushing to /var/log/journal/66a8c746106349a1b1d68c4506a7fd29 is 14.270ms for 1364 entries.
Feb 13 19:45:06.571589 systemd-journald[1362]: System Journal (/var/log/journal/66a8c746106349a1b1d68c4506a7fd29) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:45:06.596763 systemd-journald[1362]: Received client request to flush runtime journal.
Feb 13 19:45:06.580487 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:45:06.581210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:45:06.591594 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:45:06.603583 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:45:06.616838 kernel: loop0: detected capacity change from 0 to 141000
Feb 13 19:45:06.630304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:45:06.642189 systemd-tmpfiles[1397]: ACLs are not supported, ignoring.
Feb 13 19:45:06.642200 systemd-tmpfiles[1397]: ACLs are not supported, ignoring.
Feb 13 19:45:06.642792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:45:06.648900 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:45:06.659980 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:45:06.671007 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:45:06.680836 kernel: loop1: detected capacity change from 0 to 210664
Feb 13 19:45:06.688112 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:45:06.699009 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:45:06.710010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:45:06.719986 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:45:06.736824 kernel: loop2: detected capacity change from 0 to 8
Feb 13 19:45:06.738707 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:45:06.766007 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:45:06.778491 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:45:06.786793 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 19:45:06.795342 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:45:06.819048 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:45:06.830367 udevadm[1398]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:45:06.837259 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:45:06.847812 kernel: loop4: detected capacity change from 0 to 141000
Feb 13 19:45:06.866794 kernel: loop5: detected capacity change from 0 to 210664
Feb 13 19:45:06.867044 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:45:06.875341 systemd-tmpfiles[1418]: ACLs are not supported, ignoring.
Feb 13 19:45:06.875351 systemd-tmpfiles[1418]: ACLs are not supported, ignoring.
Feb 13 19:45:06.878104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:45:06.893359 ldconfig[1388]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:45:06.893791 kernel: loop6: detected capacity change from 0 to 8
Feb 13 19:45:06.893947 kernel: loop7: detected capacity change from 0 to 138184
Feb 13 19:45:06.894484 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:45:06.913376 (sd-merge)[1416]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Feb 13 19:45:06.913634 (sd-merge)[1416]: Merged extensions into '/usr'.
Feb 13 19:45:06.916503 systemd[1]: Reloading requested from client PID 1393 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:45:06.916512 systemd[1]: Reloading...
Feb 13 19:45:06.939867 zram_generator::config[1446]: No configuration found.
Feb 13 19:45:07.008725 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:45:07.047950 systemd[1]: Reloading finished in 131 ms.
Feb 13 19:45:07.076716 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:45:07.088169 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:45:07.112010 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:45:07.119777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:45:07.132140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:45:07.146522 systemd[1]: Reloading requested from client PID 1503 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:45:07.146539 systemd[1]: Reloading...
Feb 13 19:45:07.147223 systemd-tmpfiles[1504]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:45:07.147580 systemd-tmpfiles[1504]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:45:07.148659 systemd-tmpfiles[1504]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:45:07.149044 systemd-tmpfiles[1504]: ACLs are not supported, ignoring.
Feb 13 19:45:07.149119 systemd-tmpfiles[1504]: ACLs are not supported, ignoring.
Feb 13 19:45:07.152908 systemd-tmpfiles[1504]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:45:07.152913 systemd-tmpfiles[1504]: Skipping /boot
Feb 13 19:45:07.159357 systemd-tmpfiles[1504]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:45:07.159362 systemd-tmpfiles[1504]: Skipping /boot
Feb 13 19:45:07.165022 systemd-udevd[1505]: Using default interface naming scheme 'v255'.
Feb 13 19:45:07.176794 zram_generator::config[1532]: No configuration found.
Feb 13 19:45:07.204799 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Feb 13 19:45:07.204872 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:45:07.204894 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1564)
Feb 13 19:45:07.216618 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:45:07.239547 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:45:07.240793 kernel: IPMI message handler: version 39.2
Feb 13 19:45:07.244796 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:45:07.255795 kernel: ipmi device interface
Feb 13 19:45:07.289552 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Feb 13 19:45:07.297679 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Feb 13 19:45:07.297819 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Feb 13 19:45:07.297950 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Feb 13 19:45:07.298074 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Feb 13 19:45:07.298196 kernel: ipmi_si: IPMI System Interface driver
Feb 13 19:45:07.297149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:45:07.308532 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Feb 13 19:45:07.322489 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Feb 13 19:45:07.322500 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Feb 13 19:45:07.322509 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Feb 13 19:45:07.352191 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Feb 13 19:45:07.352273 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Feb 13 19:45:07.352343 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Feb 13 19:45:07.352358 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Feb 13 19:45:07.351445 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Feb 13 19:45:07.351699 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Feb 13 19:45:07.374794 kernel: iTCO_vendor_support: vendor-support=0
Feb 13 19:45:07.378950 systemd[1]: Reloading finished in 231 ms.
Feb 13 19:45:07.406433 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:45:07.420325 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Feb 13 19:45:07.420675 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Feb 13 19:45:07.444064 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:45:07.447655 kernel: intel_rapl_common: Found RAPL domain package
Feb 13 19:45:07.447682 kernel: intel_rapl_common: Found RAPL domain core
Feb 13 19:45:07.453092 kernel: intel_rapl_common: Found RAPL domain dram
Feb 13 19:45:07.472835 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Feb 13 19:45:07.486416 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:45:07.506603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:45:07.511829 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Feb 13 19:45:07.519949 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:45:07.528752 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:45:07.538807 augenrules[1708]: No rules
Feb 13 19:45:07.540933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:45:07.541549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:45:07.551436 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:45:07.561421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:45:07.574802 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Feb 13 19:45:07.581792 kernel: ipmi_ssif: IPMI SSIF Interface driver
Feb 13 19:45:07.582124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:45:07.591910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:45:07.592517 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:45:07.614154 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:45:07.626749 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:45:07.627728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:45:07.628617 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:45:07.643615 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:45:07.671185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:45:07.681830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:45:07.682365 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:45:07.693034 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:45:07.693121 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:45:07.693429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:45:07.693583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:45:07.693655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:45:07.693807 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:45:07.693874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:45:07.694018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:45:07.694081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:45:07.694226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:45:07.694298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:45:07.694437 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:45:07.694632 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:45:07.699795 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:45:07.699828 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:45:07.699864 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:45:07.700474 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:45:07.701323 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:45:07.701353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:45:07.701605 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:45:07.707474 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:45:07.708388 lvm[1736]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:45:07.723221 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:45:07.749401 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:45:07.762950 systemd-resolved[1721]: Positive Trust Anchors:
Feb 13 19:45:07.762960 systemd-resolved[1721]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:45:07.762999 systemd-resolved[1721]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:45:07.765668 systemd-resolved[1721]: Using system hostname 'ci-4186.1.1-a-a8b3a25f31'.
Feb 13 19:45:07.768739 systemd-networkd[1720]: lo: Link UP
Feb 13 19:45:07.768742 systemd-networkd[1720]: lo: Gained carrier
Feb 13 19:45:07.771602 systemd-networkd[1720]: bond0: netdev ready
Feb 13 19:45:07.772585 systemd-networkd[1720]: Enumeration completed
Feb 13 19:45:07.778931 systemd-networkd[1720]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:8e:40.network.
Feb 13 19:45:07.823000 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:45:07.834098 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:45:07.843900 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:45:07.853999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:45:07.866058 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:45:07.875861 systemd[1]: Reached target network.target - Network.
Feb 13 19:45:07.883858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:45:07.894859 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:45:07.904915 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:45:07.915877 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:45:07.926875 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:45:07.937863 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:45:07.937898 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:45:07.945866 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:45:07.955957 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:45:07.965933 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:45:07.976857 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:45:07.985650 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:45:07.995566 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:45:08.005224 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:45:08.015605 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:45:08.038399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:45:08.041093 lvm[1759]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:45:08.049437 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:45:08.059051 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:45:08.068886 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:45:08.077968 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Feb 13 19:45:08.081923 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:45:08.081939 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:45:08.090842 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Feb 13 19:45:08.092622 systemd-networkd[1720]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:8e:41.network.
Feb 13 19:45:08.095912 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:45:08.106994 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:45:08.117486 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:45:08.126612 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:45:08.130606 coreos-metadata[1762]: Feb 13 19:45:08.130 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 19:45:08.131513 coreos-metadata[1762]: Feb 13 19:45:08.131 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Feb 13 19:45:08.136457 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:45:08.139964 dbus-daemon[1763]: [system] SELinux support is enabled
Feb 13 19:45:08.140115 jq[1766]: false
Feb 13 19:45:08.145894 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:45:08.146564 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:45:08.153930 extend-filesystems[1768]: Found loop4
Feb 13 19:45:08.153930 extend-filesystems[1768]: Found loop5
Feb 13 19:45:08.179562 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Feb 13 19:45:08.179586 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1576)
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found loop6
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found loop7
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found sda
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found sda1
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found sda2
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found sda3
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found usr
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found sda4
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found sda6
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found sda7
Feb 13 19:45:08.179618 extend-filesystems[1768]: Found sda9
Feb 13 19:45:08.179618 extend-filesystems[1768]: Checking size of /dev/sda9
Feb 13 19:45:08.179618 extend-filesystems[1768]: Resized partition /dev/sda9
Feb 13 19:45:08.369831 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 19:45:08.369952 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Feb 13 19:45:08.369963 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 13 19:45:08.369973 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Feb 13 19:45:08.369982 kernel: bond0: active interface up!
Feb 13 19:45:08.157591 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:45:08.370055 extend-filesystems[1777]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:45:08.180275 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:45:08.208457 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:45:08.229555 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:45:08.240630 systemd-networkd[1720]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 19:45:08.387136 sshd_keygen[1792]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:45:08.242010 systemd-networkd[1720]: enp1s0f0np0: Link UP Feb 13 19:45:08.387247 update_engine[1793]: I20250213 19:45:08.321534 1793 main.cc:92] Flatcar Update Engine starting Feb 13 19:45:08.387247 update_engine[1793]: I20250213 19:45:08.322157 1793 update_check_scheduler.cc:74] Next update check in 4m56s Feb 13 19:45:08.242212 systemd-networkd[1720]: enp1s0f0np0: Gained carrier Feb 13 19:45:08.387408 jq[1794]: true Feb 13 19:45:08.259753 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Feb 13 19:45:08.264002 systemd-networkd[1720]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:8e:40.network. Feb 13 19:45:08.264185 systemd-networkd[1720]: enp1s0f1np1: Link UP Feb 13 19:45:08.264370 systemd-networkd[1720]: enp1s0f1np1: Gained carrier Feb 13 19:45:08.276910 systemd-networkd[1720]: bond0: Link UP Feb 13 19:45:08.277091 systemd-networkd[1720]: bond0: Gained carrier Feb 13 19:45:08.277215 systemd-timesyncd[1722]: Network configuration changed, trying to establish connection. Feb 13 19:45:08.277506 systemd-timesyncd[1722]: Network configuration changed, trying to establish connection. Feb 13 19:45:08.277689 systemd-timesyncd[1722]: Network configuration changed, trying to establish connection. Feb 13 19:45:08.277774 systemd-timesyncd[1722]: Network configuration changed, trying to establish connection. Feb 13 19:45:08.294954 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 19:45:08.295376 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:45:08.305448 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:45:08.313245 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:45:08.317604 systemd-logind[1788]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 19:45:08.317616 systemd-logind[1788]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 19:45:08.317627 systemd-logind[1788]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 19:45:08.317779 systemd-logind[1788]: New seat seat0. Feb 13 19:45:08.362992 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:45:08.378964 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:45:08.416025 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:45:08.416112 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:45:08.416281 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:45:08.416360 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:45:08.426342 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:45:08.426425 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:45:08.437054 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Feb 13 19:45:08.451748 (ntainerd)[1805]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:45:08.453235 jq[1804]: true Feb 13 19:45:08.456245 dbus-daemon[1763]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:45:08.457816 tar[1803]: linux-amd64/helm Feb 13 19:45:08.461805 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 19:45:08.461910 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Feb 13 19:45:08.461995 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:45:08.477795 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 19:45:08.485524 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:45:08.493862 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:45:08.493959 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:45:08.504928 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:45:08.505011 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:45:08.521674 bash[1834]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:45:08.525955 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:45:08.537776 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:45:08.549219 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:45:08.549347 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Feb 13 19:45:08.550776 locksmithd[1842]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:45:08.574049 systemd[1]: Starting sshkeys.service... Feb 13 19:45:08.581594 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:45:08.602133 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:45:08.613736 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:45:08.621618 containerd[1805]: time="2025-02-13T19:45:08.621550308Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:45:08.625262 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:45:08.634777 containerd[1805]: time="2025-02-13T19:45:08.634753214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635600 containerd[1805]: time="2025-02-13T19:45:08.635583605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635600 containerd[1805]: time="2025-02-13T19:45:08.635599551Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:45:08.635644 containerd[1805]: time="2025-02-13T19:45:08.635608835Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:45:08.635702 containerd[1805]: time="2025-02-13T19:45:08.635694178Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 19:45:08.635722 containerd[1805]: time="2025-02-13T19:45:08.635704172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635746 containerd[1805]: time="2025-02-13T19:45:08.635736948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635746 containerd[1805]: time="2025-02-13T19:45:08.635744877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635900 containerd[1805]: time="2025-02-13T19:45:08.635839683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635900 containerd[1805]: time="2025-02-13T19:45:08.635848793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635900 containerd[1805]: time="2025-02-13T19:45:08.635856386Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635900 containerd[1805]: time="2025-02-13T19:45:08.635861899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:45:08.635975 containerd[1805]: time="2025-02-13T19:45:08.635903630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:45:08.636023 containerd[1805]: time="2025-02-13T19:45:08.636015558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:45:08.636086 containerd[1805]: time="2025-02-13T19:45:08.636075024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:45:08.636106 containerd[1805]: time="2025-02-13T19:45:08.636087344Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:45:08.636150 containerd[1805]: time="2025-02-13T19:45:08.636141735Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:45:08.636183 containerd[1805]: time="2025-02-13T19:45:08.636176476Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:45:08.636573 coreos-metadata[1856]: Feb 13 19:45:08.636 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 19:45:08.641201 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:45:08.646018 containerd[1805]: time="2025-02-13T19:45:08.646003247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:45:08.646050 containerd[1805]: time="2025-02-13T19:45:08.646030255Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:45:08.646050 containerd[1805]: time="2025-02-13T19:45:08.646040505Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:45:08.646098 containerd[1805]: time="2025-02-13T19:45:08.646050429Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:45:08.646098 containerd[1805]: time="2025-02-13T19:45:08.646058460Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 19:45:08.646142 containerd[1805]: time="2025-02-13T19:45:08.646131887Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:45:08.646304 containerd[1805]: time="2025-02-13T19:45:08.646259429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:45:08.646336 containerd[1805]: time="2025-02-13T19:45:08.646312571Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:45:08.646336 containerd[1805]: time="2025-02-13T19:45:08.646321897Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:45:08.646336 containerd[1805]: time="2025-02-13T19:45:08.646330394Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:45:08.646392 containerd[1805]: time="2025-02-13T19:45:08.646337849Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:45:08.646392 containerd[1805]: time="2025-02-13T19:45:08.646345051Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:45:08.646392 containerd[1805]: time="2025-02-13T19:45:08.646351626Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:45:08.646392 containerd[1805]: time="2025-02-13T19:45:08.646359157Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:45:08.646392 containerd[1805]: time="2025-02-13T19:45:08.646367184Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 19:45:08.646392 containerd[1805]: time="2025-02-13T19:45:08.646377308Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:45:08.646392 containerd[1805]: time="2025-02-13T19:45:08.646384731Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:45:08.646392 containerd[1805]: time="2025-02-13T19:45:08.646391289Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646405335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646413483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646420545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646427978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646434546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646441632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646447834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646454405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646461714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646469487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646476171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646482606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646488994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646496610Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:45:08.646522 containerd[1805]: time="2025-02-13T19:45:08.646507965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646760 containerd[1805]: time="2025-02-13T19:45:08.646515305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646760 containerd[1805]: time="2025-02-13T19:45:08.646521708Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:45:08.646878 containerd[1805]: time="2025-02-13T19:45:08.646869613Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:45:08.646906 containerd[1805]: time="2025-02-13T19:45:08.646883835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:45:08.646906 containerd[1805]: time="2025-02-13T19:45:08.646890573Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:45:08.646906 containerd[1805]: time="2025-02-13T19:45:08.646897544Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:45:08.646906 containerd[1805]: time="2025-02-13T19:45:08.646902854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:45:08.646975 containerd[1805]: time="2025-02-13T19:45:08.646910353Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:45:08.646975 containerd[1805]: time="2025-02-13T19:45:08.646916731Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:45:08.646975 containerd[1805]: time="2025-02-13T19:45:08.646925526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:45:08.647112 containerd[1805]: time="2025-02-13T19:45:08.647088085Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:45:08.647201 containerd[1805]: time="2025-02-13T19:45:08.647117094Z" level=info msg="Connect containerd service" Feb 13 19:45:08.647201 containerd[1805]: time="2025-02-13T19:45:08.647134314Z" level=info msg="using legacy CRI server" Feb 13 19:45:08.647201 containerd[1805]: time="2025-02-13T19:45:08.647138518Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:45:08.647201 containerd[1805]: time="2025-02-13T19:45:08.647197410Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:45:08.647485 containerd[1805]: time="2025-02-13T19:45:08.647474312Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:45:08.647583 containerd[1805]: time="2025-02-13T19:45:08.647563323Z" level=info msg="Start subscribing containerd event" Feb 13 19:45:08.647605 containerd[1805]: time="2025-02-13T19:45:08.647595827Z" level=info msg="Start recovering state" Feb 13 19:45:08.647648 containerd[1805]: time="2025-02-13T19:45:08.647641544Z" level=info msg="Start event monitor" Feb 13 19:45:08.647666 containerd[1805]: time="2025-02-13T19:45:08.647646088Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:45:08.647683 containerd[1805]: time="2025-02-13T19:45:08.647671598Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:45:08.647709 containerd[1805]: time="2025-02-13T19:45:08.647652082Z" level=info msg="Start snapshots syncer" Feb 13 19:45:08.647709 containerd[1805]: time="2025-02-13T19:45:08.647693774Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:45:08.647709 containerd[1805]: time="2025-02-13T19:45:08.647698665Z" level=info msg="Start streaming server" Feb 13 19:45:08.647751 containerd[1805]: time="2025-02-13T19:45:08.647724818Z" level=info msg="containerd successfully booted in 0.026999s" Feb 13 19:45:08.649675 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Feb 13 19:45:08.659020 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:45:08.667149 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:45:08.723501 tar[1803]: linux-amd64/LICENSE Feb 13 19:45:08.723501 tar[1803]: linux-amd64/README.md Feb 13 19:45:08.729848 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 19:45:08.753099 extend-filesystems[1777]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 19:45:08.753099 extend-filesystems[1777]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 19:45:08.753099 extend-filesystems[1777]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 19:45:08.794822 extend-filesystems[1768]: Resized filesystem in /dev/sda9 Feb 13 19:45:08.794822 extend-filesystems[1768]: Found sdb Feb 13 19:45:08.753899 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:45:08.753998 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:45:08.807061 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 13 19:45:09.131592 coreos-metadata[1762]: Feb 13 19:45:09.131 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 19:45:09.786972 systemd-networkd[1720]: bond0: Gained IPv6LL Feb 13 19:45:09.787203 systemd-timesyncd[1722]: Network configuration changed, trying to establish connection. Feb 13 19:45:09.979063 systemd-timesyncd[1722]: Network configuration changed, trying to establish connection. Feb 13 19:45:09.979159 systemd-timesyncd[1722]: Network configuration changed, trying to establish connection. Feb 13 19:45:09.980453 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:45:09.992206 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:45:10.012972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:45:10.023476 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:45:10.041716 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:45:10.661610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:45:10.673298 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:45:10.930890 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Feb 13 19:45:10.931036 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Feb 13 19:45:11.143089 kubelet[1897]: E0213 19:45:11.143013 1897 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:45:11.144187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:45:11.144264 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:45:11.877200 coreos-metadata[1856]: Feb 13 19:45:11.877 INFO Fetch successful Feb 13 19:45:11.948105 coreos-metadata[1762]: Feb 13 19:45:11.948 INFO Fetch successful Feb 13 19:45:11.958635 unknown[1856]: wrote ssh authorized keys file for user: core Feb 13 19:45:11.980983 update-ssh-keys[1918]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:45:11.981268 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:45:11.994697 systemd[1]: Finished sshkeys.service. Feb 13 19:45:12.006603 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:45:12.017952 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:45:12.038179 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Feb 13 19:45:12.051960 systemd[1]: Started sshd@0-147.28.180.89:22-139.178.89.65:45068.service - OpenSSH per-connection server daemon (139.178.89.65:45068). 
Feb 13 19:45:12.109794 sshd[1928]: Accepted publickey for core from 139.178.89.65 port 45068 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 19:45:12.110984 sshd-session[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:12.116632 systemd-logind[1788]: New session 1 of user core. Feb 13 19:45:12.117242 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:45:12.137091 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:45:12.151311 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:45:12.181113 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:45:12.192154 (systemd)[1932]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:45:12.269071 systemd[1932]: Queued start job for default target default.target. Feb 13 19:45:12.280446 systemd[1932]: Created slice app.slice - User Application Slice. Feb 13 19:45:12.280460 systemd[1932]: Reached target paths.target - Paths. Feb 13 19:45:12.280469 systemd[1932]: Reached target timers.target - Timers. Feb 13 19:45:12.281092 systemd[1932]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:45:12.286951 systemd[1932]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:45:12.286978 systemd[1932]: Reached target sockets.target - Sockets. Feb 13 19:45:12.286987 systemd[1932]: Reached target basic.target - Basic System. Feb 13 19:45:12.287008 systemd[1932]: Reached target default.target - Main User Target. Feb 13 19:45:12.287025 systemd[1932]: Startup finished in 90ms. Feb 13 19:45:12.287091 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:45:12.298720 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 19:45:12.366680 systemd[1]: Started sshd@1-147.28.180.89:22-139.178.89.65:45084.service - OpenSSH per-connection server daemon (139.178.89.65:45084). Feb 13 19:45:12.406214 sshd[1943]: Accepted publickey for core from 139.178.89.65 port 45084 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 19:45:12.406784 sshd-session[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:12.409428 systemd-logind[1788]: New session 2 of user core. Feb 13 19:45:12.418916 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:45:12.481238 sshd[1945]: Connection closed by 139.178.89.65 port 45084 Feb 13 19:45:12.481410 sshd-session[1943]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:12.500518 systemd[1]: sshd@1-147.28.180.89:22-139.178.89.65:45084.service: Deactivated successfully. Feb 13 19:45:12.501352 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:45:12.502114 systemd-logind[1788]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:45:12.502880 systemd[1]: Started sshd@2-147.28.180.89:22-139.178.89.65:45098.service - OpenSSH per-connection server daemon (139.178.89.65:45098). Feb 13 19:45:12.514645 systemd-logind[1788]: Removed session 2. Feb 13 19:45:12.571603 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Feb 13 19:45:12.580379 sshd[1950]: Accepted publickey for core from 139.178.89.65 port 45098 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 19:45:12.581653 sshd-session[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:12.583543 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:45:12.593469 systemd[1]: Startup finished in 2.683s (kernel) + 18.762s (initrd) + 8.062s (userspace) = 29.507s. Feb 13 19:45:12.596373 systemd-logind[1788]: New session 3 of user core. 
Feb 13 19:45:12.601433 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:45:12.606244 agetty[1870]: failed to open credentials directory Feb 13 19:45:12.606262 agetty[1875]: failed to open credentials directory Feb 13 19:45:12.610965 login[1875]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 19:45:12.613562 systemd-logind[1788]: New session 4 of user core. Feb 13 19:45:12.614179 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:45:12.616463 login[1870]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 19:45:12.618546 systemd-logind[1788]: New session 5 of user core. Feb 13 19:45:12.619126 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:45:12.648952 sshd[1953]: Connection closed by 139.178.89.65 port 45098 Feb 13 19:45:12.649131 sshd-session[1950]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:12.650791 systemd[1]: sshd@2-147.28.180.89:22-139.178.89.65:45098.service: Deactivated successfully. Feb 13 19:45:12.651563 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:45:12.652030 systemd-logind[1788]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:45:12.652652 systemd-logind[1788]: Removed session 3. Feb 13 19:45:21.272203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:45:21.282044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:45:21.469897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:45:21.474242 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:45:21.542806 kubelet[1989]: E0213 19:45:21.542680 1989 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:45:21.544974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:45:21.545049 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:45:22.665215 systemd[1]: Started sshd@3-147.28.180.89:22-139.178.89.65:52914.service - OpenSSH per-connection server daemon (139.178.89.65:52914).
Feb 13 19:45:22.701111 sshd[2011]: Accepted publickey for core from 139.178.89.65 port 52914 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:45:22.701750 sshd-session[2011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:22.704404 systemd-logind[1788]: New session 6 of user core.
Feb 13 19:45:22.714049 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:45:22.765607 sshd[2013]: Connection closed by 139.178.89.65 port 52914
Feb 13 19:45:22.765823 sshd-session[2011]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:22.777413 systemd[1]: sshd@3-147.28.180.89:22-139.178.89.65:52914.service: Deactivated successfully.
Feb 13 19:45:22.778168 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:45:22.778798 systemd-logind[1788]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:45:22.779515 systemd[1]: Started sshd@4-147.28.180.89:22-139.178.89.65:52920.service - OpenSSH per-connection server daemon (139.178.89.65:52920).
Feb 13 19:45:22.780128 systemd-logind[1788]: Removed session 6.
Feb 13 19:45:22.819390 sshd[2018]: Accepted publickey for core from 139.178.89.65 port 52920 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:45:22.820189 sshd-session[2018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:22.823426 systemd-logind[1788]: New session 7 of user core.
Feb 13 19:45:22.843039 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:45:22.893749 sshd[2020]: Connection closed by 139.178.89.65 port 52920
Feb 13 19:45:22.893896 sshd-session[2018]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:22.904415 systemd[1]: sshd@4-147.28.180.89:22-139.178.89.65:52920.service: Deactivated successfully.
Feb 13 19:45:22.905172 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:45:22.905891 systemd-logind[1788]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:45:22.906544 systemd[1]: Started sshd@5-147.28.180.89:22-139.178.89.65:52924.service - OpenSSH per-connection server daemon (139.178.89.65:52924).
Feb 13 19:45:22.907046 systemd-logind[1788]: Removed session 7.
Feb 13 19:45:22.947281 sshd[2025]: Accepted publickey for core from 139.178.89.65 port 52924 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:45:22.948103 sshd-session[2025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:22.951551 systemd-logind[1788]: New session 8 of user core.
Feb 13 19:45:22.963026 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:45:23.024334 sshd[2027]: Connection closed by 139.178.89.65 port 52924
Feb 13 19:45:23.025013 sshd-session[2025]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:23.047242 systemd[1]: sshd@5-147.28.180.89:22-139.178.89.65:52924.service: Deactivated successfully.
Feb 13 19:45:23.050869 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:45:23.054147 systemd-logind[1788]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:45:23.065559 systemd[1]: Started sshd@6-147.28.180.89:22-139.178.89.65:52930.service - OpenSSH per-connection server daemon (139.178.89.65:52930).
Feb 13 19:45:23.068065 systemd-logind[1788]: Removed session 8.
Feb 13 19:45:23.130888 sshd[2032]: Accepted publickey for core from 139.178.89.65 port 52930 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:45:23.131500 sshd-session[2032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:23.134108 systemd-logind[1788]: New session 9 of user core.
Feb 13 19:45:23.148075 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:45:23.204084 sudo[2035]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:45:23.204233 sudo[2035]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:45:23.217378 sudo[2035]: pam_unix(sudo:session): session closed for user root
Feb 13 19:45:23.218000 sshd[2034]: Connection closed by 139.178.89.65 port 52930
Feb 13 19:45:23.218216 sshd-session[2032]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:23.230750 systemd[1]: sshd@6-147.28.180.89:22-139.178.89.65:52930.service: Deactivated successfully.
Feb 13 19:45:23.231650 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:45:23.232494 systemd-logind[1788]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:45:23.233423 systemd[1]: Started sshd@7-147.28.180.89:22-139.178.89.65:52944.service - OpenSSH per-connection server daemon (139.178.89.65:52944).
Feb 13 19:45:23.234095 systemd-logind[1788]: Removed session 9.
Feb 13 19:45:23.288442 sshd[2040]: Accepted publickey for core from 139.178.89.65 port 52944 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:45:23.289450 sshd-session[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:23.293151 systemd-logind[1788]: New session 10 of user core.
Feb 13 19:45:23.317132 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:45:23.375937 sudo[2044]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:45:23.376083 sudo[2044]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:45:23.378093 sudo[2044]: pam_unix(sudo:session): session closed for user root
Feb 13 19:45:23.380687 sudo[2043]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 19:45:23.380836 sudo[2043]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:45:23.395127 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:45:23.415672 augenrules[2066]: No rules
Feb 13 19:45:23.416285 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:45:23.416454 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:45:23.417311 sudo[2043]: pam_unix(sudo:session): session closed for user root
Feb 13 19:45:23.418452 sshd[2042]: Connection closed by 139.178.89.65 port 52944
Feb 13 19:45:23.418742 sshd-session[2040]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:23.422279 systemd[1]: sshd@7-147.28.180.89:22-139.178.89.65:52944.service: Deactivated successfully.
Feb 13 19:45:23.423896 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:45:23.424857 systemd-logind[1788]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:45:23.426885 systemd[1]: Started sshd@8-147.28.180.89:22-139.178.89.65:52954.service - OpenSSH per-connection server daemon (139.178.89.65:52954).
Feb 13 19:45:23.428127 systemd-logind[1788]: Removed session 10.
Feb 13 19:45:23.502292 sshd[2074]: Accepted publickey for core from 139.178.89.65 port 52954 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:45:23.503146 sshd-session[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:23.506441 systemd-logind[1788]: New session 11 of user core.
Feb 13 19:45:23.516038 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:45:23.576142 sudo[2077]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:45:23.576947 sudo[2077]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:45:23.942149 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 19:45:23.942221 (dockerd)[2106]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 19:45:24.184184 dockerd[2106]: time="2025-02-13T19:45:24.184150768Z" level=info msg="Starting up"
Feb 13 19:45:24.265588 dockerd[2106]: time="2025-02-13T19:45:24.265516032Z" level=info msg="Loading containers: start."
Feb 13 19:45:24.384824 kernel: Initializing XFRM netlink socket
Feb 13 19:45:24.400443 systemd-timesyncd[1722]: Network configuration changed, trying to establish connection.
Feb 13 19:45:24.476494 systemd-networkd[1720]: docker0: Link UP
Feb 13 19:45:24.507877 dockerd[2106]: time="2025-02-13T19:45:24.507815701Z" level=info msg="Loading containers: done."
Feb 13 19:45:24.519442 dockerd[2106]: time="2025-02-13T19:45:24.519394252Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 19:45:24.519506 dockerd[2106]: time="2025-02-13T19:45:24.519442466Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 19:45:24.519506 dockerd[2106]: time="2025-02-13T19:45:24.519499344Z" level=info msg="Daemon has completed initialization"
Feb 13 19:45:24.519781 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2743498079-merged.mount: Deactivated successfully.
Feb 13 19:45:24.534098 dockerd[2106]: time="2025-02-13T19:45:24.534073427Z" level=info msg="API listen on /run/docker.sock"
Feb 13 19:45:24.534160 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 19:45:25.277529 systemd-resolved[1721]: Clock change detected. Flushing caches.
Feb 13 19:45:25.277615 systemd-timesyncd[1722]: Contacted time server [2604:2dc0:101:200::151]:123 (2.flatcar.pool.ntp.org).
Feb 13 19:45:25.277654 systemd-timesyncd[1722]: Initial clock synchronization to Thu 2025-02-13 19:45:25.277461 UTC.
Feb 13 19:45:26.157348 containerd[1805]: time="2025-02-13T19:45:26.157326299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 19:45:26.825726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759187658.mount: Deactivated successfully.
Feb 13 19:45:27.649002 containerd[1805]: time="2025-02-13T19:45:27.648948549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:27.649203 containerd[1805]: time="2025-02-13T19:45:27.649056686Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214"
Feb 13 19:45:27.649637 containerd[1805]: time="2025-02-13T19:45:27.649598013Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:27.651227 containerd[1805]: time="2025-02-13T19:45:27.651187000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:27.651869 containerd[1805]: time="2025-02-13T19:45:27.651828151Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.494480729s"
Feb 13 19:45:27.651869 containerd[1805]: time="2025-02-13T19:45:27.651846268Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\""
Feb 13 19:45:27.662665 containerd[1805]: time="2025-02-13T19:45:27.662602852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 19:45:28.781328 containerd[1805]: time="2025-02-13T19:45:28.781301655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:28.781548 containerd[1805]: time="2025-02-13T19:45:28.781503689Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545"
Feb 13 19:45:28.781938 containerd[1805]: time="2025-02-13T19:45:28.781903115Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:28.783944 containerd[1805]: time="2025-02-13T19:45:28.783903587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:28.784432 containerd[1805]: time="2025-02-13T19:45:28.784388570Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.121763532s"
Feb 13 19:45:28.784432 containerd[1805]: time="2025-02-13T19:45:28.784404388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\""
Feb 13 19:45:28.795426 containerd[1805]: time="2025-02-13T19:45:28.795398616Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 19:45:29.696883 containerd[1805]: time="2025-02-13T19:45:29.696828146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:29.697007 containerd[1805]: time="2025-02-13T19:45:29.696984501Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130"
Feb 13 19:45:29.697609 containerd[1805]: time="2025-02-13T19:45:29.697598679Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:29.699256 containerd[1805]: time="2025-02-13T19:45:29.699214441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:29.699858 containerd[1805]: time="2025-02-13T19:45:29.699816191Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 904.395642ms"
Feb 13 19:45:29.699858 containerd[1805]: time="2025-02-13T19:45:29.699833750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\""
Feb 13 19:45:29.710471 containerd[1805]: time="2025-02-13T19:45:29.710449174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 19:45:30.523896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264375326.mount: Deactivated successfully.
Feb 13 19:45:30.703990 containerd[1805]: time="2025-02-13T19:45:30.703935513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:30.704188 containerd[1805]: time="2025-02-13T19:45:30.704150741Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858"
Feb 13 19:45:30.704736 containerd[1805]: time="2025-02-13T19:45:30.704706828Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:30.706047 containerd[1805]: time="2025-02-13T19:45:30.706006273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:30.706764 containerd[1805]: time="2025-02-13T19:45:30.706724176Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 996.253707ms"
Feb 13 19:45:30.706764 containerd[1805]: time="2025-02-13T19:45:30.706741559Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\""
Feb 13 19:45:30.717649 containerd[1805]: time="2025-02-13T19:45:30.717626386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 19:45:31.213093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210553573.mount: Deactivated successfully.
Feb 13 19:45:31.697000 containerd[1805]: time="2025-02-13T19:45:31.696932597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:31.697218 containerd[1805]: time="2025-02-13T19:45:31.697174321Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 19:45:31.697661 containerd[1805]: time="2025-02-13T19:45:31.697619816Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:31.699251 containerd[1805]: time="2025-02-13T19:45:31.699209233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:31.699869 containerd[1805]: time="2025-02-13T19:45:31.699827506Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 982.179449ms"
Feb 13 19:45:31.699869 containerd[1805]: time="2025-02-13T19:45:31.699842705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 19:45:31.711132 containerd[1805]: time="2025-02-13T19:45:31.711111460Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 19:45:32.207335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964337773.mount: Deactivated successfully.
Feb 13 19:45:32.208525 containerd[1805]: time="2025-02-13T19:45:32.208508913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:32.208681 containerd[1805]: time="2025-02-13T19:45:32.208661561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Feb 13 19:45:32.209101 containerd[1805]: time="2025-02-13T19:45:32.209088590Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:32.210275 containerd[1805]: time="2025-02-13T19:45:32.210235405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:32.211124 containerd[1805]: time="2025-02-13T19:45:32.211078221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 499.94679ms"
Feb 13 19:45:32.211124 containerd[1805]: time="2025-02-13T19:45:32.211091019Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 19:45:32.223093 containerd[1805]: time="2025-02-13T19:45:32.223071894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 19:45:32.459144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:45:32.476757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:45:32.697443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:45:32.700349 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:45:32.721311 kubelet[2532]: E0213 19:45:32.721239 2532 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:45:32.722415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:45:32.722504 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:45:32.892187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230472562.mount: Deactivated successfully.
Feb 13 19:45:34.010311 containerd[1805]: time="2025-02-13T19:45:34.010257078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:34.010526 containerd[1805]: time="2025-02-13T19:45:34.010487767Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Feb 13 19:45:34.010935 containerd[1805]: time="2025-02-13T19:45:34.010896362Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:34.012995 containerd[1805]: time="2025-02-13T19:45:34.012954215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:45:34.013523 containerd[1805]: time="2025-02-13T19:45:34.013476300Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.790379629s"
Feb 13 19:45:34.013523 containerd[1805]: time="2025-02-13T19:45:34.013497638Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Feb 13 19:45:35.946577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:45:35.956783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:45:35.966134 systemd[1]: Reloading requested from client PID 2748 ('systemctl') (unit session-11.scope)...
Feb 13 19:45:35.966141 systemd[1]: Reloading...
Feb 13 19:45:36.030450 zram_generator::config[2787]: No configuration found.
Feb 13 19:45:36.097323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:45:36.158798 systemd[1]: Reloading finished in 192 ms.
Feb 13 19:45:36.205715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:45:36.207166 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:45:36.208054 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:45:36.208153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:45:36.209017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:45:36.412800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:45:36.417635 (kubelet)[2856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:45:36.440056 kubelet[2856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:45:36.440056 kubelet[2856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:45:36.440056 kubelet[2856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:45:36.441107 kubelet[2856]: I0213 19:45:36.441063 2856 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:45:36.570608 kubelet[2856]: I0213 19:45:36.570562 2856 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 19:45:36.570608 kubelet[2856]: I0213 19:45:36.570573 2856 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:45:36.570717 kubelet[2856]: I0213 19:45:36.570672 2856 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 19:45:36.581060 kubelet[2856]: E0213 19:45:36.581021 2856 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.180.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.180.89:6443: connect: connection refused
Feb 13 19:45:36.581480 kubelet[2856]: I0213 19:45:36.581469 2856 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:45:36.597713 kubelet[2856]: I0213 19:45:36.597683 2856 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:45:36.600122 kubelet[2856]: I0213 19:45:36.600081 2856 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:45:36.600249 kubelet[2856]: I0213 19:45:36.600117 2856 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.1-a-a8b3a25f31","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 19:45:36.600672 kubelet[2856]: I0213 19:45:36.600644 2856 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:45:36.600672 kubelet[2856]: I0213 19:45:36.600667 2856 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 19:45:36.600780 kubelet[2856]: I0213 19:45:36.600730 2856 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:45:36.601495 kubelet[2856]: I0213 19:45:36.601441 2856 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 19:45:36.601495 kubelet[2856]: I0213 19:45:36.601470 2856 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:45:36.601495 kubelet[2856]: I0213 19:45:36.601497 2856 kubelet.go:312] "Adding apiserver pod source"
Feb 13 19:45:36.601640 kubelet[2856]: I0213 19:45:36.601506 2856 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:45:36.601896 kubelet[2856]: W0213 19:45:36.601870 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-a8b3a25f31&limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused
Feb 13 19:45:36.601929 kubelet[2856]: E0213 19:45:36.601905 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-a8b3a25f31&limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused
Feb 13 19:45:36.601929 kubelet[2856]: W0213 19:45:36.601870 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.89:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused
Feb 13 19:45:36.601929 kubelet[2856]: E0213 19:45:36.601917 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.89:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused Feb 13 19:45:36.604881 kubelet[2856]: I0213 19:45:36.604855 2856 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:45:36.606066 kubelet[2856]: I0213 19:45:36.606057 2856 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:45:36.606100 kubelet[2856]: W0213 19:45:36.606082 2856 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:45:36.606402 kubelet[2856]: I0213 19:45:36.606394 2856 server.go:1264] "Started kubelet" Feb 13 19:45:36.606471 kubelet[2856]: I0213 19:45:36.606460 2856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:45:36.606562 kubelet[2856]: I0213 19:45:36.606519 2856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:45:36.606744 kubelet[2856]: I0213 19:45:36.606734 2856 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:45:36.607266 kubelet[2856]: I0213 19:45:36.607237 2856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:45:36.607302 kubelet[2856]: I0213 19:45:36.607288 2856 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:45:36.607494 kubelet[2856]: E0213 19:45:36.607462 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found" Feb 13 19:45:36.607494 kubelet[2856]: I0213 19:45:36.607488 2856 reconciler.go:26] "Reconciler: start to sync state" Feb 13 
19:45:36.607612 kubelet[2856]: W0213 19:45:36.607546 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused Feb 13 19:45:36.607612 kubelet[2856]: E0213 19:45:36.607599 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused Feb 13 19:45:36.607662 kubelet[2856]: I0213 19:45:36.607648 2856 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:45:36.607793 kubelet[2856]: E0213 19:45:36.607772 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-a8b3a25f31?timeout=10s\": dial tcp 147.28.180.89:6443: connect: connection refused" interval="200ms" Feb 13 19:45:36.607793 kubelet[2856]: I0213 19:45:36.607791 2856 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:45:36.610069 kubelet[2856]: I0213 19:45:36.610054 2856 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:45:36.610150 kubelet[2856]: I0213 19:45:36.610135 2856 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:45:36.611487 kubelet[2856]: I0213 19:45:36.611475 2856 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:45:36.612028 kubelet[2856]: E0213 19:45:36.612017 2856 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:45:36.613234 kubelet[2856]: E0213 19:45:36.613181 2856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.89:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.89:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.1-a-a8b3a25f31.1823dc25f0c32c03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-a-a8b3a25f31,UID:ci-4186.1.1-a-a8b3a25f31,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-a-a8b3a25f31,},FirstTimestamp:2025-02-13 19:45:36.606366723 +0000 UTC m=+0.186777785,LastTimestamp:2025-02-13 19:45:36.606366723 +0000 UTC m=+0.186777785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-a-a8b3a25f31,}" Feb 13 19:45:36.618129 kubelet[2856]: I0213 19:45:36.618110 2856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:45:36.618749 kubelet[2856]: I0213 19:45:36.618739 2856 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:45:36.618777 kubelet[2856]: I0213 19:45:36.618756 2856 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:45:36.618777 kubelet[2856]: I0213 19:45:36.618766 2856 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:45:36.618806 kubelet[2856]: E0213 19:45:36.618785 2856 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:45:36.619083 kubelet[2856]: W0213 19:45:36.619058 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused Feb 13 19:45:36.619128 kubelet[2856]: E0213 19:45:36.619091 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused Feb 13 19:45:36.633172 kubelet[2856]: I0213 19:45:36.633133 2856 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:45:36.633172 kubelet[2856]: I0213 19:45:36.633141 2856 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:45:36.633172 kubelet[2856]: I0213 19:45:36.633152 2856 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:45:36.634246 kubelet[2856]: I0213 19:45:36.634211 2856 policy_none.go:49] "None policy: Start" Feb 13 19:45:36.634405 kubelet[2856]: I0213 19:45:36.634396 2856 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:45:36.634435 kubelet[2856]: I0213 19:45:36.634416 2856 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:45:36.637194 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 13 19:45:36.656409 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 19:45:36.658628 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 19:45:36.671202 kubelet[2856]: I0213 19:45:36.671159 2856 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:45:36.671366 kubelet[2856]: I0213 19:45:36.671312 2856 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:45:36.671440 kubelet[2856]: I0213 19:45:36.671404 2856 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:45:36.672132 kubelet[2856]: E0213 19:45:36.672104 2856 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:36.711747 kubelet[2856]: I0213 19:45:36.711687 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.712495 kubelet[2856]: E0213 19:45:36.712400 2856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.89:6443/api/v1/nodes\": dial tcp 147.28.180.89:6443: connect: connection refused" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.719769 kubelet[2856]: I0213 19:45:36.719647 2856 topology_manager.go:215] "Topology Admit Handler" podUID="57f9d83b0ade17eb9cfabe6a5c1be627" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.723142 kubelet[2856]: I0213 19:45:36.723085 2856 topology_manager.go:215] "Topology Admit Handler" podUID="2c11bf11d65c4349ca9049a850ac8f79" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.726744 kubelet[2856]: I0213 19:45:36.726686 2856 topology_manager.go:215] "Topology Admit Handler" podUID="20d751495dc0f627ad51149682451ca8" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.741195 systemd[1]: Created slice kubepods-burstable-pod57f9d83b0ade17eb9cfabe6a5c1be627.slice - libcontainer container kubepods-burstable-pod57f9d83b0ade17eb9cfabe6a5c1be627.slice.
Feb 13 19:45:36.759510 systemd[1]: Created slice kubepods-burstable-pod2c11bf11d65c4349ca9049a850ac8f79.slice - libcontainer container kubepods-burstable-pod2c11bf11d65c4349ca9049a850ac8f79.slice.
Feb 13 19:45:36.778465 systemd[1]: Created slice kubepods-burstable-pod20d751495dc0f627ad51149682451ca8.slice - libcontainer container kubepods-burstable-pod20d751495dc0f627ad51149682451ca8.slice.
Feb 13 19:45:36.808277 kubelet[2856]: I0213 19:45:36.808202 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57f9d83b0ade17eb9cfabe6a5c1be627-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-a-a8b3a25f31\" (UID: \"57f9d83b0ade17eb9cfabe6a5c1be627\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.808778 kubelet[2856]: E0213 19:45:36.808665 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-a8b3a25f31?timeout=10s\": dial tcp 147.28.180.89:6443: connect: connection refused" interval="400ms"
Feb 13 19:45:36.908829 kubelet[2856]: I0213 19:45:36.908552 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57f9d83b0ade17eb9cfabe6a5c1be627-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-a-a8b3a25f31\" (UID: \"57f9d83b0ade17eb9cfabe6a5c1be627\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.908829 kubelet[2856]: I0213 19:45:36.908661 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.908829 kubelet[2856]: I0213 19:45:36.908716 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.908829 kubelet[2856]: I0213 19:45:36.908766 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.908829 kubelet[2856]: I0213 19:45:36.908818 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20d751495dc0f627ad51149682451ca8-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-a-a8b3a25f31\" (UID: \"20d751495dc0f627ad51149682451ca8\") " pod="kube-system/kube-scheduler-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.909597 kubelet[2856]: I0213 19:45:36.908922 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57f9d83b0ade17eb9cfabe6a5c1be627-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-a-a8b3a25f31\" (UID: \"57f9d83b0ade17eb9cfabe6a5c1be627\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.909597 kubelet[2856]: I0213 19:45:36.908990 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.909597 kubelet[2856]: I0213 19:45:36.909057 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.916828 kubelet[2856]: I0213 19:45:36.916778 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:36.917683 kubelet[2856]: E0213 19:45:36.917576 2856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.89:6443/api/v1/nodes\": dial tcp 147.28.180.89:6443: connect: connection refused" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:37.059503 containerd[1805]: time="2025-02-13T19:45:37.059364989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-a-a8b3a25f31,Uid:57f9d83b0ade17eb9cfabe6a5c1be627,Namespace:kube-system,Attempt:0,}"
Feb 13 19:45:37.072472 containerd[1805]: time="2025-02-13T19:45:37.072455101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-a-a8b3a25f31,Uid:2c11bf11d65c4349ca9049a850ac8f79,Namespace:kube-system,Attempt:0,}"
Feb 13 19:45:37.084070 containerd[1805]: time="2025-02-13T19:45:37.084005839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-a-a8b3a25f31,Uid:20d751495dc0f627ad51149682451ca8,Namespace:kube-system,Attempt:0,}"
Feb 13 19:45:37.209541 kubelet[2856]: E0213 19:45:37.209409 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-a8b3a25f31?timeout=10s\": dial tcp 147.28.180.89:6443: connect: connection refused" interval="800ms"
Feb 13 19:45:37.321869 kubelet[2856]: I0213 19:45:37.321812 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:37.322666 kubelet[2856]: E0213 19:45:37.322552 2856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.89:6443/api/v1/nodes\": dial tcp 147.28.180.89:6443: connect: connection refused" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:37.586110 kubelet[2856]: W0213 19:45:37.586003 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-a8b3a25f31&limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused
Feb 13 19:45:37.586110 kubelet[2856]: E0213 19:45:37.586042 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-a8b3a25f31&limit=500&resourceVersion=0": dial tcp 147.28.180.89:6443: connect: connection refused
Feb 13 19:45:37.595609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207204652.mount: Deactivated successfully.
Feb 13 19:45:37.597170 containerd[1805]: time="2025-02-13T19:45:37.597131543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:45:37.597370 containerd[1805]: time="2025-02-13T19:45:37.597353385Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 19:45:37.598333 containerd[1805]: time="2025-02-13T19:45:37.598292734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:45:37.598827 containerd[1805]: time="2025-02-13T19:45:37.598785141Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:45:37.599474 containerd[1805]: time="2025-02-13T19:45:37.599422623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:45:37.600278 containerd[1805]: time="2025-02-13T19:45:37.600266029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:45:37.600402 containerd[1805]: time="2025-02-13T19:45:37.600387295Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:45:37.600996 containerd[1805]: time="2025-02-13T19:45:37.600956104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:45:37.601472 containerd[1805]: time="2025-02-13T19:45:37.601434332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.81313ms"
Feb 13 19:45:37.603376 containerd[1805]: time="2025-02-13T19:45:37.603363943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.870913ms"
Feb 13 19:45:37.603808 containerd[1805]: time="2025-02-13T19:45:37.603796466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.741481ms"
Feb 13 19:45:37.710944 containerd[1805]: time="2025-02-13T19:45:37.710893215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:45:37.710944 containerd[1805]: time="2025-02-13T19:45:37.710921491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:45:37.710944 containerd[1805]: time="2025-02-13T19:45:37.710928916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.710933690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.710954523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.710961937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.710968329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.710949748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.710970315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.710977325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.711000347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:45:37.711131 containerd[1805]: time="2025-02-13T19:45:37.711016879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:45:37.730725 systemd[1]: Started cri-containerd-6f4d877990af6ac6c71bb1308f4f1897d94aadc6aca5be7b04bf7d1eec9b93a7.scope - libcontainer container 6f4d877990af6ac6c71bb1308f4f1897d94aadc6aca5be7b04bf7d1eec9b93a7.
Feb 13 19:45:37.731503 systemd[1]: Started cri-containerd-7cfff5495d32dac26a9c25d42fbdcfefcd6332e9602a0c40b433d968839b8ddb.scope - libcontainer container 7cfff5495d32dac26a9c25d42fbdcfefcd6332e9602a0c40b433d968839b8ddb.
Feb 13 19:45:37.732338 systemd[1]: Started cri-containerd-c89d2fd2b20e5acee82bc77c0c7dae21831067c3bcf0d0f7408ceb8ff40d420d.scope - libcontainer container c89d2fd2b20e5acee82bc77c0c7dae21831067c3bcf0d0f7408ceb8ff40d420d.
Feb 13 19:45:37.757455 containerd[1805]: time="2025-02-13T19:45:37.757412171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-a-a8b3a25f31,Uid:57f9d83b0ade17eb9cfabe6a5c1be627,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f4d877990af6ac6c71bb1308f4f1897d94aadc6aca5be7b04bf7d1eec9b93a7\""
Feb 13 19:45:37.758146 containerd[1805]: time="2025-02-13T19:45:37.758124313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-a-a8b3a25f31,Uid:20d751495dc0f627ad51149682451ca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cfff5495d32dac26a9c25d42fbdcfefcd6332e9602a0c40b433d968839b8ddb\""
Feb 13 19:45:37.759415 containerd[1805]: time="2025-02-13T19:45:37.759396185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-a-a8b3a25f31,Uid:2c11bf11d65c4349ca9049a850ac8f79,Namespace:kube-system,Attempt:0,} returns sandbox id \"c89d2fd2b20e5acee82bc77c0c7dae21831067c3bcf0d0f7408ceb8ff40d420d\""
Feb 13 19:45:37.759902 containerd[1805]: time="2025-02-13T19:45:37.759885779Z" level=info msg="CreateContainer within sandbox \"6f4d877990af6ac6c71bb1308f4f1897d94aadc6aca5be7b04bf7d1eec9b93a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 19:45:37.759969 containerd[1805]: time="2025-02-13T19:45:37.759890825Z" level=info msg="CreateContainer within sandbox \"7cfff5495d32dac26a9c25d42fbdcfefcd6332e9602a0c40b433d968839b8ddb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 19:45:37.760646 containerd[1805]: time="2025-02-13T19:45:37.760632680Z" level=info msg="CreateContainer within sandbox \"c89d2fd2b20e5acee82bc77c0c7dae21831067c3bcf0d0f7408ceb8ff40d420d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 19:45:37.766352 containerd[1805]: time="2025-02-13T19:45:37.766315394Z" level=info msg="CreateContainer within sandbox \"6f4d877990af6ac6c71bb1308f4f1897d94aadc6aca5be7b04bf7d1eec9b93a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af7ecf34646d36ee11d9e29fd4b24f0deafbb2285650932b5f0aaace6cd8c873\""
Feb 13 19:45:37.766631 containerd[1805]: time="2025-02-13T19:45:37.766591211Z" level=info msg="StartContainer for \"af7ecf34646d36ee11d9e29fd4b24f0deafbb2285650932b5f0aaace6cd8c873\""
Feb 13 19:45:37.767365 containerd[1805]: time="2025-02-13T19:45:37.767351896Z" level=info msg="CreateContainer within sandbox \"7cfff5495d32dac26a9c25d42fbdcfefcd6332e9602a0c40b433d968839b8ddb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c2ebe63df3fbd49c11b950ff659540f2f06fedb733188e81f0b23710e926203a\""
Feb 13 19:45:37.767554 containerd[1805]: time="2025-02-13T19:45:37.767542371Z" level=info msg="StartContainer for \"c2ebe63df3fbd49c11b950ff659540f2f06fedb733188e81f0b23710e926203a\""
Feb 13 19:45:37.769283 containerd[1805]: time="2025-02-13T19:45:37.769265557Z" level=info msg="CreateContainer within sandbox \"c89d2fd2b20e5acee82bc77c0c7dae21831067c3bcf0d0f7408ceb8ff40d420d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5bac8a422528d83846662e581569b830cd184ad484bedd3697063ae7946f75de\""
Feb 13 19:45:37.769534 containerd[1805]: time="2025-02-13T19:45:37.769495577Z" level=info msg="StartContainer for \"5bac8a422528d83846662e581569b830cd184ad484bedd3697063ae7946f75de\""
Feb 13 19:45:37.790717 systemd[1]: Started cri-containerd-af7ecf34646d36ee11d9e29fd4b24f0deafbb2285650932b5f0aaace6cd8c873.scope - libcontainer container af7ecf34646d36ee11d9e29fd4b24f0deafbb2285650932b5f0aaace6cd8c873.
Feb 13 19:45:37.791318 systemd[1]: Started cri-containerd-c2ebe63df3fbd49c11b950ff659540f2f06fedb733188e81f0b23710e926203a.scope - libcontainer container c2ebe63df3fbd49c11b950ff659540f2f06fedb733188e81f0b23710e926203a.
Feb 13 19:45:37.792848 systemd[1]: Started cri-containerd-5bac8a422528d83846662e581569b830cd184ad484bedd3697063ae7946f75de.scope - libcontainer container 5bac8a422528d83846662e581569b830cd184ad484bedd3697063ae7946f75de.
Feb 13 19:45:37.815169 containerd[1805]: time="2025-02-13T19:45:37.815144823Z" level=info msg="StartContainer for \"c2ebe63df3fbd49c11b950ff659540f2f06fedb733188e81f0b23710e926203a\" returns successfully"
Feb 13 19:45:37.815273 containerd[1805]: time="2025-02-13T19:45:37.815221494Z" level=info msg="StartContainer for \"af7ecf34646d36ee11d9e29fd4b24f0deafbb2285650932b5f0aaace6cd8c873\" returns successfully"
Feb 13 19:45:37.816314 containerd[1805]: time="2025-02-13T19:45:37.816297947Z" level=info msg="StartContainer for \"5bac8a422528d83846662e581569b830cd184ad484bedd3697063ae7946f75de\" returns successfully"
Feb 13 19:45:38.124305 kubelet[2856]: I0213 19:45:38.124289 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:38.364189 kubelet[2856]: E0213 19:45:38.364155 2856 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.1-a-a8b3a25f31\" not found" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:38.457870 kubelet[2856]: I0213 19:45:38.457818 2856 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.1-a-a8b3a25f31"
Feb 13 19:45:38.462819 kubelet[2856]: E0213 19:45:38.462810 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:38.563586 kubelet[2856]: E0213 19:45:38.563568 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:38.664098 kubelet[2856]: E0213 19:45:38.664053 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:38.765053 kubelet[2856]: E0213 19:45:38.764927 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:38.866087 kubelet[2856]: E0213 19:45:38.865978 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:38.967154 kubelet[2856]: E0213 19:45:38.967054 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.068373 kubelet[2856]: E0213 19:45:39.068148 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.169478 kubelet[2856]: E0213 19:45:39.169358 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.270439 kubelet[2856]: E0213 19:45:39.270309 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.371343 kubelet[2856]: E0213 19:45:39.371250 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.472147 kubelet[2856]: E0213 19:45:39.472083 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.572632 kubelet[2856]: E0213 19:45:39.572576 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.673069 kubelet[2856]: E0213 19:45:39.672972 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.773989 kubelet[2856]: E0213 19:45:39.773862 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.874194 kubelet[2856]: E0213 19:45:39.874079 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:39.974934 kubelet[2856]: E0213 19:45:39.974881 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:40.075924 kubelet[2856]: E0213 19:45:40.075847 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:40.176674 kubelet[2856]: E0213 19:45:40.176601 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-a8b3a25f31\" not found"
Feb 13 19:45:40.356094 systemd[1]: Reloading requested from client PID 3168 ('systemctl') (unit session-11.scope)...
Feb 13 19:45:40.356102 systemd[1]: Reloading...
Feb 13 19:45:40.396456 zram_generator::config[3207]: No configuration found.
Feb 13 19:45:40.483510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:45:40.552722 systemd[1]: Reloading finished in 196 ms.
Feb 13 19:45:40.574679 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:45:40.580027 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:45:40.580130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:45:40.601640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:45:40.824168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:40.826586 (kubelet)[3272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:45:40.850954 kubelet[3272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:45:40.850954 kubelet[3272]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:45:40.850954 kubelet[3272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:45:40.851205 kubelet[3272]: I0213 19:45:40.850986 3272 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:45:40.853670 kubelet[3272]: I0213 19:45:40.853629 3272 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:45:40.853670 kubelet[3272]: I0213 19:45:40.853641 3272 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:45:40.853803 kubelet[3272]: I0213 19:45:40.853766 3272 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:45:40.854614 kubelet[3272]: I0213 19:45:40.854577 3272 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 19:45:40.855787 kubelet[3272]: I0213 19:45:40.855728 3272 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:45:40.865088 kubelet[3272]: I0213 19:45:40.865044 3272 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:45:40.865252 kubelet[3272]: I0213 19:45:40.865203 3272 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:45:40.865360 kubelet[3272]: I0213 19:45:40.865224 3272 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.1-a-a8b3a25f31","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experiment
alMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:45:40.865437 kubelet[3272]: I0213 19:45:40.865369 3272 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:45:40.865437 kubelet[3272]: I0213 19:45:40.865377 3272 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:45:40.865437 kubelet[3272]: I0213 19:45:40.865406 3272 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:45:40.865506 kubelet[3272]: I0213 19:45:40.865474 3272 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:45:40.865506 kubelet[3272]: I0213 19:45:40.865483 3272 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:45:40.865506 kubelet[3272]: I0213 19:45:40.865498 3272 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:45:40.865580 kubelet[3272]: I0213 19:45:40.865509 3272 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:45:40.865911 kubelet[3272]: I0213 19:45:40.865899 3272 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:45:40.866038 kubelet[3272]: I0213 19:45:40.866027 3272 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:45:40.866313 kubelet[3272]: I0213 19:45:40.866304 3272 server.go:1264] "Started kubelet" Feb 13 19:45:40.866383 kubelet[3272]: I0213 19:45:40.866358 3272 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:45:40.866430 kubelet[3272]: I0213 19:45:40.866378 3272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:45:40.866559 kubelet[3272]: I0213 19:45:40.866546 3272 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
Feb 13 19:45:40.868081 kubelet[3272]: I0213 19:45:40.868037 3272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:45:40.868168 kubelet[3272]: I0213 19:45:40.868132 3272 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:45:40.868168 kubelet[3272]: I0213 19:45:40.868145 3272 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:45:40.868629 kubelet[3272]: E0213 19:45:40.868611 3272 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:45:40.868743 kubelet[3272]: I0213 19:45:40.868729 3272 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:45:40.869100 kubelet[3272]: I0213 19:45:40.869091 3272 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:45:40.869658 kubelet[3272]: I0213 19:45:40.869648 3272 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:45:40.869658 kubelet[3272]: I0213 19:45:40.869660 3272 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:45:40.869730 kubelet[3272]: I0213 19:45:40.869708 3272 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:45:40.874183 kubelet[3272]: I0213 19:45:40.874153 3272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:45:40.874735 kubelet[3272]: I0213 19:45:40.874726 3272 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:45:40.874786 kubelet[3272]: I0213 19:45:40.874742 3272 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:45:40.874786 kubelet[3272]: I0213 19:45:40.874753 3272 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:45:40.874786 kubelet[3272]: E0213 19:45:40.874775 3272 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:45:40.883914 kubelet[3272]: I0213 19:45:40.883873 3272 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:45:40.883914 kubelet[3272]: I0213 19:45:40.883882 3272 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:45:40.883914 kubelet[3272]: I0213 19:45:40.883890 3272 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:45:40.884014 kubelet[3272]: I0213 19:45:40.883973 3272 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:45:40.884014 kubelet[3272]: I0213 19:45:40.883981 3272 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:45:40.884014 kubelet[3272]: I0213 19:45:40.883993 3272 policy_none.go:49] "None policy: Start" Feb 13 19:45:40.884245 kubelet[3272]: I0213 19:45:40.884213 3272 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:45:40.884245 kubelet[3272]: I0213 19:45:40.884225 3272 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:45:40.884300 kubelet[3272]: I0213 19:45:40.884295 3272 state_mem.go:75] "Updated machine memory state" Feb 13 19:45:40.886324 kubelet[3272]: I0213 19:45:40.886287 3272 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:45:40.886377 kubelet[3272]: I0213 19:45:40.886363 3272 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:45:40.886414 kubelet[3272]: I0213 19:45:40.886409 3272 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:45:40.975900 kubelet[3272]: I0213 19:45:40.975636 3272 topology_manager.go:215] "Topology Admit Handler" podUID="57f9d83b0ade17eb9cfabe6a5c1be627" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:40.977738 kubelet[3272]: I0213 19:45:40.976800 3272 topology_manager.go:215] "Topology Admit Handler" podUID="2c11bf11d65c4349ca9049a850ac8f79" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:40.977738 kubelet[3272]: I0213 19:45:40.977302 3272 topology_manager.go:215] "Topology Admit Handler" podUID="20d751495dc0f627ad51149682451ca8" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:40.979615 kubelet[3272]: I0213 19:45:40.979552 3272 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:40.989288 kubelet[3272]: I0213 19:45:40.989191 3272 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:40.989540 kubelet[3272]: I0213 19:45:40.989396 3272 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.001461 kubelet[3272]: W0213 19:45:41.001393 3272 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:45:41.002459 kubelet[3272]: W0213 19:45:41.002377 3272 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:45:41.002459 kubelet[3272]: W0213 19:45:41.002413 3272 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:45:41.069597 kubelet[3272]: I0213 19:45:41.069513 3272 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57f9d83b0ade17eb9cfabe6a5c1be627-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-a-a8b3a25f31\" (UID: \"57f9d83b0ade17eb9cfabe6a5c1be627\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.069597 kubelet[3272]: I0213 19:45:41.069611 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.070018 kubelet[3272]: I0213 19:45:41.069672 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.070018 kubelet[3272]: I0213 19:45:41.069790 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.070018 kubelet[3272]: I0213 19:45:41.069886 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.070018 kubelet[3272]: I0213 19:45:41.069973 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57f9d83b0ade17eb9cfabe6a5c1be627-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-a-a8b3a25f31\" (UID: \"57f9d83b0ade17eb9cfabe6a5c1be627\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.070387 kubelet[3272]: I0213 19:45:41.070024 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57f9d83b0ade17eb9cfabe6a5c1be627-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-a-a8b3a25f31\" (UID: \"57f9d83b0ade17eb9cfabe6a5c1be627\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.070387 kubelet[3272]: I0213 19:45:41.070071 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c11bf11d65c4349ca9049a850ac8f79-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" (UID: \"2c11bf11d65c4349ca9049a850ac8f79\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.070387 kubelet[3272]: I0213 19:45:41.070119 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20d751495dc0f627ad51149682451ca8-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-a-a8b3a25f31\" (UID: \"20d751495dc0f627ad51149682451ca8\") " pod="kube-system/kube-scheduler-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.866481 kubelet[3272]: I0213 19:45:41.866424 3272 apiserver.go:52] "Watching apiserver" Feb 13 19:45:41.868892 kubelet[3272]: I0213 19:45:41.868855 3272 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:45:41.881380 kubelet[3272]: W0213 19:45:41.881320 3272 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:45:41.881380 kubelet[3272]: W0213 19:45:41.881378 3272 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:45:41.881463 kubelet[3272]: E0213 19:45:41.881389 3272 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.1-a-a8b3a25f31\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.881463 kubelet[3272]: E0213 19:45:41.881405 3272 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.1-a-a8b3a25f31\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.1-a-a8b3a25f31" Feb 13 19:45:41.905643 kubelet[3272]: I0213 19:45:41.905576 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.1-a-a8b3a25f31" podStartSLOduration=1.905563216 podStartE2EDuration="1.905563216s" podCreationTimestamp="2025-02-13 19:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:41.900812647 +0000 UTC m=+1.072019307" watchObservedRunningTime="2025-02-13 19:45:41.905563216 +0000 UTC m=+1.076769879" Feb 13 19:45:41.905749 kubelet[3272]: I0213 19:45:41.905644 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-a8b3a25f31" podStartSLOduration=1.905640918 podStartE2EDuration="1.905640918s" podCreationTimestamp="2025-02-13 19:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:41.905556677 +0000 UTC m=+1.076763340" watchObservedRunningTime="2025-02-13 19:45:41.905640918 +0000 UTC m=+1.076847579" Feb 13 19:45:41.910361 kubelet[3272]: I0213 19:45:41.910335 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.1-a-a8b3a25f31" podStartSLOduration=1.910326213 podStartE2EDuration="1.910326213s" podCreationTimestamp="2025-02-13 19:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:41.910183121 +0000 UTC m=+1.081389787" watchObservedRunningTime="2025-02-13 19:45:41.910326213 +0000 UTC m=+1.081532870" Feb 13 19:45:44.994945 sudo[2077]: pam_unix(sudo:session): session closed for user root Feb 13 19:45:44.995669 sshd[2076]: Connection closed by 139.178.89.65 port 52954 Feb 13 19:45:44.995824 sshd-session[2074]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:44.997271 systemd[1]: sshd@8-147.28.180.89:22-139.178.89.65:52954.service: Deactivated successfully. Feb 13 19:45:44.998137 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:45:44.998217 systemd[1]: session-11.scope: Consumed 3.321s CPU time, 199.3M memory peak, 0B memory swap peak. Feb 13 19:45:44.998796 systemd-logind[1788]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:45:44.999365 systemd-logind[1788]: Removed session 11. Feb 13 19:45:53.782711 update_engine[1793]: I20250213 19:45:53.782563 1793 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:45:53.819455 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (3435) Feb 13 19:45:53.846465 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (3439) Feb 13 19:45:56.064786 kubelet[3272]: I0213 19:45:56.064696 3272 topology_manager.go:215] "Topology Admit Handler" podUID="b4e20463-b236-4450-a27b-ea334ff37573" podNamespace="kube-system" podName="kube-proxy-44jt5" Feb 13 19:45:56.081848 systemd[1]: Created slice kubepods-besteffort-podb4e20463_b236_4450_a27b_ea334ff37573.slice - libcontainer container kubepods-besteffort-podb4e20463_b236_4450_a27b_ea334ff37573.slice. Feb 13 19:45:56.086664 kubelet[3272]: I0213 19:45:56.086622 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4e20463-b236-4450-a27b-ea334ff37573-xtables-lock\") pod \"kube-proxy-44jt5\" (UID: \"b4e20463-b236-4450-a27b-ea334ff37573\") " pod="kube-system/kube-proxy-44jt5" Feb 13 19:45:56.086819 kubelet[3272]: I0213 19:45:56.086694 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4e20463-b236-4450-a27b-ea334ff37573-kube-proxy\") pod \"kube-proxy-44jt5\" (UID: \"b4e20463-b236-4450-a27b-ea334ff37573\") " pod="kube-system/kube-proxy-44jt5" Feb 13 19:45:56.086819 kubelet[3272]: I0213 19:45:56.086743 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4e20463-b236-4450-a27b-ea334ff37573-lib-modules\") pod \"kube-proxy-44jt5\" (UID: \"b4e20463-b236-4450-a27b-ea334ff37573\") " pod="kube-system/kube-proxy-44jt5" Feb 13 19:45:56.086819 kubelet[3272]: I0213 19:45:56.086792 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdn96\" 
(UniqueName: \"kubernetes.io/projected/b4e20463-b236-4450-a27b-ea334ff37573-kube-api-access-fdn96\") pod \"kube-proxy-44jt5\" (UID: \"b4e20463-b236-4450-a27b-ea334ff37573\") " pod="kube-system/kube-proxy-44jt5" Feb 13 19:45:56.087184 kubelet[3272]: I0213 19:45:56.087133 3272 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:45:56.087579 containerd[1805]: time="2025-02-13T19:45:56.087533973Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:45:56.088005 kubelet[3272]: I0213 19:45:56.087819 3272 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:45:56.200478 kubelet[3272]: E0213 19:45:56.200373 3272 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:45:56.200478 kubelet[3272]: E0213 19:45:56.200463 3272 projected.go:200] Error preparing data for projected volume kube-api-access-fdn96 for pod kube-system/kube-proxy-44jt5: configmap "kube-root-ca.crt" not found Feb 13 19:45:56.200860 kubelet[3272]: E0213 19:45:56.200598 3272 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b4e20463-b236-4450-a27b-ea334ff37573-kube-api-access-fdn96 podName:b4e20463-b236-4450-a27b-ea334ff37573 nodeName:}" failed. No retries permitted until 2025-02-13 19:45:56.700552069 +0000 UTC m=+15.871758801 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fdn96" (UniqueName: "kubernetes.io/projected/b4e20463-b236-4450-a27b-ea334ff37573-kube-api-access-fdn96") pod "kube-proxy-44jt5" (UID: "b4e20463-b236-4450-a27b-ea334ff37573") : configmap "kube-root-ca.crt" not found Feb 13 19:45:57.003622 containerd[1805]: time="2025-02-13T19:45:57.003523289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44jt5,Uid:b4e20463-b236-4450-a27b-ea334ff37573,Namespace:kube-system,Attempt:0,}" Feb 13 19:45:57.014274 containerd[1805]: time="2025-02-13T19:45:57.014230875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:57.014274 containerd[1805]: time="2025-02-13T19:45:57.014260561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:57.014547 containerd[1805]: time="2025-02-13T19:45:57.014281397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:57.014595 containerd[1805]: time="2025-02-13T19:45:57.014581403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:57.032724 systemd[1]: Started cri-containerd-98c49a578b7da6b39093765419c63a851662e8d1bc4a0b152142c04559ac4db3.scope - libcontainer container 98c49a578b7da6b39093765419c63a851662e8d1bc4a0b152142c04559ac4db3. 
Feb 13 19:45:57.043182 containerd[1805]: time="2025-02-13T19:45:57.043157771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44jt5,Uid:b4e20463-b236-4450-a27b-ea334ff37573,Namespace:kube-system,Attempt:0,} returns sandbox id \"98c49a578b7da6b39093765419c63a851662e8d1bc4a0b152142c04559ac4db3\"" Feb 13 19:45:57.044468 containerd[1805]: time="2025-02-13T19:45:57.044429026Z" level=info msg="CreateContainer within sandbox \"98c49a578b7da6b39093765419c63a851662e8d1bc4a0b152142c04559ac4db3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:45:57.050103 containerd[1805]: time="2025-02-13T19:45:57.050063916Z" level=info msg="CreateContainer within sandbox \"98c49a578b7da6b39093765419c63a851662e8d1bc4a0b152142c04559ac4db3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d93ed7956d51b1fefe2617449fe342e4060b10cb6699fdf1e683825e58972c4\"" Feb 13 19:45:57.050351 containerd[1805]: time="2025-02-13T19:45:57.050338898Z" level=info msg="StartContainer for \"2d93ed7956d51b1fefe2617449fe342e4060b10cb6699fdf1e683825e58972c4\"" Feb 13 19:45:57.073963 systemd[1]: Started cri-containerd-2d93ed7956d51b1fefe2617449fe342e4060b10cb6699fdf1e683825e58972c4.scope - libcontainer container 2d93ed7956d51b1fefe2617449fe342e4060b10cb6699fdf1e683825e58972c4. Feb 13 19:45:57.127573 containerd[1805]: time="2025-02-13T19:45:57.127526114Z" level=info msg="StartContainer for \"2d93ed7956d51b1fefe2617449fe342e4060b10cb6699fdf1e683825e58972c4\" returns successfully" Feb 13 19:45:57.143977 kubelet[3272]: I0213 19:45:57.143935 3272 topology_manager.go:215] "Topology Admit Handler" podUID="d15a2e04-fd34-416b-9a45-7323784538cd" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-6x5gl" Feb 13 19:45:57.150683 systemd[1]: Created slice kubepods-besteffort-podd15a2e04_fd34_416b_9a45_7323784538cd.slice - libcontainer container kubepods-besteffort-podd15a2e04_fd34_416b_9a45_7323784538cd.slice. 
Feb 13 19:45:57.195975 kubelet[3272]: I0213 19:45:57.195893 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d15a2e04-fd34-416b-9a45-7323784538cd-var-lib-calico\") pod \"tigera-operator-7bc55997bb-6x5gl\" (UID: \"d15a2e04-fd34-416b-9a45-7323784538cd\") " pod="tigera-operator/tigera-operator-7bc55997bb-6x5gl" Feb 13 19:45:57.196220 kubelet[3272]: I0213 19:45:57.196009 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tpx9\" (UniqueName: \"kubernetes.io/projected/d15a2e04-fd34-416b-9a45-7323784538cd-kube-api-access-9tpx9\") pod \"tigera-operator-7bc55997bb-6x5gl\" (UID: \"d15a2e04-fd34-416b-9a45-7323784538cd\") " pod="tigera-operator/tigera-operator-7bc55997bb-6x5gl" Feb 13 19:45:57.454149 containerd[1805]: time="2025-02-13T19:45:57.454085310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-6x5gl,Uid:d15a2e04-fd34-416b-9a45-7323784538cd,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:45:57.463890 containerd[1805]: time="2025-02-13T19:45:57.463822866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:57.463890 containerd[1805]: time="2025-02-13T19:45:57.463850325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:57.463890 containerd[1805]: time="2025-02-13T19:45:57.463857389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:57.464005 containerd[1805]: time="2025-02-13T19:45:57.463893186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:57.478509 systemd[1]: Started cri-containerd-c88681eb18ba91b7758a8e17dd0df90d06e93c63aefc5ea0b1784e6f43e1609e.scope - libcontainer container c88681eb18ba91b7758a8e17dd0df90d06e93c63aefc5ea0b1784e6f43e1609e. Feb 13 19:45:57.500370 containerd[1805]: time="2025-02-13T19:45:57.500348964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-6x5gl,Uid:d15a2e04-fd34-416b-9a45-7323784538cd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c88681eb18ba91b7758a8e17dd0df90d06e93c63aefc5ea0b1784e6f43e1609e\"" Feb 13 19:45:57.501107 containerd[1805]: time="2025-02-13T19:45:57.501070685Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:45:57.804306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359020015.mount: Deactivated successfully. Feb 13 19:45:57.930939 kubelet[3272]: I0213 19:45:57.930874 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-44jt5" podStartSLOduration=1.930850746 podStartE2EDuration="1.930850746s" podCreationTimestamp="2025-02-13 19:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:57.930810204 +0000 UTC m=+17.102016865" watchObservedRunningTime="2025-02-13 19:45:57.930850746 +0000 UTC m=+17.102057406" Feb 13 19:45:58.846904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201913061.mount: Deactivated successfully. 
Feb 13 19:45:59.121322 containerd[1805]: time="2025-02-13T19:45:59.121246463Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:59.121530 containerd[1805]: time="2025-02-13T19:45:59.121404873Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:45:59.121876 containerd[1805]: time="2025-02-13T19:45:59.121834759Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:59.122902 containerd[1805]: time="2025-02-13T19:45:59.122864227Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:59.123713 containerd[1805]: time="2025-02-13T19:45:59.123672028Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.622584442s" Feb 13 19:45:59.123713 containerd[1805]: time="2025-02-13T19:45:59.123685759Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:45:59.124561 containerd[1805]: time="2025-02-13T19:45:59.124546002Z" level=info msg="CreateContainer within sandbox \"c88681eb18ba91b7758a8e17dd0df90d06e93c63aefc5ea0b1784e6f43e1609e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:45:59.128652 containerd[1805]: time="2025-02-13T19:45:59.128606059Z" level=info msg="CreateContainer within sandbox 
\"c88681eb18ba91b7758a8e17dd0df90d06e93c63aefc5ea0b1784e6f43e1609e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5af966995ec31e1fc6904fdb4277a18cf1b207e27251d1383aa1c92ec83067cb\"" Feb 13 19:45:59.128812 containerd[1805]: time="2025-02-13T19:45:59.128765802Z" level=info msg="StartContainer for \"5af966995ec31e1fc6904fdb4277a18cf1b207e27251d1383aa1c92ec83067cb\"" Feb 13 19:45:59.153660 systemd[1]: Started cri-containerd-5af966995ec31e1fc6904fdb4277a18cf1b207e27251d1383aa1c92ec83067cb.scope - libcontainer container 5af966995ec31e1fc6904fdb4277a18cf1b207e27251d1383aa1c92ec83067cb. Feb 13 19:45:59.165214 containerd[1805]: time="2025-02-13T19:45:59.165162326Z" level=info msg="StartContainer for \"5af966995ec31e1fc6904fdb4277a18cf1b207e27251d1383aa1c92ec83067cb\" returns successfully" Feb 13 19:45:59.941759 kubelet[3272]: I0213 19:45:59.941636 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-6x5gl" podStartSLOduration=1.318464689 podStartE2EDuration="2.941602114s" podCreationTimestamp="2025-02-13 19:45:57 +0000 UTC" firstStartedPulling="2025-02-13 19:45:57.500883257 +0000 UTC m=+16.672089918" lastFinishedPulling="2025-02-13 19:45:59.124020682 +0000 UTC m=+18.295227343" observedRunningTime="2025-02-13 19:45:59.941240144 +0000 UTC m=+19.112446873" watchObservedRunningTime="2025-02-13 19:45:59.941602114 +0000 UTC m=+19.112808826" Feb 13 19:46:01.964154 kubelet[3272]: I0213 19:46:01.964075 3272 topology_manager.go:215] "Topology Admit Handler" podUID="237000b9-618a-4d7e-b3cb-65338842bc59" podNamespace="calico-system" podName="calico-typha-6cd4466f65-h6w7m" Feb 13 19:46:01.977230 systemd[1]: Created slice kubepods-besteffort-pod237000b9_618a_4d7e_b3cb_65338842bc59.slice - libcontainer container kubepods-besteffort-pod237000b9_618a_4d7e_b3cb_65338842bc59.slice. 
Feb 13 19:46:01.994797 kubelet[3272]: I0213 19:46:01.994760 3272 topology_manager.go:215] "Topology Admit Handler" podUID="d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56" podNamespace="calico-system" podName="calico-node-9sh44" Feb 13 19:46:02.000641 systemd[1]: Created slice kubepods-besteffort-podd0b55f4b_4ae8_4f4b_b6fd_289fb418fd56.slice - libcontainer container kubepods-besteffort-podd0b55f4b_4ae8_4f4b_b6fd_289fb418fd56.slice. Feb 13 19:46:02.029765 kubelet[3272]: I0213 19:46:02.029724 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-cni-net-dir\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.029765 kubelet[3272]: I0213 19:46:02.029745 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-node-certs\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.029765 kubelet[3272]: I0213 19:46:02.029756 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-cni-log-dir\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.029885 kubelet[3272]: I0213 19:46:02.029787 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/237000b9-618a-4d7e-b3cb-65338842bc59-tigera-ca-bundle\") pod \"calico-typha-6cd4466f65-h6w7m\" (UID: \"237000b9-618a-4d7e-b3cb-65338842bc59\") " pod="calico-system/calico-typha-6cd4466f65-h6w7m" Feb 13 
19:46:02.029885 kubelet[3272]: I0213 19:46:02.029802 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7bkp\" (UniqueName: \"kubernetes.io/projected/237000b9-618a-4d7e-b3cb-65338842bc59-kube-api-access-b7bkp\") pod \"calico-typha-6cd4466f65-h6w7m\" (UID: \"237000b9-618a-4d7e-b3cb-65338842bc59\") " pod="calico-system/calico-typha-6cd4466f65-h6w7m" Feb 13 19:46:02.029885 kubelet[3272]: I0213 19:46:02.029812 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-lib-modules\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.029885 kubelet[3272]: I0213 19:46:02.029820 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-xtables-lock\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.029885 kubelet[3272]: I0213 19:46:02.029829 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-flexvol-driver-host\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.029969 kubelet[3272]: I0213 19:46:02.029837 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptxr\" (UniqueName: \"kubernetes.io/projected/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-kube-api-access-rptxr\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 
19:46:02.029969 kubelet[3272]: I0213 19:46:02.029846 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-policysync\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.029969 kubelet[3272]: I0213 19:46:02.029854 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/237000b9-618a-4d7e-b3cb-65338842bc59-typha-certs\") pod \"calico-typha-6cd4466f65-h6w7m\" (UID: \"237000b9-618a-4d7e-b3cb-65338842bc59\") " pod="calico-system/calico-typha-6cd4466f65-h6w7m" Feb 13 19:46:02.029969 kubelet[3272]: I0213 19:46:02.029862 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-var-lib-calico\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.029969 kubelet[3272]: I0213 19:46:02.029871 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-cni-bin-dir\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.030046 kubelet[3272]: I0213 19:46:02.029900 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-tigera-ca-bundle\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.030046 kubelet[3272]: I0213 19:46:02.029917 3272 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56-var-run-calico\") pod \"calico-node-9sh44\" (UID: \"d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56\") " pod="calico-system/calico-node-9sh44" Feb 13 19:46:02.123852 kubelet[3272]: I0213 19:46:02.123773 3272 topology_manager.go:215] "Topology Admit Handler" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" podNamespace="calico-system" podName="csi-node-driver-vkkt7" Feb 13 19:46:02.124590 kubelet[3272]: E0213 19:46:02.124481 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:02.134103 kubelet[3272]: E0213 19:46:02.134008 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.134103 kubelet[3272]: W0213 19:46:02.134074 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.134665 kubelet[3272]: E0213 19:46:02.134164 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.135048 kubelet[3272]: E0213 19:46:02.134994 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.135525 kubelet[3272]: W0213 19:46:02.135356 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.135525 kubelet[3272]: E0213 19:46:02.135439 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.143527 kubelet[3272]: E0213 19:46:02.143471 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.143527 kubelet[3272]: W0213 19:46:02.143520 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.143913 kubelet[3272]: E0213 19:46:02.143573 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.144263 kubelet[3272]: E0213 19:46:02.144223 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.144263 kubelet[3272]: W0213 19:46:02.144257 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.144602 kubelet[3272]: E0213 19:46:02.144288 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.152274 kubelet[3272]: E0213 19:46:02.152249 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.152274 kubelet[3272]: W0213 19:46:02.152269 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.152480 kubelet[3272]: E0213 19:46:02.152294 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.152582 kubelet[3272]: E0213 19:46:02.152541 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.152582 kubelet[3272]: W0213 19:46:02.152552 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.152582 kubelet[3272]: E0213 19:46:02.152564 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.217777 kubelet[3272]: E0213 19:46:02.217675 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.217777 kubelet[3272]: W0213 19:46:02.217690 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.217777 kubelet[3272]: E0213 19:46:02.217705 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.217912 kubelet[3272]: E0213 19:46:02.217849 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.217912 kubelet[3272]: W0213 19:46:02.217859 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.217912 kubelet[3272]: E0213 19:46:02.217869 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.217992 kubelet[3272]: E0213 19:46:02.217985 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.218017 kubelet[3272]: W0213 19:46:02.217991 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.218017 kubelet[3272]: E0213 19:46:02.217998 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.218143 kubelet[3272]: E0213 19:46:02.218102 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.218143 kubelet[3272]: W0213 19:46:02.218111 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.218143 kubelet[3272]: E0213 19:46:02.218118 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.218297 kubelet[3272]: E0213 19:46:02.218226 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.218297 kubelet[3272]: W0213 19:46:02.218233 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.218297 kubelet[3272]: E0213 19:46:02.218239 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.218410 kubelet[3272]: E0213 19:46:02.218338 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.218410 kubelet[3272]: W0213 19:46:02.218344 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.218410 kubelet[3272]: E0213 19:46:02.218350 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.218500 kubelet[3272]: E0213 19:46:02.218486 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.218500 kubelet[3272]: W0213 19:46:02.218493 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.218500 kubelet[3272]: E0213 19:46:02.218499 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.218657 kubelet[3272]: E0213 19:46:02.218644 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.218657 kubelet[3272]: W0213 19:46:02.218651 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.218657 kubelet[3272]: E0213 19:46:02.218657 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.218791 kubelet[3272]: E0213 19:46:02.218785 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.218791 kubelet[3272]: W0213 19:46:02.218792 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.218869 kubelet[3272]: E0213 19:46:02.218798 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.218929 kubelet[3272]: E0213 19:46:02.218916 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.218929 kubelet[3272]: W0213 19:46:02.218925 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.219032 kubelet[3272]: E0213 19:46:02.218936 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.219075 kubelet[3272]: E0213 19:46:02.219059 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.219075 kubelet[3272]: W0213 19:46:02.219069 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.219128 kubelet[3272]: E0213 19:46:02.219077 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.219225 kubelet[3272]: E0213 19:46:02.219197 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.219225 kubelet[3272]: W0213 19:46:02.219205 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.219225 kubelet[3272]: E0213 19:46:02.219216 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.219361 kubelet[3272]: E0213 19:46:02.219352 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.219361 kubelet[3272]: W0213 19:46:02.219359 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.219416 kubelet[3272]: E0213 19:46:02.219366 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.219508 kubelet[3272]: E0213 19:46:02.219500 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.219534 kubelet[3272]: W0213 19:46:02.219510 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.219534 kubelet[3272]: E0213 19:46:02.219520 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.219660 kubelet[3272]: E0213 19:46:02.219653 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.219687 kubelet[3272]: W0213 19:46:02.219660 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.219687 kubelet[3272]: E0213 19:46:02.219667 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.219770 kubelet[3272]: E0213 19:46:02.219763 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.219796 kubelet[3272]: W0213 19:46:02.219771 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.219796 kubelet[3272]: E0213 19:46:02.219778 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.219885 kubelet[3272]: E0213 19:46:02.219877 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.219885 kubelet[3272]: W0213 19:46:02.219884 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.219935 kubelet[3272]: E0213 19:46:02.219890 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.219987 kubelet[3272]: E0213 19:46:02.219981 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.220016 kubelet[3272]: W0213 19:46:02.219987 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.220016 kubelet[3272]: E0213 19:46:02.219993 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.220095 kubelet[3272]: E0213 19:46:02.220089 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.220120 kubelet[3272]: W0213 19:46:02.220095 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.220120 kubelet[3272]: E0213 19:46:02.220101 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.220198 kubelet[3272]: E0213 19:46:02.220191 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.220228 kubelet[3272]: W0213 19:46:02.220197 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.220228 kubelet[3272]: E0213 19:46:02.220203 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.231576 kubelet[3272]: E0213 19:46:02.231560 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.231576 kubelet[3272]: W0213 19:46:02.231572 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.231660 kubelet[3272]: E0213 19:46:02.231583 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.231660 kubelet[3272]: I0213 19:46:02.231606 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3327118-2549-4d08-a802-8c7cfa7fb673-registration-dir\") pod \"csi-node-driver-vkkt7\" (UID: \"f3327118-2549-4d08-a802-8c7cfa7fb673\") " pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:02.231844 kubelet[3272]: E0213 19:46:02.231829 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.231890 kubelet[3272]: W0213 19:46:02.231843 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.231890 kubelet[3272]: E0213 19:46:02.231856 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.231890 kubelet[3272]: I0213 19:46:02.231875 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3327118-2549-4d08-a802-8c7cfa7fb673-kubelet-dir\") pod \"csi-node-driver-vkkt7\" (UID: \"f3327118-2549-4d08-a802-8c7cfa7fb673\") " pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:02.232064 kubelet[3272]: E0213 19:46:02.232048 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.232101 kubelet[3272]: W0213 19:46:02.232062 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.232101 kubelet[3272]: E0213 19:46:02.232086 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.232171 kubelet[3272]: I0213 19:46:02.232105 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f3327118-2549-4d08-a802-8c7cfa7fb673-varrun\") pod \"csi-node-driver-vkkt7\" (UID: \"f3327118-2549-4d08-a802-8c7cfa7fb673\") " pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:02.232279 kubelet[3272]: E0213 19:46:02.232267 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.232279 kubelet[3272]: W0213 19:46:02.232277 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.232353 kubelet[3272]: E0213 19:46:02.232290 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.232353 kubelet[3272]: I0213 19:46:02.232305 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3327118-2549-4d08-a802-8c7cfa7fb673-socket-dir\") pod \"csi-node-driver-vkkt7\" (UID: \"f3327118-2549-4d08-a802-8c7cfa7fb673\") " pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:02.232505 kubelet[3272]: E0213 19:46:02.232494 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.232505 kubelet[3272]: W0213 19:46:02.232504 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.232580 kubelet[3272]: E0213 19:46:02.232516 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:46:02.232580 kubelet[3272]: I0213 19:46:02.232530 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9gmq\" (UniqueName: \"kubernetes.io/projected/f3327118-2549-4d08-a802-8c7cfa7fb673-kube-api-access-c9gmq\") pod \"csi-node-driver-vkkt7\" (UID: \"f3327118-2549-4d08-a802-8c7cfa7fb673\") " pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:02.232719 kubelet[3272]: E0213 19:46:02.232707 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:46:02.232719 kubelet[3272]: W0213 19:46:02.232717 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:46:02.232797 kubelet[3272]: E0213 19:46:02.232729 3272 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:46:02.285844 containerd[1805]: time="2025-02-13T19:46:02.283658918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cd4466f65-h6w7m,Uid:237000b9-618a-4d7e-b3cb-65338842bc59,Namespace:calico-system,Attempt:0,}" Feb 13 19:46:02.295933 containerd[1805]: time="2025-02-13T19:46:02.295891605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:02.296151 containerd[1805]: time="2025-02-13T19:46:02.295919798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:02.296151 containerd[1805]: time="2025-02-13T19:46:02.296123403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:02.296205 containerd[1805]: time="2025-02-13T19:46:02.296167983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:02.302780 containerd[1805]: time="2025-02-13T19:46:02.302756019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9sh44,Uid:d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56,Namespace:calico-system,Attempt:0,}" Feb 13 19:46:02.311575 containerd[1805]: time="2025-02-13T19:46:02.311459141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:02.311575 containerd[1805]: time="2025-02-13T19:46:02.311499571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:02.311575 containerd[1805]: time="2025-02-13T19:46:02.311509432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:02.311759 containerd[1805]: time="2025-02-13T19:46:02.311710674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:02.322698 systemd[1]: Started cri-containerd-bce7b8a3a6e1285c5558bbcdcaedade754cc893a5488dfb12ab7b611b3e1286c.scope - libcontainer container bce7b8a3a6e1285c5558bbcdcaedade754cc893a5488dfb12ab7b611b3e1286c. 
Feb 13 19:46:02.324337 systemd[1]: Started cri-containerd-91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140.scope - libcontainer container 91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140. Feb 13 19:46:02.335092 containerd[1805]: time="2025-02-13T19:46:02.334981135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9sh44,Uid:d0b55f4b-4ae8-4f4b-b6fd-289fb418fd56,Namespace:calico-system,Attempt:0,} returns sandbox id \"91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140\"" Feb 13 19:46:02.335866 containerd[1805]: time="2025-02-13T19:46:02.335800395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:46:02.348140 containerd[1805]: time="2025-02-13T19:46:02.348094758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cd4466f65-h6w7m,Uid:237000b9-618a-4d7e-b3cb-65338842bc59,Namespace:calico-system,Attempt:0,} returns sandbox id \"bce7b8a3a6e1285c5558bbcdcaedade754cc893a5488dfb12ab7b611b3e1286c\"" Feb 13 19:46:03.876063 kubelet[3272]: E0213 19:46:03.875950 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:03.938346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278158038.mount: Deactivated successfully. Feb 13 19:46:03.977948 containerd[1805]: time="2025-02-13T19:46:03.977924331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:03.978206 containerd[1805]: time="2025-02-13T19:46:03.978161432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:46:03.978432 containerd[1805]: time="2025-02-13T19:46:03.978416354Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:03.979497 containerd[1805]: time="2025-02-13T19:46:03.979473327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:03.980176 containerd[1805]: time="2025-02-13T19:46:03.980161432Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.644342257s" Feb 13 19:46:03.980217 containerd[1805]: time="2025-02-13T19:46:03.980178140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:46:03.980645 containerd[1805]: time="2025-02-13T19:46:03.980633838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:46:03.981163 containerd[1805]: time="2025-02-13T19:46:03.981150936Z" level=info msg="CreateContainer within sandbox \"91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:46:03.986494 containerd[1805]: time="2025-02-13T19:46:03.986464642Z" level=info msg="CreateContainer within sandbox \"91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667\"" Feb 13 19:46:03.987154 containerd[1805]: time="2025-02-13T19:46:03.986803236Z" level=info msg="StartContainer for \"73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667\"" Feb 13 19:46:04.015697 systemd[1]: Started cri-containerd-73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667.scope - libcontainer container 73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667. 
Feb 13 19:46:04.029933 containerd[1805]: time="2025-02-13T19:46:04.029905331Z" level=info msg="StartContainer for \"73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667\" returns successfully" Feb 13 19:46:04.035741 systemd[1]: cri-containerd-73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667.scope: Deactivated successfully. Feb 13 19:46:04.144177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667-rootfs.mount: Deactivated successfully. Feb 13 19:46:04.291166 containerd[1805]: time="2025-02-13T19:46:04.291121910Z" level=info msg="shim disconnected" id=73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667 namespace=k8s.io Feb 13 19:46:04.291166 containerd[1805]: time="2025-02-13T19:46:04.291159379Z" level=warning msg="cleaning up after shim disconnected" id=73730e66be865ba22be902376a1b3209560b29d7da8402479d3e85eba7155667 namespace=k8s.io Feb 13 19:46:04.291166 containerd[1805]: time="2025-02-13T19:46:04.291167956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:46:05.672428 containerd[1805]: time="2025-02-13T19:46:05.672395441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:05.672701 containerd[1805]: time="2025-02-13T19:46:05.672599620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 19:46:05.673023 containerd[1805]: time="2025-02-13T19:46:05.673007640Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:05.674074 containerd[1805]: time="2025-02-13T19:46:05.674062097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:05.674379 containerd[1805]: time="2025-02-13T19:46:05.674363488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.693713854s" Feb 13 19:46:05.674423 containerd[1805]: time="2025-02-13T19:46:05.674380207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:46:05.674866 containerd[1805]: time="2025-02-13T19:46:05.674854581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:46:05.677890 containerd[1805]: time="2025-02-13T19:46:05.677834390Z" level=info msg="CreateContainer within sandbox \"bce7b8a3a6e1285c5558bbcdcaedade754cc893a5488dfb12ab7b611b3e1286c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:46:05.682021 containerd[1805]: time="2025-02-13T19:46:05.682005709Z" level=info msg="CreateContainer within sandbox \"bce7b8a3a6e1285c5558bbcdcaedade754cc893a5488dfb12ab7b611b3e1286c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"841abb9fc6d37a7ff5f609276da62a1c1910934dbe2e0225532f68e99c9b1336\"" Feb 13 19:46:05.682187 containerd[1805]: time="2025-02-13T19:46:05.682175180Z" level=info msg="StartContainer for \"841abb9fc6d37a7ff5f609276da62a1c1910934dbe2e0225532f68e99c9b1336\"" Feb 13 19:46:05.717597 systemd[1]: Started cri-containerd-841abb9fc6d37a7ff5f609276da62a1c1910934dbe2e0225532f68e99c9b1336.scope - libcontainer container 841abb9fc6d37a7ff5f609276da62a1c1910934dbe2e0225532f68e99c9b1336. 
Feb 13 19:46:05.753126 containerd[1805]: time="2025-02-13T19:46:05.753098224Z" level=info msg="StartContainer for \"841abb9fc6d37a7ff5f609276da62a1c1910934dbe2e0225532f68e99c9b1336\" returns successfully" Feb 13 19:46:05.875385 kubelet[3272]: E0213 19:46:05.875291 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:05.968480 kubelet[3272]: I0213 19:46:05.968333 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cd4466f65-h6w7m" podStartSLOduration=1.6421384909999999 podStartE2EDuration="4.968295674s" podCreationTimestamp="2025-02-13 19:46:01 +0000 UTC" firstStartedPulling="2025-02-13 19:46:02.348646651 +0000 UTC m=+21.519853311" lastFinishedPulling="2025-02-13 19:46:05.674803833 +0000 UTC m=+24.846010494" observedRunningTime="2025-02-13 19:46:05.967352822 +0000 UTC m=+25.138559578" watchObservedRunningTime="2025-02-13 19:46:05.968295674 +0000 UTC m=+25.139502386" Feb 13 19:46:06.948722 kubelet[3272]: I0213 19:46:06.948624 3272 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:46:07.875715 kubelet[3272]: E0213 19:46:07.875576 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:08.517060 containerd[1805]: time="2025-02-13T19:46:08.517008998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:08.517265 containerd[1805]: 
time="2025-02-13T19:46:08.517246060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:46:08.517599 containerd[1805]: time="2025-02-13T19:46:08.517556415Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:08.518576 containerd[1805]: time="2025-02-13T19:46:08.518530548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:08.518957 containerd[1805]: time="2025-02-13T19:46:08.518915185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.844046267s" Feb 13 19:46:08.518957 containerd[1805]: time="2025-02-13T19:46:08.518930682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:46:08.519896 containerd[1805]: time="2025-02-13T19:46:08.519882467Z" level=info msg="CreateContainer within sandbox \"91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:46:08.524504 containerd[1805]: time="2025-02-13T19:46:08.524490048Z" level=info msg="CreateContainer within sandbox \"91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c\"" Feb 13 19:46:08.524710 
containerd[1805]: time="2025-02-13T19:46:08.524699879Z" level=info msg="StartContainer for \"59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c\"" Feb 13 19:46:08.545593 systemd[1]: Started cri-containerd-59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c.scope - libcontainer container 59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c. Feb 13 19:46:08.560358 containerd[1805]: time="2025-02-13T19:46:08.560333995Z" level=info msg="StartContainer for \"59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c\" returns successfully" Feb 13 19:46:09.072414 systemd[1]: cri-containerd-59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c.scope: Deactivated successfully. Feb 13 19:46:09.082473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c-rootfs.mount: Deactivated successfully. Feb 13 19:46:09.135170 kubelet[3272]: I0213 19:46:09.135079 3272 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:46:09.179208 kubelet[3272]: I0213 19:46:09.179071 3272 topology_manager.go:215] "Topology Admit Handler" podUID="c5186765-6941-4a28-a06d-cf22cd68adee" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:09.179995 kubelet[3272]: I0213 19:46:09.179955 3272 topology_manager.go:215] "Topology Admit Handler" podUID="a3049fc5-6472-40ea-b289-504e898e9372" podNamespace="calico-system" podName="calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:09.180263 kubelet[3272]: I0213 19:46:09.180234 3272 topology_manager.go:215] "Topology Admit Handler" podUID="4bb94446-f95c-44d1-9b40-e90a44987989" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t77lg" Feb 13 19:46:09.180588 kubelet[3272]: I0213 19:46:09.180572 3272 topology_manager.go:215] "Topology Admit Handler" podUID="168182d0-b9c4-48bd-9c74-28acdd82becf" podNamespace="calico-apiserver" podName="calico-apiserver-7469c76fc6-shmk8" Feb 
13 19:46:09.181034 kubelet[3272]: I0213 19:46:09.181006 3272 topology_manager.go:215] "Topology Admit Handler" podUID="922c820f-72e7-49c5-977f-4e21e9e5b030" podNamespace="calico-apiserver" podName="calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:09.185903 systemd[1]: Created slice kubepods-burstable-podc5186765_6941_4a28_a06d_cf22cd68adee.slice - libcontainer container kubepods-burstable-podc5186765_6941_4a28_a06d_cf22cd68adee.slice. Feb 13 19:46:09.193994 systemd[1]: Created slice kubepods-besteffort-poda3049fc5_6472_40ea_b289_504e898e9372.slice - libcontainer container kubepods-besteffort-poda3049fc5_6472_40ea_b289_504e898e9372.slice. Feb 13 19:46:09.198563 systemd[1]: Created slice kubepods-burstable-pod4bb94446_f95c_44d1_9b40_e90a44987989.slice - libcontainer container kubepods-burstable-pod4bb94446_f95c_44d1_9b40_e90a44987989.slice. Feb 13 19:46:09.204548 systemd[1]: Created slice kubepods-besteffort-pod168182d0_b9c4_48bd_9c74_28acdd82becf.slice - libcontainer container kubepods-besteffort-pod168182d0_b9c4_48bd_9c74_28acdd82becf.slice. Feb 13 19:46:09.209903 systemd[1]: Created slice kubepods-besteffort-pod922c820f_72e7_49c5_977f_4e21e9e5b030.slice - libcontainer container kubepods-besteffort-pod922c820f_72e7_49c5_977f_4e21e9e5b030.slice. 
Feb 13 19:46:09.283256 kubelet[3272]: I0213 19:46:09.283148 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhghk\" (UniqueName: \"kubernetes.io/projected/c5186765-6941-4a28-a06d-cf22cd68adee-kube-api-access-lhghk\") pod \"coredns-7db6d8ff4d-xvtfk\" (UID: \"c5186765-6941-4a28-a06d-cf22cd68adee\") " pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:09.283256 kubelet[3272]: I0213 19:46:09.283247 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3049fc5-6472-40ea-b289-504e898e9372-tigera-ca-bundle\") pod \"calico-kube-controllers-54fdf9b76f-wsc67\" (UID: \"a3049fc5-6472-40ea-b289-504e898e9372\") " pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:09.283781 kubelet[3272]: I0213 19:46:09.283379 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9bz\" (UniqueName: \"kubernetes.io/projected/922c820f-72e7-49c5-977f-4e21e9e5b030-kube-api-access-cl9bz\") pod \"calico-apiserver-7469c76fc6-p6gv2\" (UID: \"922c820f-72e7-49c5-977f-4e21e9e5b030\") " pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:09.283781 kubelet[3272]: I0213 19:46:09.283492 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5186765-6941-4a28-a06d-cf22cd68adee-config-volume\") pod \"coredns-7db6d8ff4d-xvtfk\" (UID: \"c5186765-6941-4a28-a06d-cf22cd68adee\") " pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:09.283781 kubelet[3272]: I0213 19:46:09.283553 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8rl8\" (UniqueName: \"kubernetes.io/projected/4bb94446-f95c-44d1-9b40-e90a44987989-kube-api-access-k8rl8\") pod 
\"coredns-7db6d8ff4d-t77lg\" (UID: \"4bb94446-f95c-44d1-9b40-e90a44987989\") " pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:09.283781 kubelet[3272]: I0213 19:46:09.283660 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/168182d0-b9c4-48bd-9c74-28acdd82becf-calico-apiserver-certs\") pod \"calico-apiserver-7469c76fc6-shmk8\" (UID: \"168182d0-b9c4-48bd-9c74-28acdd82becf\") " pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:09.283781 kubelet[3272]: I0213 19:46:09.283711 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/922c820f-72e7-49c5-977f-4e21e9e5b030-calico-apiserver-certs\") pod \"calico-apiserver-7469c76fc6-p6gv2\" (UID: \"922c820f-72e7-49c5-977f-4e21e9e5b030\") " pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:09.284248 kubelet[3272]: I0213 19:46:09.283758 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nbqf\" (UniqueName: \"kubernetes.io/projected/168182d0-b9c4-48bd-9c74-28acdd82becf-kube-api-access-7nbqf\") pod \"calico-apiserver-7469c76fc6-shmk8\" (UID: \"168182d0-b9c4-48bd-9c74-28acdd82becf\") " pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:09.284248 kubelet[3272]: I0213 19:46:09.283850 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bb94446-f95c-44d1-9b40-e90a44987989-config-volume\") pod \"coredns-7db6d8ff4d-t77lg\" (UID: \"4bb94446-f95c-44d1-9b40-e90a44987989\") " pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:09.284248 kubelet[3272]: I0213 19:46:09.283938 3272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b2jcl\" (UniqueName: \"kubernetes.io/projected/a3049fc5-6472-40ea-b289-504e898e9372-kube-api-access-b2jcl\") pod \"calico-kube-controllers-54fdf9b76f-wsc67\" (UID: \"a3049fc5-6472-40ea-b289-504e898e9372\") " pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:09.490609 containerd[1805]: time="2025-02-13T19:46:09.490528579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:0,}" Feb 13 19:46:09.497625 containerd[1805]: time="2025-02-13T19:46:09.497557416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:0,}" Feb 13 19:46:09.502846 containerd[1805]: time="2025-02-13T19:46:09.502778352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:0,}" Feb 13 19:46:09.508182 containerd[1805]: time="2025-02-13T19:46:09.508082866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:46:09.513385 containerd[1805]: time="2025-02-13T19:46:09.513291872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:46:09.745100 containerd[1805]: time="2025-02-13T19:46:09.745002618Z" level=info msg="shim disconnected" id=59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c namespace=k8s.io Feb 13 19:46:09.745100 containerd[1805]: time="2025-02-13T19:46:09.745046677Z" level=warning msg="cleaning up after shim disconnected" id=59bcf14a68945dc02e5fb7f59adc8c173169122c3dfc12180bdb640254010f4c namespace=k8s.io Feb 13 19:46:09.745100 containerd[1805]: 
time="2025-02-13T19:46:09.745052756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:46:09.785315 containerd[1805]: time="2025-02-13T19:46:09.785267041Z" level=error msg="Failed to destroy network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.785600 containerd[1805]: time="2025-02-13T19:46:09.785582863Z" level=error msg="Failed to destroy network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.785654 containerd[1805]: time="2025-02-13T19:46:09.785597701Z" level=error msg="encountered an error cleaning up failed sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.785690 containerd[1805]: time="2025-02-13T19:46:09.785650025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.785751 containerd[1805]: time="2025-02-13T19:46:09.785723719Z" level=error msg="encountered an error 
cleaning up failed sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.785791 containerd[1805]: time="2025-02-13T19:46:09.785749291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.785833 kubelet[3272]: E0213 19:46:09.785810 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.785876 kubelet[3272]: E0213 19:46:09.785854 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:09.785876 kubelet[3272]: E0213 19:46:09.785873 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:09.785946 kubelet[3272]: E0213 19:46:09.785804 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.785946 kubelet[3272]: E0213 19:46:09.785901 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xvtfk" podUID="c5186765-6941-4a28-a06d-cf22cd68adee" Feb 13 19:46:09.785946 kubelet[3272]: E0213 19:46:09.785908 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:09.786060 kubelet[3272]: E0213 19:46:09.785919 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:09.786060 kubelet[3272]: E0213 19:46:09.785939 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" podUID="a3049fc5-6472-40ea-b289-504e898e9372" Feb 13 19:46:09.792215 containerd[1805]: time="2025-02-13T19:46:09.792186644Z" level=error msg="Failed to destroy network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.792382 containerd[1805]: time="2025-02-13T19:46:09.792369054Z" level=error msg="encountered an error cleaning up failed sandbox 
\"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.792427 containerd[1805]: time="2025-02-13T19:46:09.792410564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.792539 containerd[1805]: time="2025-02-13T19:46:09.792513307Z" level=error msg="Failed to destroy network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.792598 kubelet[3272]: E0213 19:46:09.792573 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.792629 kubelet[3272]: E0213 19:46:09.792615 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:09.792649 kubelet[3272]: E0213 19:46:09.792628 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:09.792668 kubelet[3272]: E0213 19:46:09.792652 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t77lg" podUID="4bb94446-f95c-44d1-9b40-e90a44987989" Feb 13 19:46:09.792715 containerd[1805]: time="2025-02-13T19:46:09.792671482Z" level=error msg="encountered an error cleaning up failed sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.792715 containerd[1805]: time="2025-02-13T19:46:09.792695348Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.792788 kubelet[3272]: E0213 19:46:09.792771 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.792826 kubelet[3272]: E0213 19:46:09.792794 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:09.792826 kubelet[3272]: E0213 19:46:09.792807 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:09.792880 kubelet[3272]: E0213 
19:46:09.792857 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" podUID="922c820f-72e7-49c5-977f-4e21e9e5b030" Feb 13 19:46:09.793080 containerd[1805]: time="2025-02-13T19:46:09.793065633Z" level=error msg="Failed to destroy network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.793206 containerd[1805]: time="2025-02-13T19:46:09.793193905Z" level=error msg="encountered an error cleaning up failed sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.793226 containerd[1805]: time="2025-02-13T19:46:09.793218050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.793288 kubelet[3272]: E0213 19:46:09.793279 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.793310 kubelet[3272]: E0213 19:46:09.793295 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:09.793310 kubelet[3272]: E0213 19:46:09.793305 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:09.793345 kubelet[3272]: E0213 19:46:09.793321 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" podUID="168182d0-b9c4-48bd-9c74-28acdd82becf" Feb 13 19:46:09.878806 systemd[1]: Created slice kubepods-besteffort-podf3327118_2549_4d08_a802_8c7cfa7fb673.slice - libcontainer container kubepods-besteffort-podf3327118_2549_4d08_a802_8c7cfa7fb673.slice. Feb 13 19:46:09.880116 containerd[1805]: time="2025-02-13T19:46:09.880065127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:0,}" Feb 13 19:46:09.907896 containerd[1805]: time="2025-02-13T19:46:09.907843752Z" level=error msg="Failed to destroy network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.908053 containerd[1805]: time="2025-02-13T19:46:09.908012756Z" level=error msg="encountered an error cleaning up failed sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.908053 containerd[1805]: time="2025-02-13T19:46:09.908043553Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.908217 kubelet[3272]: E0213 19:46:09.908183 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.908249 kubelet[3272]: E0213 19:46:09.908219 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:09.908249 kubelet[3272]: E0213 19:46:09.908232 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:09.908287 kubelet[3272]: E0213 19:46:09.908259 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:09.955521 kubelet[3272]: I0213 19:46:09.955495 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb" Feb 13 19:46:09.956078 containerd[1805]: time="2025-02-13T19:46:09.956017285Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" Feb 13 19:46:09.956345 containerd[1805]: time="2025-02-13T19:46:09.956313116Z" level=info msg="Ensure that sandbox 80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb in task-service has been cleanup successfully" Feb 13 19:46:09.956469 kubelet[3272]: I0213 19:46:09.956369 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89" Feb 13 19:46:09.956610 containerd[1805]: time="2025-02-13T19:46:09.956576067Z" level=info msg="TearDown network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" successfully" Feb 13 19:46:09.956708 containerd[1805]: time="2025-02-13T19:46:09.956607221Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" returns successfully" Feb 13 19:46:09.956897 containerd[1805]: time="2025-02-13T19:46:09.956865850Z" 
level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" Feb 13 19:46:09.957119 containerd[1805]: time="2025-02-13T19:46:09.957085036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:1,}" Feb 13 19:46:09.957218 containerd[1805]: time="2025-02-13T19:46:09.957100910Z" level=info msg="Ensure that sandbox 171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89 in task-service has been cleanup successfully" Feb 13 19:46:09.957375 containerd[1805]: time="2025-02-13T19:46:09.957345687Z" level=info msg="TearDown network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" successfully" Feb 13 19:46:09.957375 containerd[1805]: time="2025-02-13T19:46:09.957368793Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" returns successfully" Feb 13 19:46:09.957531 kubelet[3272]: I0213 19:46:09.957378 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738" Feb 13 19:46:09.957719 containerd[1805]: time="2025-02-13T19:46:09.957702964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:46:09.957793 containerd[1805]: time="2025-02-13T19:46:09.957782337Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" Feb 13 19:46:09.957880 containerd[1805]: time="2025-02-13T19:46:09.957868847Z" level=info msg="Ensure that sandbox 9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738 in task-service has been cleanup successfully" Feb 13 19:46:09.957975 containerd[1805]: time="2025-02-13T19:46:09.957965565Z" level=info msg="TearDown network for 
sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" successfully" Feb 13 19:46:09.958005 containerd[1805]: time="2025-02-13T19:46:09.957975208Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" returns successfully" Feb 13 19:46:09.958056 kubelet[3272]: I0213 19:46:09.958049 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb" Feb 13 19:46:09.958139 containerd[1805]: time="2025-02-13T19:46:09.958127868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:1,}" Feb 13 19:46:09.958253 containerd[1805]: time="2025-02-13T19:46:09.958243023Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" Feb 13 19:46:09.958331 containerd[1805]: time="2025-02-13T19:46:09.958323312Z" level=info msg="Ensure that sandbox 12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb in task-service has been cleanup successfully" Feb 13 19:46:09.958399 containerd[1805]: time="2025-02-13T19:46:09.958391745Z" level=info msg="TearDown network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" successfully" Feb 13 19:46:09.958428 containerd[1805]: time="2025-02-13T19:46:09.958400086Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" returns successfully" Feb 13 19:46:09.958453 kubelet[3272]: I0213 19:46:09.958430 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa" Feb 13 19:46:09.958640 containerd[1805]: time="2025-02-13T19:46:09.958627337Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:1,}" Feb 13 19:46:09.958671 containerd[1805]: time="2025-02-13T19:46:09.958643548Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" Feb 13 19:46:09.958740 containerd[1805]: time="2025-02-13T19:46:09.958731558Z" level=info msg="Ensure that sandbox 40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa in task-service has been cleanup successfully" Feb 13 19:46:09.958888 containerd[1805]: time="2025-02-13T19:46:09.958879765Z" level=info msg="TearDown network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" successfully" Feb 13 19:46:09.958913 containerd[1805]: time="2025-02-13T19:46:09.958887685Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" returns successfully" Feb 13 19:46:09.959060 containerd[1805]: time="2025-02-13T19:46:09.959050116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:1,}" Feb 13 19:46:09.959589 kubelet[3272]: I0213 19:46:09.959554 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f" Feb 13 19:46:09.959678 containerd[1805]: time="2025-02-13T19:46:09.959667094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:46:09.959739 containerd[1805]: time="2025-02-13T19:46:09.959729081Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" Feb 13 19:46:09.959821 containerd[1805]: time="2025-02-13T19:46:09.959811605Z" level=info msg="Ensure that sandbox 45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f in task-service has been cleanup successfully" Feb 13 19:46:09.959903 containerd[1805]: 
time="2025-02-13T19:46:09.959894215Z" level=info msg="TearDown network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" successfully" Feb 13 19:46:09.959925 containerd[1805]: time="2025-02-13T19:46:09.959902221Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" returns successfully" Feb 13 19:46:09.960066 containerd[1805]: time="2025-02-13T19:46:09.960054635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:46:09.998694 containerd[1805]: time="2025-02-13T19:46:09.998143892Z" level=error msg="Failed to destroy network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.998694 containerd[1805]: time="2025-02-13T19:46:09.998383625Z" level=error msg="Failed to destroy network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.998694 containerd[1805]: time="2025-02-13T19:46:09.998543272Z" level=error msg="encountered an error cleaning up failed sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.998694 containerd[1805]: time="2025-02-13T19:46:09.998602103Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.998694 containerd[1805]: time="2025-02-13T19:46:09.998551051Z" level=error msg="encountered an error cleaning up failed sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.998694 containerd[1805]: time="2025-02-13T19:46:09.998652454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.998956 kubelet[3272]: E0213 19:46:09.998753 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.998956 kubelet[3272]: E0213 19:46:09.998794 3272 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:09.998956 kubelet[3272]: E0213 19:46:09.998811 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:09.998956 kubelet[3272]: E0213 19:46:09.998753 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:09.999089 kubelet[3272]: E0213 19:46:09.998842 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:09.999089 kubelet[3272]: E0213 19:46:09.998849 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:09.999089 kubelet[3272]: E0213 19:46:09.998863 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:09.999201 kubelet[3272]: E0213 19:46:09.998881 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" podUID="922c820f-72e7-49c5-977f-4e21e9e5b030" Feb 13 19:46:09.999908 
containerd[1805]: time="2025-02-13T19:46:09.999889264Z" level=error msg="Failed to destroy network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000013 containerd[1805]: time="2025-02-13T19:46:09.999995763Z" level=error msg="Failed to destroy network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000058 containerd[1805]: time="2025-02-13T19:46:10.000043530Z" level=error msg="Failed to destroy network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000090 containerd[1805]: time="2025-02-13T19:46:10.000045906Z" level=error msg="encountered an error cleaning up failed sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000124 containerd[1805]: time="2025-02-13T19:46:10.000108799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000174 containerd[1805]: time="2025-02-13T19:46:10.000154039Z" level=error msg="encountered an error cleaning up failed sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000206 containerd[1805]: time="2025-02-13T19:46:10.000177558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000237 kubelet[3272]: E0213 19:46:10.000187 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000237 kubelet[3272]: E0213 19:46:10.000208 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:10.000237 kubelet[3272]: E0213 19:46:10.000220 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:10.000333 containerd[1805]: time="2025-02-13T19:46:10.000181003Z" level=error msg="encountered an error cleaning up failed sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000333 containerd[1805]: time="2025-02-13T19:46:10.000227544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000393 kubelet[3272]: E0213 19:46:10.000236 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" podUID="168182d0-b9c4-48bd-9c74-28acdd82becf" Feb 13 19:46:10.000393 kubelet[3272]: E0213 19:46:10.000276 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000393 kubelet[3272]: E0213 19:46:10.000290 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:10.000513 kubelet[3272]: E0213 19:46:10.000299 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:10.000513 
kubelet[3272]: E0213 19:46:10.000311 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t77lg" podUID="4bb94446-f95c-44d1-9b40-e90a44987989" Feb 13 19:46:10.000513 kubelet[3272]: E0213 19:46:10.000327 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.000625 kubelet[3272]: E0213 19:46:10.000336 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:10.000625 kubelet[3272]: E0213 19:46:10.000344 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:10.000625 kubelet[3272]: E0213 19:46:10.000357 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xvtfk" podUID="c5186765-6941-4a28-a06d-cf22cd68adee" Feb 13 19:46:10.001178 containerd[1805]: time="2025-02-13T19:46:10.001166974Z" level=error msg="Failed to destroy network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.001294 containerd[1805]: time="2025-02-13T19:46:10.001283503Z" level=error msg="encountered an error cleaning up failed sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.001315 containerd[1805]: time="2025-02-13T19:46:10.001305608Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.001371 kubelet[3272]: E0213 19:46:10.001362 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:10.001392 kubelet[3272]: E0213 19:46:10.001377 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:10.001392 kubelet[3272]: E0213 19:46:10.001387 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:10.001436 kubelet[3272]: E0213 19:46:10.001402 3272 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" podUID="a3049fc5-6472-40ea-b289-504e898e9372" Feb 13 19:46:10.526571 systemd[1]: run-netns-cni\x2df3265d47\x2dd2d8\x2d7347\x2dab6b\x2d274b6503bf17.mount: Deactivated successfully. Feb 13 19:46:10.526637 systemd[1]: run-netns-cni\x2d3a9e86d0\x2d504f\x2d9d31\x2d7c83\x2d9ed6ccaf62c4.mount: Deactivated successfully. Feb 13 19:46:10.526683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f-shm.mount: Deactivated successfully. Feb 13 19:46:10.526738 systemd[1]: run-netns-cni\x2de13b3559\x2dc781\x2db6bf\x2d01df\x2d3ee236a58698.mount: Deactivated successfully. Feb 13 19:46:10.526781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89-shm.mount: Deactivated successfully. Feb 13 19:46:10.526828 systemd[1]: run-netns-cni\x2da092d755\x2db6ee\x2d3de1\x2d7eab\x2d506cc65354db.mount: Deactivated successfully. Feb 13 19:46:10.526872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738-shm.mount: Deactivated successfully. 
Feb 13 19:46:10.526919 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb-shm.mount: Deactivated successfully. Feb 13 19:46:10.526966 systemd[1]: run-netns-cni\x2dec34c083\x2d368d\x2ddd66\x2d143b\x2d26f941d6500a.mount: Deactivated successfully. Feb 13 19:46:10.527009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb-shm.mount: Deactivated successfully. Feb 13 19:46:10.961756 kubelet[3272]: I0213 19:46:10.961743 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c" Feb 13 19:46:10.962060 containerd[1805]: time="2025-02-13T19:46:10.962037768Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\"" Feb 13 19:46:10.962229 containerd[1805]: time="2025-02-13T19:46:10.962218405Z" level=info msg="Ensure that sandbox 83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c in task-service has been cleanup successfully" Feb 13 19:46:10.962318 kubelet[3272]: I0213 19:46:10.962308 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298" Feb 13 19:46:10.962350 containerd[1805]: time="2025-02-13T19:46:10.962333065Z" level=info msg="TearDown network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" successfully" Feb 13 19:46:10.962350 containerd[1805]: time="2025-02-13T19:46:10.962346965Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" returns successfully" Feb 13 19:46:10.962493 containerd[1805]: time="2025-02-13T19:46:10.962479556Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" Feb 13 19:46:10.962559 containerd[1805]: 
time="2025-02-13T19:46:10.962526796Z" level=info msg="TearDown network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" successfully" Feb 13 19:46:10.962559 containerd[1805]: time="2025-02-13T19:46:10.962556924Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" returns successfully" Feb 13 19:46:10.962636 containerd[1805]: time="2025-02-13T19:46:10.962598806Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\"" Feb 13 19:46:10.962766 containerd[1805]: time="2025-02-13T19:46:10.962750262Z" level=info msg="Ensure that sandbox a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298 in task-service has been cleanup successfully" Feb 13 19:46:10.962810 containerd[1805]: time="2025-02-13T19:46:10.962788334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:46:10.962860 containerd[1805]: time="2025-02-13T19:46:10.962847784Z" level=info msg="TearDown network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" successfully" Feb 13 19:46:10.962898 containerd[1805]: time="2025-02-13T19:46:10.962859437Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" returns successfully" Feb 13 19:46:10.962997 containerd[1805]: time="2025-02-13T19:46:10.962980824Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" Feb 13 19:46:10.963073 containerd[1805]: time="2025-02-13T19:46:10.963034456Z" level=info msg="TearDown network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" successfully" Feb 13 19:46:10.963073 containerd[1805]: time="2025-02-13T19:46:10.963042854Z" level=info msg="StopPodSandbox for 
\"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" returns successfully" Feb 13 19:46:10.963132 kubelet[3272]: I0213 19:46:10.963023 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4" Feb 13 19:46:10.963335 containerd[1805]: time="2025-02-13T19:46:10.963324093Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\"" Feb 13 19:46:10.963367 containerd[1805]: time="2025-02-13T19:46:10.963329326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:2,}" Feb 13 19:46:10.963414 containerd[1805]: time="2025-02-13T19:46:10.963406130Z" level=info msg="Ensure that sandbox ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4 in task-service has been cleanup successfully" Feb 13 19:46:10.963525 containerd[1805]: time="2025-02-13T19:46:10.963514618Z" level=info msg="TearDown network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" successfully" Feb 13 19:46:10.963525 containerd[1805]: time="2025-02-13T19:46:10.963523998Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" returns successfully" Feb 13 19:46:10.963634 containerd[1805]: time="2025-02-13T19:46:10.963624275Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" Feb 13 19:46:10.963659 kubelet[3272]: I0213 19:46:10.963650 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c" Feb 13 19:46:10.963687 containerd[1805]: time="2025-02-13T19:46:10.963665540Z" level=info msg="TearDown network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" successfully" Feb 13 
19:46:10.963719 containerd[1805]: time="2025-02-13T19:46:10.963687189Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" returns successfully" Feb 13 19:46:10.963866 containerd[1805]: time="2025-02-13T19:46:10.963852168Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\"" Feb 13 19:46:10.963896 containerd[1805]: time="2025-02-13T19:46:10.963868940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:2,}" Feb 13 19:46:10.963974 containerd[1805]: time="2025-02-13T19:46:10.963963539Z" level=info msg="Ensure that sandbox 86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c in task-service has been cleanup successfully" Feb 13 19:46:10.964048 containerd[1805]: time="2025-02-13T19:46:10.964037520Z" level=info msg="TearDown network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" successfully" Feb 13 19:46:10.964074 containerd[1805]: time="2025-02-13T19:46:10.964049040Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" returns successfully" Feb 13 19:46:10.964068 systemd[1]: run-netns-cni\x2d6933e36b\x2d299b\x2d2b26\x2dd186\x2dc594093c1f2b.mount: Deactivated successfully. 
Feb 13 19:46:10.964211 containerd[1805]: time="2025-02-13T19:46:10.964159602Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" Feb 13 19:46:10.964237 kubelet[3272]: I0213 19:46:10.964174 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e" Feb 13 19:46:10.964259 containerd[1805]: time="2025-02-13T19:46:10.964200266Z" level=info msg="TearDown network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" successfully" Feb 13 19:46:10.964259 containerd[1805]: time="2025-02-13T19:46:10.964218275Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" returns successfully" Feb 13 19:46:10.964367 containerd[1805]: time="2025-02-13T19:46:10.964354222Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\"" Feb 13 19:46:10.964469 containerd[1805]: time="2025-02-13T19:46:10.964459960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:2,}" Feb 13 19:46:10.964509 containerd[1805]: time="2025-02-13T19:46:10.964473039Z" level=info msg="Ensure that sandbox 8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e in task-service has been cleanup successfully" Feb 13 19:46:10.964576 containerd[1805]: time="2025-02-13T19:46:10.964565623Z" level=info msg="TearDown network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" successfully" Feb 13 19:46:10.964600 containerd[1805]: time="2025-02-13T19:46:10.964576460Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" returns successfully" Feb 13 19:46:10.964690 containerd[1805]: time="2025-02-13T19:46:10.964678995Z" level=info msg="StopPodSandbox for 
\"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" Feb 13 19:46:10.964726 kubelet[3272]: I0213 19:46:10.964715 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c" Feb 13 19:46:10.964757 containerd[1805]: time="2025-02-13T19:46:10.964734622Z" level=info msg="TearDown network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" successfully" Feb 13 19:46:10.964757 containerd[1805]: time="2025-02-13T19:46:10.964744743Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" returns successfully" Feb 13 19:46:10.964920 containerd[1805]: time="2025-02-13T19:46:10.964909621Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\"" Feb 13 19:46:10.964949 containerd[1805]: time="2025-02-13T19:46:10.964921604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:46:10.965007 containerd[1805]: time="2025-02-13T19:46:10.964998087Z" level=info msg="Ensure that sandbox 17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c in task-service has been cleanup successfully" Feb 13 19:46:10.965076 containerd[1805]: time="2025-02-13T19:46:10.965068791Z" level=info msg="TearDown network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" successfully" Feb 13 19:46:10.965094 containerd[1805]: time="2025-02-13T19:46:10.965076788Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" returns successfully" Feb 13 19:46:10.965195 containerd[1805]: time="2025-02-13T19:46:10.965186950Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" Feb 13 19:46:10.965230 containerd[1805]: 
time="2025-02-13T19:46:10.965224369Z" level=info msg="TearDown network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" successfully" Feb 13 19:46:10.965252 containerd[1805]: time="2025-02-13T19:46:10.965230743Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" returns successfully" Feb 13 19:46:10.965398 containerd[1805]: time="2025-02-13T19:46:10.965388724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:2,}" Feb 13 19:46:10.965819 systemd[1]: run-netns-cni\x2d5cbff562\x2d5afc\x2ddc06\x2d17a4\x2d9cb96e7bf9bf.mount: Deactivated successfully. Feb 13 19:46:10.965868 systemd[1]: run-netns-cni\x2d46fccf34\x2de407\x2d68b6\x2de0b2\x2d3b5053cae4fc.mount: Deactivated successfully. Feb 13 19:46:10.965902 systemd[1]: run-netns-cni\x2da3acc132\x2d7eaa\x2d5b4f\x2d37b1\x2d1b0334b5fab4.mount: Deactivated successfully. Feb 13 19:46:10.965934 systemd[1]: run-netns-cni\x2d833a3ff9\x2d51e9\x2d1f86\x2d8616\x2de6a13154be56.mount: Deactivated successfully. Feb 13 19:46:10.968031 systemd[1]: run-netns-cni\x2de5a703b1\x2d7473\x2df428\x2dffac\x2d91ace9f29a65.mount: Deactivated successfully. 
Feb 13 19:46:11.003297 containerd[1805]: time="2025-02-13T19:46:11.003251346Z" level=error msg="Failed to destroy network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.003397 containerd[1805]: time="2025-02-13T19:46:11.003374417Z" level=error msg="Failed to destroy network for sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.003599 containerd[1805]: time="2025-02-13T19:46:11.003581092Z" level=error msg="encountered an error cleaning up failed sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.003637 containerd[1805]: time="2025-02-13T19:46:11.003598835Z" level=error msg="encountered an error cleaning up failed sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.003688 containerd[1805]: time="2025-02-13T19:46:11.003627719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.003752 containerd[1805]: time="2025-02-13T19:46:11.003633972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.003848 kubelet[3272]: E0213 19:46:11.003828 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.003890 kubelet[3272]: E0213 19:46:11.003867 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:11.003890 kubelet[3272]: E0213 19:46:11.003880 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:11.003955 kubelet[3272]: E0213 19:46:11.003828 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.003955 kubelet[3272]: E0213 19:46:11.003906 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" podUID="922c820f-72e7-49c5-977f-4e21e9e5b030" Feb 13 19:46:11.003955 kubelet[3272]: E0213 19:46:11.003922 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:11.004067 kubelet[3272]: E0213 19:46:11.003940 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:11.004067 kubelet[3272]: E0213 19:46:11.003958 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" podUID="a3049fc5-6472-40ea-b289-504e898e9372" Feb 13 19:46:11.005607 containerd[1805]: time="2025-02-13T19:46:11.005583470Z" level=error msg="Failed to destroy network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.005803 containerd[1805]: time="2025-02-13T19:46:11.005758235Z" level=error msg="encountered an error cleaning up failed 
sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.005855 containerd[1805]: time="2025-02-13T19:46:11.005809575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.005951 kubelet[3272]: E0213 19:46:11.005930 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.005982 kubelet[3272]: E0213 19:46:11.005969 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:11.006001 kubelet[3272]: E0213 19:46:11.005987 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:11.006042 kubelet[3272]: E0213 19:46:11.006023 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:11.006576 containerd[1805]: time="2025-02-13T19:46:11.006559024Z" level=error msg="Failed to destroy network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.006741 containerd[1805]: time="2025-02-13T19:46:11.006704371Z" level=error msg="encountered an error cleaning up failed sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.006741 containerd[1805]: time="2025-02-13T19:46:11.006730779Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.006816 kubelet[3272]: E0213 19:46:11.006805 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.006837 kubelet[3272]: E0213 19:46:11.006826 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:11.006859 containerd[1805]: time="2025-02-13T19:46:11.006826372Z" level=error msg="Failed to destroy network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.006881 kubelet[3272]: E0213 19:46:11.006837 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:11.006881 kubelet[3272]: E0213 19:46:11.006853 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" podUID="168182d0-b9c4-48bd-9c74-28acdd82becf" Feb 13 19:46:11.006964 containerd[1805]: time="2025-02-13T19:46:11.006953532Z" level=error msg="encountered an error cleaning up failed sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.006985 containerd[1805]: time="2025-02-13T19:46:11.006977102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.007044 kubelet[3272]: E0213 19:46:11.007030 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.007066 kubelet[3272]: E0213 19:46:11.007052 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:11.007084 kubelet[3272]: E0213 19:46:11.007064 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:11.007126 kubelet[3272]: E0213 19:46:11.007081 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xvtfk" podUID="c5186765-6941-4a28-a06d-cf22cd68adee" Feb 13 19:46:11.009412 containerd[1805]: time="2025-02-13T19:46:11.009391842Z" level=error msg="Failed to destroy network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.009597 containerd[1805]: time="2025-02-13T19:46:11.009559396Z" level=error msg="encountered an error cleaning up failed sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.009659 containerd[1805]: time="2025-02-13T19:46:11.009604211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.009785 kubelet[3272]: E0213 19:46:11.009731 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:11.009823 kubelet[3272]: E0213 19:46:11.009788 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:11.009823 kubelet[3272]: E0213 19:46:11.009801 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:11.009878 kubelet[3272]: E0213 19:46:11.009824 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t77lg" 
podUID="4bb94446-f95c-44d1-9b40-e90a44987989" Feb 13 19:46:11.526039 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133-shm.mount: Deactivated successfully. Feb 13 19:46:11.972192 kubelet[3272]: I0213 19:46:11.972142 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133" Feb 13 19:46:11.973297 containerd[1805]: time="2025-02-13T19:46:11.973204326Z" level=info msg="StopPodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\"" Feb 13 19:46:11.974026 containerd[1805]: time="2025-02-13T19:46:11.973822594Z" level=info msg="Ensure that sandbox 515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133 in task-service has been cleanup successfully" Feb 13 19:46:11.974299 containerd[1805]: time="2025-02-13T19:46:11.974240775Z" level=info msg="TearDown network for sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" successfully" Feb 13 19:46:11.974527 containerd[1805]: time="2025-02-13T19:46:11.974296567Z" level=info msg="StopPodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" returns successfully" Feb 13 19:46:11.974981 containerd[1805]: time="2025-02-13T19:46:11.974907376Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\"" Feb 13 19:46:11.975196 containerd[1805]: time="2025-02-13T19:46:11.975144520Z" level=info msg="TearDown network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" successfully" Feb 13 19:46:11.975364 containerd[1805]: time="2025-02-13T19:46:11.975192577Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" returns successfully" Feb 13 19:46:11.975797 containerd[1805]: time="2025-02-13T19:46:11.975724297Z" level=info msg="StopPodSandbox for 
\"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" Feb 13 19:46:11.976054 containerd[1805]: time="2025-02-13T19:46:11.975988576Z" level=info msg="TearDown network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" successfully" Feb 13 19:46:11.976054 containerd[1805]: time="2025-02-13T19:46:11.976045450Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" returns successfully" Feb 13 19:46:11.976416 kubelet[3272]: I0213 19:46:11.976023 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b" Feb 13 19:46:11.977169 containerd[1805]: time="2025-02-13T19:46:11.977089195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:46:11.977455 containerd[1805]: time="2025-02-13T19:46:11.977365857Z" level=info msg="StopPodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\"" Feb 13 19:46:11.977739 containerd[1805]: time="2025-02-13T19:46:11.977728441Z" level=info msg="Ensure that sandbox 273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b in task-service has been cleanup successfully" Feb 13 19:46:11.977873 containerd[1805]: time="2025-02-13T19:46:11.977862845Z" level=info msg="TearDown network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" successfully" Feb 13 19:46:11.977873 containerd[1805]: time="2025-02-13T19:46:11.977871607Z" level=info msg="StopPodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" returns successfully" Feb 13 19:46:11.977992 containerd[1805]: time="2025-02-13T19:46:11.977979968Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\"" Feb 13 19:46:11.978031 containerd[1805]: 
time="2025-02-13T19:46:11.978020238Z" level=info msg="TearDown network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" successfully" Feb 13 19:46:11.978031 containerd[1805]: time="2025-02-13T19:46:11.978026989Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" returns successfully" Feb 13 19:46:11.978085 kubelet[3272]: I0213 19:46:11.978016 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe" Feb 13 19:46:11.978143 containerd[1805]: time="2025-02-13T19:46:11.978132189Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" Feb 13 19:46:11.978191 containerd[1805]: time="2025-02-13T19:46:11.978167124Z" level=info msg="TearDown network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" successfully" Feb 13 19:46:11.978226 containerd[1805]: time="2025-02-13T19:46:11.978191765Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" returns successfully" Feb 13 19:46:11.978226 containerd[1805]: time="2025-02-13T19:46:11.978213696Z" level=info msg="StopPodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\"" Feb 13 19:46:11.978332 containerd[1805]: time="2025-02-13T19:46:11.978320620Z" level=info msg="Ensure that sandbox 64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe in task-service has been cleanup successfully" Feb 13 19:46:11.978414 containerd[1805]: time="2025-02-13T19:46:11.978404500Z" level=info msg="TearDown network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" successfully" Feb 13 19:46:11.978453 containerd[1805]: time="2025-02-13T19:46:11.978413228Z" level=info msg="StopPodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" returns 
successfully" Feb 13 19:46:11.978453 containerd[1805]: time="2025-02-13T19:46:11.978409834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:3,}" Feb 13 19:46:11.978540 containerd[1805]: time="2025-02-13T19:46:11.978529488Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\"" Feb 13 19:46:11.978582 containerd[1805]: time="2025-02-13T19:46:11.978573653Z" level=info msg="TearDown network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" successfully" Feb 13 19:46:11.978612 containerd[1805]: time="2025-02-13T19:46:11.978581170Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" returns successfully" Feb 13 19:46:11.978619 systemd[1]: run-netns-cni\x2dfcf13840\x2d044d\x2d1db6\x2d7af5\x2d6929cebccc8f.mount: Deactivated successfully. Feb 13 19:46:11.978787 containerd[1805]: time="2025-02-13T19:46:11.978681049Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" Feb 13 19:46:11.978787 containerd[1805]: time="2025-02-13T19:46:11.978729266Z" level=info msg="TearDown network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" successfully" Feb 13 19:46:11.978787 containerd[1805]: time="2025-02-13T19:46:11.978739069Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" returns successfully" Feb 13 19:46:11.978851 kubelet[3272]: I0213 19:46:11.978684 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8" Feb 13 19:46:11.978893 containerd[1805]: time="2025-02-13T19:46:11.978881568Z" level=info msg="StopPodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\"" Feb 
13 19:46:11.978931 containerd[1805]: time="2025-02-13T19:46:11.978921403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:46:11.978973 containerd[1805]: time="2025-02-13T19:46:11.978965163Z" level=info msg="Ensure that sandbox e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8 in task-service has been cleanup successfully" Feb 13 19:46:11.979049 containerd[1805]: time="2025-02-13T19:46:11.979041309Z" level=info msg="TearDown network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" successfully" Feb 13 19:46:11.979070 containerd[1805]: time="2025-02-13T19:46:11.979048756Z" level=info msg="StopPodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" returns successfully" Feb 13 19:46:11.979165 containerd[1805]: time="2025-02-13T19:46:11.979153518Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\"" Feb 13 19:46:11.979217 containerd[1805]: time="2025-02-13T19:46:11.979197237Z" level=info msg="TearDown network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" successfully" Feb 13 19:46:11.979239 containerd[1805]: time="2025-02-13T19:46:11.979217993Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" returns successfully" Feb 13 19:46:11.979305 kubelet[3272]: I0213 19:46:11.979299 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d" Feb 13 19:46:11.979325 containerd[1805]: time="2025-02-13T19:46:11.979304264Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" Feb 13 19:46:11.979345 containerd[1805]: time="2025-02-13T19:46:11.979337751Z" level=info msg="TearDown network for 
sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" successfully" Feb 13 19:46:11.979345 containerd[1805]: time="2025-02-13T19:46:11.979343192Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" returns successfully" Feb 13 19:46:11.979486 containerd[1805]: time="2025-02-13T19:46:11.979476637Z" level=info msg="StopPodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\"" Feb 13 19:46:11.979509 containerd[1805]: time="2025-02-13T19:46:11.979490879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:3,}" Feb 13 19:46:11.979569 containerd[1805]: time="2025-02-13T19:46:11.979560817Z" level=info msg="Ensure that sandbox cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d in task-service has been cleanup successfully" Feb 13 19:46:11.979647 containerd[1805]: time="2025-02-13T19:46:11.979637557Z" level=info msg="TearDown network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" successfully" Feb 13 19:46:11.979671 containerd[1805]: time="2025-02-13T19:46:11.979647293Z" level=info msg="StopPodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" returns successfully" Feb 13 19:46:11.979804 containerd[1805]: time="2025-02-13T19:46:11.979791767Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\"" Feb 13 19:46:11.979853 containerd[1805]: time="2025-02-13T19:46:11.979842143Z" level=info msg="TearDown network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" successfully" Feb 13 19:46:11.979883 containerd[1805]: time="2025-02-13T19:46:11.979853013Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" returns successfully" Feb 13 19:46:11.979935 kubelet[3272]: I0213 
19:46:11.979927 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d" Feb 13 19:46:11.980027 containerd[1805]: time="2025-02-13T19:46:11.980012673Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" Feb 13 19:46:11.980072 containerd[1805]: time="2025-02-13T19:46:11.980062120Z" level=info msg="TearDown network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" successfully" Feb 13 19:46:11.980104 containerd[1805]: time="2025-02-13T19:46:11.980071640Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" returns successfully" Feb 13 19:46:11.980139 containerd[1805]: time="2025-02-13T19:46:11.980129637Z" level=info msg="StopPodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\"" Feb 13 19:46:11.980228 containerd[1805]: time="2025-02-13T19:46:11.980218145Z" level=info msg="Ensure that sandbox 27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d in task-service has been cleanup successfully" Feb 13 19:46:11.980288 containerd[1805]: time="2025-02-13T19:46:11.980276487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:3,}" Feb 13 19:46:11.980316 containerd[1805]: time="2025-02-13T19:46:11.980306329Z" level=info msg="TearDown network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" successfully" Feb 13 19:46:11.980338 containerd[1805]: time="2025-02-13T19:46:11.980317068Z" level=info msg="StopPodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" returns successfully" Feb 13 19:46:11.980378 systemd[1]: run-netns-cni\x2d79fcd952\x2de811\x2d6e66\x2d1dd5\x2d93a85a5e2eaf.mount: Deactivated successfully. 
Feb 13 19:46:11.980445 systemd[1]: run-netns-cni\x2d279537f3\x2d4760\x2d97c1\x2db8c7\x2dd3c4b26f0e7d.mount: Deactivated successfully. Feb 13 19:46:11.980477 containerd[1805]: time="2025-02-13T19:46:11.980441382Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\"" Feb 13 19:46:11.980483 systemd[1]: run-netns-cni\x2d04a2330b\x2d9025\x2d072e\x2def10\x2de2ddbc4a945c.mount: Deactivated successfully. Feb 13 19:46:11.980517 containerd[1805]: time="2025-02-13T19:46:11.980490127Z" level=info msg="TearDown network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" successfully" Feb 13 19:46:11.980517 containerd[1805]: time="2025-02-13T19:46:11.980498694Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" returns successfully" Feb 13 19:46:11.980629 containerd[1805]: time="2025-02-13T19:46:11.980614477Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" Feb 13 19:46:11.980665 containerd[1805]: time="2025-02-13T19:46:11.980657173Z" level=info msg="TearDown network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" successfully" Feb 13 19:46:11.980665 containerd[1805]: time="2025-02-13T19:46:11.980664497Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" returns successfully" Feb 13 19:46:11.980835 containerd[1805]: time="2025-02-13T19:46:11.980825097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:3,}" Feb 13 19:46:11.982295 systemd[1]: run-netns-cni\x2db166a85c\x2ddcb7\x2d8ab2\x2d7c7f\x2dd72ca13a0a23.mount: Deactivated successfully. Feb 13 19:46:11.982337 systemd[1]: run-netns-cni\x2db2c5a891\x2d0619\x2d720c\x2db3b7\x2d3c0db20de162.mount: Deactivated successfully. 
Feb 13 19:46:12.137661 containerd[1805]: time="2025-02-13T19:46:12.137616609Z" level=error msg="Failed to destroy network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.137888 containerd[1805]: time="2025-02-13T19:46:12.137867935Z" level=error msg="encountered an error cleaning up failed sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.137993 containerd[1805]: time="2025-02-13T19:46:12.137977253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138082 containerd[1805]: time="2025-02-13T19:46:12.138063159Z" level=error msg="Failed to destroy network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138129 containerd[1805]: time="2025-02-13T19:46:12.138070035Z" level=error msg="Failed to destroy network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138192 containerd[1805]: time="2025-02-13T19:46:12.138128465Z" level=error msg="Failed to destroy network for sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138227 kubelet[3272]: E0213 19:46:12.138191 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138268 kubelet[3272]: E0213 19:46:12.138238 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:12.138268 kubelet[3272]: E0213 19:46:12.138256 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" 
Feb 13 19:46:12.138330 containerd[1805]: time="2025-02-13T19:46:12.138215243Z" level=error msg="Failed to destroy network for sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138330 containerd[1805]: time="2025-02-13T19:46:12.138298796Z" level=error msg="encountered an error cleaning up failed sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138400 kubelet[3272]: E0213 19:46:12.138291 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t77lg" podUID="4bb94446-f95c-44d1-9b40-e90a44987989" Feb 13 19:46:12.138460 containerd[1805]: time="2025-02-13T19:46:12.138328731Z" level=error msg="encountered an error cleaning up failed sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138460 containerd[1805]: time="2025-02-13T19:46:12.138340581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138460 containerd[1805]: time="2025-02-13T19:46:12.138366284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138460 containerd[1805]: time="2025-02-13T19:46:12.138383173Z" level=error msg="encountered an error cleaning up failed sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138460 containerd[1805]: time="2025-02-13T19:46:12.138421494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138460 containerd[1805]: time="2025-02-13T19:46:12.138351739Z" level=error msg="encountered an error cleaning up failed sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138590 kubelet[3272]: E0213 19:46:12.138435 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138590 kubelet[3272]: E0213 19:46:12.138439 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138590 kubelet[3272]: E0213 19:46:12.138464 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:12.138590 kubelet[3272]: E0213 19:46:12.138467 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:12.138661 containerd[1805]: time="2025-02-13T19:46:12.138472123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138685 kubelet[3272]: E0213 19:46:12.138478 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:12.138685 kubelet[3272]: E0213 19:46:12.138482 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:12.138685 kubelet[3272]: E0213 19:46:12.138497 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138736 kubelet[3272]: E0213 19:46:12.138502 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" podUID="168182d0-b9c4-48bd-9c74-28acdd82becf" Feb 13 19:46:12.138736 kubelet[3272]: E0213 19:46:12.138505 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" podUID="922c820f-72e7-49c5-977f-4e21e9e5b030" Feb 13 19:46:12.138797 kubelet[3272]: E0213 19:46:12.138517 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:12.138797 kubelet[3272]: E0213 19:46:12.138531 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:12.138797 kubelet[3272]: E0213 19:46:12.138531 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.138797 kubelet[3272]: E0213 19:46:12.138547 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:12.138877 kubelet[3272]: E0213 19:46:12.138561 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:12.138877 kubelet[3272]: E0213 19:46:12.138583 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:12.138877 kubelet[3272]: E0213 19:46:12.138547 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xvtfk" podUID="c5186765-6941-4a28-a06d-cf22cd68adee" Feb 13 19:46:12.139238 containerd[1805]: time="2025-02-13T19:46:12.139222951Z" level=error msg="Failed to destroy network for sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.139389 containerd[1805]: time="2025-02-13T19:46:12.139375651Z" level=error msg="encountered an error cleaning up failed sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.139432 containerd[1805]: time="2025-02-13T19:46:12.139399898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.139476 kubelet[3272]: E0213 19:46:12.139466 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:12.139499 kubelet[3272]: E0213 19:46:12.139482 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:12.139499 kubelet[3272]: E0213 19:46:12.139493 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:12.139545 kubelet[3272]: E0213 19:46:12.139509 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" podUID="a3049fc5-6472-40ea-b289-504e898e9372" Feb 13 19:46:12.981994 kubelet[3272]: I0213 19:46:12.981977 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068" Feb 13 19:46:12.982259 containerd[1805]: time="2025-02-13T19:46:12.982236652Z" level=info msg="StopPodSandbox for \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\"" Feb 13 19:46:12.982398 containerd[1805]: time="2025-02-13T19:46:12.982384445Z" level=info msg="Ensure that sandbox 9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068 in task-service has been cleanup successfully" Feb 13 19:46:12.982526 containerd[1805]: time="2025-02-13T19:46:12.982513148Z" level=info msg="TearDown network for sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\" successfully" Feb 13 19:46:12.982556 containerd[1805]: time="2025-02-13T19:46:12.982524829Z" level=info msg="StopPodSandbox for \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\" returns successfully" Feb 13 19:46:12.982633 containerd[1805]: time="2025-02-13T19:46:12.982619072Z" level=info msg="StopPodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\"" Feb 13 19:46:12.982713 containerd[1805]: time="2025-02-13T19:46:12.982678369Z" level=info msg="TearDown network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" successfully" Feb 13 19:46:12.982733 containerd[1805]: time="2025-02-13T19:46:12.982713541Z" level=info msg="StopPodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" returns successfully" Feb 13 19:46:12.982840 containerd[1805]: time="2025-02-13T19:46:12.982828993Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\"" Feb 13 19:46:12.982886 containerd[1805]: 
time="2025-02-13T19:46:12.982877153Z" level=info msg="TearDown network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" successfully" Feb 13 19:46:12.982905 containerd[1805]: time="2025-02-13T19:46:12.982887329Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" returns successfully" Feb 13 19:46:12.982938 kubelet[3272]: I0213 19:46:12.982883 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780" Feb 13 19:46:12.983014 containerd[1805]: time="2025-02-13T19:46:12.983003998Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" Feb 13 19:46:12.983063 containerd[1805]: time="2025-02-13T19:46:12.983054380Z" level=info msg="TearDown network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" successfully" Feb 13 19:46:12.983082 containerd[1805]: time="2025-02-13T19:46:12.983064469Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" returns successfully" Feb 13 19:46:12.983122 containerd[1805]: time="2025-02-13T19:46:12.983111289Z" level=info msg="StopPodSandbox for \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\"" Feb 13 19:46:12.983238 containerd[1805]: time="2025-02-13T19:46:12.983228676Z" level=info msg="Ensure that sandbox 3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780 in task-service has been cleanup successfully" Feb 13 19:46:12.983260 containerd[1805]: time="2025-02-13T19:46:12.983246040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:4,}" Feb 13 19:46:12.983346 containerd[1805]: time="2025-02-13T19:46:12.983338422Z" level=info msg="TearDown network for sandbox 
\"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\" successfully" Feb 13 19:46:12.983369 containerd[1805]: time="2025-02-13T19:46:12.983346535Z" level=info msg="StopPodSandbox for \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\" returns successfully" Feb 13 19:46:12.983469 containerd[1805]: time="2025-02-13T19:46:12.983457296Z" level=info msg="StopPodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\"" Feb 13 19:46:12.983521 containerd[1805]: time="2025-02-13T19:46:12.983511808Z" level=info msg="TearDown network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" successfully" Feb 13 19:46:12.983542 containerd[1805]: time="2025-02-13T19:46:12.983521567Z" level=info msg="StopPodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" returns successfully" Feb 13 19:46:12.983605 kubelet[3272]: I0213 19:46:12.983595 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff" Feb 13 19:46:12.983645 containerd[1805]: time="2025-02-13T19:46:12.983635857Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\"" Feb 13 19:46:12.983693 containerd[1805]: time="2025-02-13T19:46:12.983684404Z" level=info msg="TearDown network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" successfully" Feb 13 19:46:12.983720 containerd[1805]: time="2025-02-13T19:46:12.983693763Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" returns successfully" Feb 13 19:46:12.983815 containerd[1805]: time="2025-02-13T19:46:12.983801700Z" level=info msg="StopPodSandbox for \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\"" Feb 13 19:46:12.983848 containerd[1805]: time="2025-02-13T19:46:12.983816526Z" level=info msg="StopPodSandbox for 
\"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" Feb 13 19:46:12.983866 containerd[1805]: time="2025-02-13T19:46:12.983856960Z" level=info msg="TearDown network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" successfully" Feb 13 19:46:12.983866 containerd[1805]: time="2025-02-13T19:46:12.983863803Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" returns successfully" Feb 13 19:46:12.983937 containerd[1805]: time="2025-02-13T19:46:12.983926363Z" level=info msg="Ensure that sandbox 13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff in task-service has been cleanup successfully" Feb 13 19:46:12.984030 containerd[1805]: time="2025-02-13T19:46:12.984017867Z" level=info msg="TearDown network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\" successfully" Feb 13 19:46:12.984067 containerd[1805]: time="2025-02-13T19:46:12.984028949Z" level=info msg="StopPodSandbox for \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\" returns successfully" Feb 13 19:46:12.984067 containerd[1805]: time="2025-02-13T19:46:12.984058858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:4,}" Feb 13 19:46:12.984119 systemd[1]: run-netns-cni\x2debf1f38f\x2dc02d\x2dd69d\x2d237a\x2de526e2210871.mount: Deactivated successfully. 
Feb 13 19:46:12.984275 containerd[1805]: time="2025-02-13T19:46:12.984135520Z" level=info msg="StopPodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\"" Feb 13 19:46:12.984275 containerd[1805]: time="2025-02-13T19:46:12.984184943Z" level=info msg="TearDown network for sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" successfully" Feb 13 19:46:12.984275 containerd[1805]: time="2025-02-13T19:46:12.984195310Z" level=info msg="StopPodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" returns successfully" Feb 13 19:46:12.984330 containerd[1805]: time="2025-02-13T19:46:12.984303845Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\"" Feb 13 19:46:12.984369 containerd[1805]: time="2025-02-13T19:46:12.984358752Z" level=info msg="TearDown network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" successfully" Feb 13 19:46:12.984397 containerd[1805]: time="2025-02-13T19:46:12.984369475Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" returns successfully" Feb 13 19:46:12.984451 kubelet[3272]: I0213 19:46:12.984441 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212" Feb 13 19:46:12.984516 containerd[1805]: time="2025-02-13T19:46:12.984505091Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" Feb 13 19:46:12.984559 containerd[1805]: time="2025-02-13T19:46:12.984552366Z" level=info msg="TearDown network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" successfully" Feb 13 19:46:12.984586 containerd[1805]: time="2025-02-13T19:46:12.984559717Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" returns 
successfully" Feb 13 19:46:12.984683 containerd[1805]: time="2025-02-13T19:46:12.984674760Z" level=info msg="StopPodSandbox for \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\"" Feb 13 19:46:12.984780 containerd[1805]: time="2025-02-13T19:46:12.984767358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:46:12.984807 containerd[1805]: time="2025-02-13T19:46:12.984770028Z" level=info msg="Ensure that sandbox 07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212 in task-service has been cleanup successfully" Feb 13 19:46:12.984907 containerd[1805]: time="2025-02-13T19:46:12.984896832Z" level=info msg="TearDown network for sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\" successfully" Feb 13 19:46:12.984927 containerd[1805]: time="2025-02-13T19:46:12.984906516Z" level=info msg="StopPodSandbox for \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\" returns successfully" Feb 13 19:46:12.985014 containerd[1805]: time="2025-02-13T19:46:12.985006555Z" level=info msg="StopPodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\"" Feb 13 19:46:12.985047 containerd[1805]: time="2025-02-13T19:46:12.985040875Z" level=info msg="TearDown network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" successfully" Feb 13 19:46:12.985066 containerd[1805]: time="2025-02-13T19:46:12.985047009Z" level=info msg="StopPodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" returns successfully" Feb 13 19:46:12.985155 containerd[1805]: time="2025-02-13T19:46:12.985145319Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\"" Feb 13 19:46:12.985189 containerd[1805]: time="2025-02-13T19:46:12.985182359Z" level=info msg="TearDown network for sandbox 
\"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" successfully" Feb 13 19:46:12.985213 containerd[1805]: time="2025-02-13T19:46:12.985188999Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" returns successfully" Feb 13 19:46:12.985274 kubelet[3272]: I0213 19:46:12.985265 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489" Feb 13 19:46:12.985342 containerd[1805]: time="2025-02-13T19:46:12.985332847Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" Feb 13 19:46:12.985396 containerd[1805]: time="2025-02-13T19:46:12.985388167Z" level=info msg="TearDown network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" successfully" Feb 13 19:46:12.985396 containerd[1805]: time="2025-02-13T19:46:12.985395400Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" returns successfully" Feb 13 19:46:12.985485 containerd[1805]: time="2025-02-13T19:46:12.985474424Z" level=info msg="StopPodSandbox for \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\"" Feb 13 19:46:12.985584 containerd[1805]: time="2025-02-13T19:46:12.985571127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:4,}" Feb 13 19:46:12.985610 containerd[1805]: time="2025-02-13T19:46:12.985579556Z" level=info msg="Ensure that sandbox ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489 in task-service has been cleanup successfully" Feb 13 19:46:12.985683 containerd[1805]: time="2025-02-13T19:46:12.985674625Z" level=info msg="TearDown network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\" successfully" Feb 13 19:46:12.985712 containerd[1805]: 
time="2025-02-13T19:46:12.985682997Z" level=info msg="StopPodSandbox for \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\" returns successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.985796019Z" level=info msg="StopPodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\"" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.985849991Z" level=info msg="TearDown network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.985860107Z" level=info msg="StopPodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" returns successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.985975106Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\"" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986024292Z" level=info msg="TearDown network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986030476Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" returns successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986125338Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986162112Z" level=info msg="TearDown network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986168301Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" returns successfully" Feb 13 19:46:12.987102 containerd[1805]: 
time="2025-02-13T19:46:12.986229800Z" level=info msg="StopPodSandbox for \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\"" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986298923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986310459Z" level=info msg="Ensure that sandbox 8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d in task-service has been cleanup successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986389101Z" level=info msg="TearDown network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\" successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986398539Z" level=info msg="StopPodSandbox for \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\" returns successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986516040Z" level=info msg="StopPodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\"" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986556491Z" level=info msg="TearDown network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986562931Z" level=info msg="StopPodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" returns successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986662480Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\"" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986705763Z" level=info msg="TearDown network for sandbox 
\"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986712337Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" returns successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986813723Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986848447Z" level=info msg="TearDown network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.986854261Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" returns successfully" Feb 13 19:46:12.987102 containerd[1805]: time="2025-02-13T19:46:12.987011779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:4,}" Feb 13 19:46:12.986629 systemd[1]: run-netns-cni\x2d320a861d\x2de8bc\x2d8d5b\x2d904b\x2dea179a7c9c5b.mount: Deactivated successfully. Feb 13 19:46:12.987516 kubelet[3272]: I0213 19:46:12.986032 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d" Feb 13 19:46:12.986681 systemd[1]: run-netns-cni\x2d95aa0255\x2d8d2e\x2d1d92\x2db2ec\x2df35158b35cfa.mount: Deactivated successfully. Feb 13 19:46:12.986715 systemd[1]: run-netns-cni\x2dbf5fa982\x2dacf9\x2dd2c2\x2dd2a4\x2d2c074b801e31.mount: Deactivated successfully. Feb 13 19:46:12.988655 systemd[1]: run-netns-cni\x2dc829590b\x2dd48c\x2df504\x2d2fa6\x2d56bb153b2948.mount: Deactivated successfully. 
Feb 13 19:46:12.988699 systemd[1]: run-netns-cni\x2dc6ec84b3\x2d29b4\x2d0cb6\x2d212e\x2d94f531042905.mount: Deactivated successfully. Feb 13 19:46:13.030016 containerd[1805]: time="2025-02-13T19:46:13.029908882Z" level=error msg="Failed to destroy network for sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.030246 containerd[1805]: time="2025-02-13T19:46:13.030230278Z" level=error msg="encountered an error cleaning up failed sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.030297 containerd[1805]: time="2025-02-13T19:46:13.030273911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.030601 kubelet[3272]: E0213 19:46:13.030563 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.030668 
kubelet[3272]: E0213 19:46:13.030627 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:13.030668 kubelet[3272]: E0213 19:46:13.030649 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xvtfk" Feb 13 19:46:13.030717 kubelet[3272]: E0213 19:46:13.030690 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xvtfk_kube-system(c5186765-6941-4a28-a06d-cf22cd68adee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xvtfk" podUID="c5186765-6941-4a28-a06d-cf22cd68adee" Feb 13 19:46:13.037109 containerd[1805]: time="2025-02-13T19:46:13.037072877Z" level=error msg="Failed to destroy network for sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.037213 containerd[1805]: time="2025-02-13T19:46:13.037126809Z" level=error msg="Failed to destroy network for sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.037282 containerd[1805]: time="2025-02-13T19:46:13.037268707Z" level=error msg="encountered an error cleaning up failed sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.037314 containerd[1805]: time="2025-02-13T19:46:13.037296271Z" level=error msg="encountered an error cleaning up failed sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.037336 containerd[1805]: time="2025-02-13T19:46:13.037307527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:46:13.037376 containerd[1805]: time="2025-02-13T19:46:13.037328662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.037470 kubelet[3272]: E0213 19:46:13.037449 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.037506 kubelet[3272]: E0213 19:46:13.037487 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:13.037506 kubelet[3272]: E0213 19:46:13.037500 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" Feb 13 19:46:13.037541 kubelet[3272]: E0213 19:46:13.037450 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.037541 kubelet[3272]: E0213 19:46:13.037525 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7469c76fc6-p6gv2_calico-apiserver(922c820f-72e7-49c5-977f-4e21e9e5b030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" podUID="922c820f-72e7-49c5-977f-4e21e9e5b030" Feb 13 19:46:13.037590 kubelet[3272]: E0213 19:46:13.037543 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:13.037590 kubelet[3272]: E0213 19:46:13.037560 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" Feb 13 19:46:13.037627 kubelet[3272]: E0213 19:46:13.037584 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54fdf9b76f-wsc67_calico-system(a3049fc5-6472-40ea-b289-504e898e9372)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" podUID="a3049fc5-6472-40ea-b289-504e898e9372" Feb 13 19:46:13.037855 containerd[1805]: time="2025-02-13T19:46:13.037836749Z" level=error msg="Failed to destroy network for sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.037986 containerd[1805]: time="2025-02-13T19:46:13.037973599Z" level=error msg="encountered an error cleaning up failed sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.038011 containerd[1805]: time="2025-02-13T19:46:13.037997700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.038077 kubelet[3272]: E0213 19:46:13.038065 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.038101 kubelet[3272]: E0213 19:46:13.038085 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:13.038101 kubelet[3272]: E0213 19:46:13.038096 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" Feb 13 19:46:13.038157 kubelet[3272]: E0213 19:46:13.038115 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7469c76fc6-shmk8_calico-apiserver(168182d0-b9c4-48bd-9c74-28acdd82becf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" podUID="168182d0-b9c4-48bd-9c74-28acdd82becf" Feb 13 19:46:13.038863 containerd[1805]: time="2025-02-13T19:46:13.038845597Z" level=error msg="Failed to destroy network for sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.039033 containerd[1805]: time="2025-02-13T19:46:13.039017549Z" level=error msg="encountered an error cleaning up failed sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.039065 containerd[1805]: time="2025-02-13T19:46:13.039052493Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.039147 kubelet[3272]: E0213 19:46:13.039134 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.039183 kubelet[3272]: E0213 19:46:13.039161 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:13.039183 kubelet[3272]: E0213 19:46:13.039177 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkkt7" Feb 13 19:46:13.039238 kubelet[3272]: E0213 19:46:13.039204 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vkkt7_calico-system(f3327118-2549-4d08-a802-8c7cfa7fb673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vkkt7" podUID="f3327118-2549-4d08-a802-8c7cfa7fb673" Feb 13 19:46:13.041861 containerd[1805]: time="2025-02-13T19:46:13.041840016Z" level=error msg="Failed to destroy network for sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.042041 containerd[1805]: time="2025-02-13T19:46:13.042023182Z" level=error msg="encountered an error cleaning up failed sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.042066 containerd[1805]: time="2025-02-13T19:46:13.042057533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 19:46:13.042154 kubelet[3272]: E0213 19:46:13.042140 3272 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:46:13.042198 kubelet[3272]: E0213 19:46:13.042169 3272 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:13.042198 kubelet[3272]: E0213 19:46:13.042186 3272 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t77lg" Feb 13 19:46:13.042256 kubelet[3272]: E0213 19:46:13.042214 3272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t77lg_kube-system(4bb94446-f95c-44d1-9b40-e90a44987989)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t77lg" podUID="4bb94446-f95c-44d1-9b40-e90a44987989" Feb 13 19:46:13.445275 containerd[1805]: time="2025-02-13T19:46:13.445190081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:13.445435 containerd[1805]: time="2025-02-13T19:46:13.445415851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:46:13.445808 containerd[1805]: time="2025-02-13T19:46:13.445760452Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:13.446670 containerd[1805]: time="2025-02-13T19:46:13.446630086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:13.447200 containerd[1805]: time="2025-02-13T19:46:13.447157280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.487473962s" Feb 13 19:46:13.447200 containerd[1805]: time="2025-02-13T19:46:13.447177915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 
19:46:13.450720 containerd[1805]: time="2025-02-13T19:46:13.450668267Z" level=info msg="CreateContainer within sandbox \"91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:46:13.462032 containerd[1805]: time="2025-02-13T19:46:13.461987445Z" level=info msg="CreateContainer within sandbox \"91182644a72703e3fbdf2a623f27621c8437476e7a62e635a877b014bfbcf140\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e4e663e44b4425acdc1abe93097f46d8d2c8b730a5c3ed1cb27c5e60d0278379\"" Feb 13 19:46:13.462230 containerd[1805]: time="2025-02-13T19:46:13.462188822Z" level=info msg="StartContainer for \"e4e663e44b4425acdc1abe93097f46d8d2c8b730a5c3ed1cb27c5e60d0278379\"" Feb 13 19:46:13.483822 systemd[1]: Started cri-containerd-e4e663e44b4425acdc1abe93097f46d8d2c8b730a5c3ed1cb27c5e60d0278379.scope - libcontainer container e4e663e44b4425acdc1abe93097f46d8d2c8b730a5c3ed1cb27c5e60d0278379. Feb 13 19:46:13.534198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4-shm.mount: Deactivated successfully. Feb 13 19:46:13.534334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013586029.mount: Deactivated successfully. Feb 13 19:46:13.535192 containerd[1805]: time="2025-02-13T19:46:13.535149254Z" level=info msg="StartContainer for \"e4e663e44b4425acdc1abe93097f46d8d2c8b730a5c3ed1cb27c5e60d0278379\" returns successfully" Feb 13 19:46:13.609491 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:46:13.609549 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 19:46:13.994051 kubelet[3272]: I0213 19:46:13.993990 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b" Feb 13 19:46:13.995247 containerd[1805]: time="2025-02-13T19:46:13.995176972Z" level=info msg="StopPodSandbox for \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\"" Feb 13 19:46:13.996084 containerd[1805]: time="2025-02-13T19:46:13.995678427Z" level=info msg="Ensure that sandbox 69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b in task-service has been cleanup successfully" Feb 13 19:46:13.996084 containerd[1805]: time="2025-02-13T19:46:13.996065998Z" level=info msg="TearDown network for sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\" successfully" Feb 13 19:46:13.996412 containerd[1805]: time="2025-02-13T19:46:13.996104390Z" level=info msg="StopPodSandbox for \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\" returns successfully" Feb 13 19:46:13.996782 containerd[1805]: time="2025-02-13T19:46:13.996708691Z" level=info msg="StopPodSandbox for \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\"" Feb 13 19:46:13.997046 containerd[1805]: time="2025-02-13T19:46:13.996991652Z" level=info msg="TearDown network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\" successfully" Feb 13 19:46:13.997145 containerd[1805]: time="2025-02-13T19:46:13.997051033Z" level=info msg="StopPodSandbox for \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\" returns successfully" Feb 13 19:46:13.997627 containerd[1805]: time="2025-02-13T19:46:13.997553793Z" level=info msg="StopPodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\"" Feb 13 19:46:13.997889 containerd[1805]: time="2025-02-13T19:46:13.997782537Z" level=info msg="TearDown network for sandbox 
\"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" successfully" Feb 13 19:46:13.998055 containerd[1805]: time="2025-02-13T19:46:13.997887851Z" level=info msg="StopPodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" returns successfully" Feb 13 19:46:13.998353 kubelet[3272]: I0213 19:46:13.998319 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98" Feb 13 19:46:13.998513 containerd[1805]: time="2025-02-13T19:46:13.998384960Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\"" Feb 13 19:46:13.998660 containerd[1805]: time="2025-02-13T19:46:13.998628072Z" level=info msg="TearDown network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" successfully" Feb 13 19:46:13.998767 containerd[1805]: time="2025-02-13T19:46:13.998663934Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" returns successfully" Feb 13 19:46:13.999341 containerd[1805]: time="2025-02-13T19:46:13.999289982Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" Feb 13 19:46:13.999547 containerd[1805]: time="2025-02-13T19:46:13.999309899Z" level=info msg="StopPodSandbox for \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\"" Feb 13 19:46:13.999547 containerd[1805]: time="2025-02-13T19:46:13.999508071Z" level=info msg="TearDown network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" successfully" Feb 13 19:46:13.999547 containerd[1805]: time="2025-02-13T19:46:13.999540228Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" returns successfully" Feb 13 19:46:14.000029 containerd[1805]: time="2025-02-13T19:46:13.999966139Z" level=info msg="Ensure that sandbox 
ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98 in task-service has been cleanup successfully" Feb 13 19:46:14.000445 containerd[1805]: time="2025-02-13T19:46:14.000366551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:46:14.000564 containerd[1805]: time="2025-02-13T19:46:14.000480930Z" level=info msg="TearDown network for sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\" successfully" Feb 13 19:46:14.000564 containerd[1805]: time="2025-02-13T19:46:14.000536236Z" level=info msg="StopPodSandbox for \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\" returns successfully" Feb 13 19:46:14.000811 containerd[1805]: time="2025-02-13T19:46:14.000799029Z" level=info msg="StopPodSandbox for \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\"" Feb 13 19:46:14.000893 containerd[1805]: time="2025-02-13T19:46:14.000883284Z" level=info msg="TearDown network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\" successfully" Feb 13 19:46:14.000915 containerd[1805]: time="2025-02-13T19:46:14.000894440Z" level=info msg="StopPodSandbox for \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\" returns successfully" Feb 13 19:46:14.001047 containerd[1805]: time="2025-02-13T19:46:14.001036875Z" level=info msg="StopPodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\"" Feb 13 19:46:14.001085 containerd[1805]: time="2025-02-13T19:46:14.001078245Z" level=info msg="TearDown network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" successfully" Feb 13 19:46:14.001103 containerd[1805]: time="2025-02-13T19:46:14.001085291Z" level=info msg="StopPodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" returns successfully" Feb 13 19:46:14.001186 
containerd[1805]: time="2025-02-13T19:46:14.001176902Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\"" Feb 13 19:46:14.001223 containerd[1805]: time="2025-02-13T19:46:14.001216622Z" level=info msg="TearDown network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" successfully" Feb 13 19:46:14.001247 containerd[1805]: time="2025-02-13T19:46:14.001223638Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" returns successfully" Feb 13 19:46:14.001265 kubelet[3272]: I0213 19:46:14.001233 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4" Feb 13 19:46:14.001333 containerd[1805]: time="2025-02-13T19:46:14.001325008Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" Feb 13 19:46:14.001380 containerd[1805]: time="2025-02-13T19:46:14.001369294Z" level=info msg="TearDown network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" successfully" Feb 13 19:46:14.001411 containerd[1805]: time="2025-02-13T19:46:14.001379554Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" returns successfully" Feb 13 19:46:14.001453 containerd[1805]: time="2025-02-13T19:46:14.001429717Z" level=info msg="StopPodSandbox for \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\"" Feb 13 19:46:14.001547 containerd[1805]: time="2025-02-13T19:46:14.001536670Z" level=info msg="Ensure that sandbox 42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4 in task-service has been cleanup successfully" Feb 13 19:46:14.001581 containerd[1805]: time="2025-02-13T19:46:14.001545606Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:5,}" Feb 13 19:46:14.001646 containerd[1805]: time="2025-02-13T19:46:14.001635040Z" level=info msg="TearDown network for sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\" successfully" Feb 13 19:46:14.001677 containerd[1805]: time="2025-02-13T19:46:14.001644786Z" level=info msg="StopPodSandbox for \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\" returns successfully" Feb 13 19:46:14.001714 systemd[1]: run-netns-cni\x2d60694715\x2db63f\x2dcf8e\x2d842b\x2d5ffb9e20a0bd.mount: Deactivated successfully. Feb 13 19:46:14.001832 containerd[1805]: time="2025-02-13T19:46:14.001760444Z" level=info msg="StopPodSandbox for \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\"" Feb 13 19:46:14.001832 containerd[1805]: time="2025-02-13T19:46:14.001813687Z" level=info msg="TearDown network for sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\" successfully" Feb 13 19:46:14.001868 containerd[1805]: time="2025-02-13T19:46:14.001832411Z" level=info msg="StopPodSandbox for \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\" returns successfully" Feb 13 19:46:14.001978 containerd[1805]: time="2025-02-13T19:46:14.001968722Z" level=info msg="StopPodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\"" Feb 13 19:46:14.002021 containerd[1805]: time="2025-02-13T19:46:14.002013415Z" level=info msg="TearDown network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" successfully" Feb 13 19:46:14.002041 containerd[1805]: time="2025-02-13T19:46:14.002021206Z" level=info msg="StopPodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" returns successfully" Feb 13 19:46:14.002142 kubelet[3272]: I0213 19:46:14.002133 3272 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86" Feb 13 19:46:14.002171 containerd[1805]: time="2025-02-13T19:46:14.002134372Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\"" Feb 13 19:46:14.002219 containerd[1805]: time="2025-02-13T19:46:14.002186826Z" level=info msg="TearDown network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" successfully" Feb 13 19:46:14.002251 containerd[1805]: time="2025-02-13T19:46:14.002219465Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" returns successfully" Feb 13 19:46:14.002356 containerd[1805]: time="2025-02-13T19:46:14.002344602Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" Feb 13 19:46:14.002396 containerd[1805]: time="2025-02-13T19:46:14.002348097Z" level=info msg="StopPodSandbox for \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\"" Feb 13 19:46:14.002416 containerd[1805]: time="2025-02-13T19:46:14.002396052Z" level=info msg="TearDown network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" successfully" Feb 13 19:46:14.002416 containerd[1805]: time="2025-02-13T19:46:14.002405609Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" returns successfully" Feb 13 19:46:14.002501 containerd[1805]: time="2025-02-13T19:46:14.002491380Z" level=info msg="Ensure that sandbox 83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86 in task-service has been cleanup successfully" Feb 13 19:46:14.002583 containerd[1805]: time="2025-02-13T19:46:14.002570362Z" level=info msg="TearDown network for sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\" successfully" Feb 13 19:46:14.002583 containerd[1805]: time="2025-02-13T19:46:14.002581883Z" level=info msg="StopPodSandbox 
for \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\" returns successfully" Feb 13 19:46:14.002653 containerd[1805]: time="2025-02-13T19:46:14.002588826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:5,}" Feb 13 19:46:14.002697 containerd[1805]: time="2025-02-13T19:46:14.002688676Z" level=info msg="StopPodSandbox for \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\"" Feb 13 19:46:14.002737 containerd[1805]: time="2025-02-13T19:46:14.002731322Z" level=info msg="TearDown network for sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\" successfully" Feb 13 19:46:14.002763 containerd[1805]: time="2025-02-13T19:46:14.002737799Z" level=info msg="StopPodSandbox for \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\" returns successfully" Feb 13 19:46:14.002858 containerd[1805]: time="2025-02-13T19:46:14.002848878Z" level=info msg="StopPodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\"" Feb 13 19:46:14.002910 containerd[1805]: time="2025-02-13T19:46:14.002890487Z" level=info msg="TearDown network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" successfully" Feb 13 19:46:14.002930 containerd[1805]: time="2025-02-13T19:46:14.002910719Z" level=info msg="StopPodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" returns successfully" Feb 13 19:46:14.003010 containerd[1805]: time="2025-02-13T19:46:14.002999960Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\"" Feb 13 19:46:14.003045 containerd[1805]: time="2025-02-13T19:46:14.003038648Z" level=info msg="TearDown network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" successfully" Feb 13 19:46:14.003066 containerd[1805]: time="2025-02-13T19:46:14.003045256Z" 
level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" returns successfully" Feb 13 19:46:14.003165 containerd[1805]: time="2025-02-13T19:46:14.003156184Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" Feb 13 19:46:14.003203 containerd[1805]: time="2025-02-13T19:46:14.003193282Z" level=info msg="TearDown network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" successfully" Feb 13 19:46:14.003203 containerd[1805]: time="2025-02-13T19:46:14.003200310Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" returns successfully" Feb 13 19:46:14.003499 containerd[1805]: time="2025-02-13T19:46:14.003486353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:5,}" Feb 13 19:46:14.003827 systemd[1]: run-netns-cni\x2d0aa40822\x2d64b8\x2d9873\x2d4024\x2d1364b48bfa8f.mount: Deactivated successfully. Feb 13 19:46:14.003897 systemd[1]: run-netns-cni\x2d989416a4\x2d7579\x2dc280\x2d7592\x2d28decd412192.mount: Deactivated successfully. 
Feb 13 19:46:14.004164 kubelet[3272]: I0213 19:46:14.004152 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1" Feb 13 19:46:14.004386 containerd[1805]: time="2025-02-13T19:46:14.004372962Z" level=info msg="StopPodSandbox for \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\"" Feb 13 19:46:14.004525 containerd[1805]: time="2025-02-13T19:46:14.004512088Z" level=info msg="Ensure that sandbox 2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1 in task-service has been cleanup successfully" Feb 13 19:46:14.004639 containerd[1805]: time="2025-02-13T19:46:14.004616693Z" level=info msg="TearDown network for sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\" successfully" Feb 13 19:46:14.004639 containerd[1805]: time="2025-02-13T19:46:14.004638121Z" level=info msg="StopPodSandbox for \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\" returns successfully" Feb 13 19:46:14.004806 containerd[1805]: time="2025-02-13T19:46:14.004786363Z" level=info msg="StopPodSandbox for \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\"" Feb 13 19:46:14.004857 containerd[1805]: time="2025-02-13T19:46:14.004846229Z" level=info msg="TearDown network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\" successfully" Feb 13 19:46:14.004888 containerd[1805]: time="2025-02-13T19:46:14.004856936Z" level=info msg="StopPodSandbox for \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\" returns successfully" Feb 13 19:46:14.004998 containerd[1805]: time="2025-02-13T19:46:14.004983678Z" level=info msg="StopPodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\"" Feb 13 19:46:14.005059 containerd[1805]: time="2025-02-13T19:46:14.005044458Z" level=info msg="TearDown network for sandbox 
\"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" successfully" Feb 13 19:46:14.005105 containerd[1805]: time="2025-02-13T19:46:14.005057761Z" level=info msg="StopPodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" returns successfully" Feb 13 19:46:14.005255 containerd[1805]: time="2025-02-13T19:46:14.005243552Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\"" Feb 13 19:46:14.005299 containerd[1805]: time="2025-02-13T19:46:14.005290478Z" level=info msg="TearDown network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" successfully" Feb 13 19:46:14.005329 containerd[1805]: time="2025-02-13T19:46:14.005299039Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" returns successfully" Feb 13 19:46:14.005358 kubelet[3272]: I0213 19:46:14.005332 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a" Feb 13 19:46:14.005477 containerd[1805]: time="2025-02-13T19:46:14.005463473Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" Feb 13 19:46:14.005522 containerd[1805]: time="2025-02-13T19:46:14.005505310Z" level=info msg="TearDown network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" successfully" Feb 13 19:46:14.005522 containerd[1805]: time="2025-02-13T19:46:14.005511970Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" returns successfully" Feb 13 19:46:14.005576 containerd[1805]: time="2025-02-13T19:46:14.005545202Z" level=info msg="StopPodSandbox for \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\"" Feb 13 19:46:14.005661 containerd[1805]: time="2025-02-13T19:46:14.005650292Z" level=info msg="Ensure that sandbox 
f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a in task-service has been cleanup successfully" Feb 13 19:46:14.005708 containerd[1805]: time="2025-02-13T19:46:14.005698063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:46:14.005751 containerd[1805]: time="2025-02-13T19:46:14.005740823Z" level=info msg="TearDown network for sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\" successfully" Feb 13 19:46:14.005780 containerd[1805]: time="2025-02-13T19:46:14.005751445Z" level=info msg="StopPodSandbox for \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\" returns successfully" Feb 13 19:46:14.005874 containerd[1805]: time="2025-02-13T19:46:14.005861854Z" level=info msg="StopPodSandbox for \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\"" Feb 13 19:46:14.005914 containerd[1805]: time="2025-02-13T19:46:14.005906541Z" level=info msg="TearDown network for sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\" successfully" Feb 13 19:46:14.005943 containerd[1805]: time="2025-02-13T19:46:14.005913995Z" level=info msg="StopPodSandbox for \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\" returns successfully" Feb 13 19:46:14.006029 containerd[1805]: time="2025-02-13T19:46:14.006019094Z" level=info msg="StopPodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\"" Feb 13 19:46:14.006066 containerd[1805]: time="2025-02-13T19:46:14.006058074Z" level=info msg="TearDown network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" successfully" Feb 13 19:46:14.006096 containerd[1805]: time="2025-02-13T19:46:14.006065502Z" level=info msg="StopPodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" returns successfully" Feb 13 19:46:14.006176 
systemd[1]: run-netns-cni\x2d1eb73c29\x2db171\x2d6ace\x2d6b24\x2d54485f8834d2.mount: Deactivated successfully. Feb 13 19:46:14.006235 containerd[1805]: time="2025-02-13T19:46:14.006186996Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\"" Feb 13 19:46:14.006270 containerd[1805]: time="2025-02-13T19:46:14.006243486Z" level=info msg="TearDown network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" successfully" Feb 13 19:46:14.006270 containerd[1805]: time="2025-02-13T19:46:14.006254154Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" returns successfully" Feb 13 19:46:14.006240 systemd[1]: run-netns-cni\x2dfebd1459\x2d776c\x2d1193\x2dec43\x2df5a11d684d67.mount: Deactivated successfully. Feb 13 19:46:14.006405 containerd[1805]: time="2025-02-13T19:46:14.006390174Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" Feb 13 19:46:14.006461 containerd[1805]: time="2025-02-13T19:46:14.006451166Z" level=info msg="TearDown network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" successfully" Feb 13 19:46:14.006491 containerd[1805]: time="2025-02-13T19:46:14.006461400Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" returns successfully" Feb 13 19:46:14.006693 containerd[1805]: time="2025-02-13T19:46:14.006679127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:5,}" Feb 13 19:46:14.009324 systemd[1]: run-netns-cni\x2da620191c\x2d10ff\x2d2ec4\x2d4083\x2d6a5cf7ecefdc.mount: Deactivated successfully. 
Feb 13 19:46:14.010998 kubelet[3272]: I0213 19:46:14.010967 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9sh44" podStartSLOduration=1.899063316 podStartE2EDuration="13.01095702s" podCreationTimestamp="2025-02-13 19:46:01 +0000 UTC" firstStartedPulling="2025-02-13 19:46:02.335641141 +0000 UTC m=+21.506847806" lastFinishedPulling="2025-02-13 19:46:13.447534849 +0000 UTC m=+32.618741510" observedRunningTime="2025-02-13 19:46:14.010399744 +0000 UTC m=+33.181606404" watchObservedRunningTime="2025-02-13 19:46:14.01095702 +0000 UTC m=+33.182163678" Feb 13 19:46:14.089290 systemd-networkd[1720]: calibf14f644f4b: Link UP Feb 13 19:46:14.089399 systemd-networkd[1720]: cali6225cdaf80b: Link UP Feb 13 19:46:14.089545 systemd-networkd[1720]: calibf14f644f4b: Gained carrier Feb 13 19:46:14.089675 systemd-networkd[1720]: cali6225cdaf80b: Gained carrier Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.030 [INFO][5728] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.036 [INFO][5728] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0 coredns-7db6d8ff4d- kube-system c5186765-6941-4a28-a06d-cf22cd68adee 691 0 2025-02-13 19:45:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.1-a-a8b3a25f31 coredns-7db6d8ff4d-xvtfk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6225cdaf80b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xvtfk" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-" 
Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.036 [INFO][5728] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xvtfk" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.051 [INFO][5829] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" HandleID="k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.057 [INFO][5829] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" HandleID="k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011dc80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.1-a-a8b3a25f31", "pod":"coredns-7db6d8ff4d-xvtfk", "timestamp":"2025-02-13 19:46:14.051801219 +0000 UTC"}, Hostname:"ci-4186.1.1-a-a8b3a25f31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.057 [INFO][5829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.057 [INFO][5829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.057 [INFO][5829] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-a8b3a25f31' Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.058 [INFO][5829] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.062 [INFO][5829] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.064 [INFO][5829] ipam/ipam.go 489: Trying affinity for 192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.065 [INFO][5829] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.066 [INFO][5829] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.066 [INFO][5829] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.192/26 handle="k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.067 [INFO][5829] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.070 [INFO][5829] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.192/26 handle="k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.072 [INFO][5829] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.106.193/26] block=192.168.106.192/26 handle="k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.072 [INFO][5829] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.193/26] handle="k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.072 [INFO][5829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:46:14.093988 containerd[1805]: 2025-02-13 19:46:14.072 [INFO][5829] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.193/26] IPv6=[] ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" HandleID="k8s-pod-network.ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" Feb 13 19:46:14.094504 containerd[1805]: 2025-02-13 19:46:14.073 [INFO][5728] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xvtfk" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c5186765-6941-4a28-a06d-cf22cd68adee", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"", Pod:"coredns-7db6d8ff4d-xvtfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6225cdaf80b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.094504 containerd[1805]: 2025-02-13 19:46:14.074 [INFO][5728] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.193/32] ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xvtfk" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" Feb 13 19:46:14.094504 containerd[1805]: 2025-02-13 19:46:14.074 [INFO][5728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6225cdaf80b ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xvtfk" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" Feb 13 19:46:14.094504 containerd[1805]: 2025-02-13 19:46:14.089 [INFO][5728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xvtfk" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" Feb 13 19:46:14.094504 containerd[1805]: 2025-02-13 19:46:14.089 [INFO][5728] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xvtfk" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c5186765-6941-4a28-a06d-cf22cd68adee", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c", Pod:"coredns-7db6d8ff4d-xvtfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6225cdaf80b", MAC:"16:f7:93:f8:4a:2c", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.094504 containerd[1805]: 2025-02-13 19:46:14.093 [INFO][5728] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xvtfk" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--xvtfk-eth0" Feb 13 19:46:14.094504 containerd[1805]: 2025-02-13 19:46:14.030 [INFO][5751] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.036 [INFO][5751] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0 calico-apiserver-7469c76fc6- calico-apiserver 922c820f-72e7-49c5-977f-4e21e9e5b030 695 0 2025-02-13 19:46:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7469c76fc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.1-a-a8b3a25f31 calico-apiserver-7469c76fc6-p6gv2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf14f644f4b [] []}} ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-p6gv2" 
WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.036 [INFO][5751] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-p6gv2" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.053 [INFO][5838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" HandleID="k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.057 [INFO][5838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" HandleID="k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042c600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.1-a-a8b3a25f31", "pod":"calico-apiserver-7469c76fc6-p6gv2", "timestamp":"2025-02-13 19:46:14.053207394 +0000 UTC"}, Hostname:"ci-4186.1.1-a-a8b3a25f31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.057 [INFO][5838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.072 [INFO][5838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.072 [INFO][5838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-a8b3a25f31' Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.073 [INFO][5838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.075 [INFO][5838] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.077 [INFO][5838] ipam/ipam.go 489: Trying affinity for 192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.078 [INFO][5838] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.079 [INFO][5838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.079 [INFO][5838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.192/26 handle="k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.079 [INFO][5838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.081 [INFO][5838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.192/26 
handle="k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.084 [INFO][5838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.194/26] block=192.168.106.192/26 handle="k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.084 [INFO][5838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.194/26] handle="k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.084 [INFO][5838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:46:14.094697 containerd[1805]: 2025-02-13 19:46:14.084 [INFO][5838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.194/26] IPv6=[] ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" HandleID="k8s-pod-network.ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" Feb 13 19:46:14.095018 containerd[1805]: 2025-02-13 19:46:14.085 [INFO][5751] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-p6gv2" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0", GenerateName:"calico-apiserver-7469c76fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"922c820f-72e7-49c5-977f-4e21e9e5b030", ResourceVersion:"695", Generation:0, 
CreationTimestamp:time.Date(2025, time.February, 13, 19, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7469c76fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"", Pod:"calico-apiserver-7469c76fc6-p6gv2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf14f644f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.095018 containerd[1805]: 2025-02-13 19:46:14.085 [INFO][5751] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.194/32] ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-p6gv2" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" Feb 13 19:46:14.095018 containerd[1805]: 2025-02-13 19:46:14.085 [INFO][5751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf14f644f4b ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-p6gv2" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" Feb 13 19:46:14.095018 containerd[1805]: 2025-02-13 19:46:14.089 [INFO][5751] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-p6gv2" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" Feb 13 19:46:14.095018 containerd[1805]: 2025-02-13 19:46:14.089 [INFO][5751] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-p6gv2" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0", GenerateName:"calico-apiserver-7469c76fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"922c820f-72e7-49c5-977f-4e21e9e5b030", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7469c76fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f", Pod:"calico-apiserver-7469c76fc6-p6gv2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.106.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf14f644f4b", MAC:"ae:db:67:a0:1c:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.095018 containerd[1805]: 2025-02-13 19:46:14.093 [INFO][5751] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-p6gv2" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--p6gv2-eth0" Feb 13 19:46:14.101959 systemd-networkd[1720]: calicd1d1a2eb34: Link UP Feb 13 19:46:14.102073 systemd-networkd[1720]: calicd1d1a2eb34: Gained carrier Feb 13 19:46:14.104241 containerd[1805]: time="2025-02-13T19:46:14.104184406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:14.104241 containerd[1805]: time="2025-02-13T19:46:14.104214827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:14.104348 containerd[1805]: time="2025-02-13T19:46:14.104238385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.104528 containerd[1805]: time="2025-02-13T19:46:14.104505835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.104619 containerd[1805]: time="2025-02-13T19:46:14.104585555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:14.104647 containerd[1805]: time="2025-02-13T19:46:14.104622946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:14.104647 containerd[1805]: time="2025-02-13T19:46:14.104635431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.104708 containerd[1805]: time="2025-02-13T19:46:14.104690917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.025 [INFO][5691] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.034 [INFO][5691] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0 calico-apiserver-7469c76fc6- calico-apiserver 168182d0-b9c4-48bd-9c74-28acdd82becf 697 0 2025-02-13 19:46:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7469c76fc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.1-a-a8b3a25f31 calico-apiserver-7469c76fc6-shmk8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicd1d1a2eb34 [] []}} ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-shmk8" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.034 [INFO][5691] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-shmk8" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.052 [INFO][5827] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" HandleID="k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.058 [INFO][5827] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" HandleID="k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000659a90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.1-a-a8b3a25f31", "pod":"calico-apiserver-7469c76fc6-shmk8", "timestamp":"2025-02-13 19:46:14.052730292 +0000 UTC"}, Hostname:"ci-4186.1.1-a-a8b3a25f31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.058 [INFO][5827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.084 [INFO][5827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.084 [INFO][5827] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-a8b3a25f31' Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.085 [INFO][5827] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.087 [INFO][5827] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.090 [INFO][5827] ipam/ipam.go 489: Trying affinity for 192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.091 [INFO][5827] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.093 [INFO][5827] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.093 [INFO][5827] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.192/26 handle="k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.094 [INFO][5827] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.096 [INFO][5827] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.192/26 handle="k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.099 [INFO][5827] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.106.195/26] block=192.168.106.192/26 handle="k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.099 [INFO][5827] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.195/26] handle="k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.099 [INFO][5827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:46:14.107184 containerd[1805]: 2025-02-13 19:46:14.099 [INFO][5827] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.195/26] IPv6=[] ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" HandleID="k8s-pod-network.5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" Feb 13 19:46:14.107597 containerd[1805]: 2025-02-13 19:46:14.100 [INFO][5691] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-shmk8" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0", GenerateName:"calico-apiserver-7469c76fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"168182d0-b9c4-48bd-9c74-28acdd82becf", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7469c76fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"", Pod:"calico-apiserver-7469c76fc6-shmk8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicd1d1a2eb34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.107597 containerd[1805]: 2025-02-13 19:46:14.100 [INFO][5691] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.195/32] ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-shmk8" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" Feb 13 19:46:14.107597 containerd[1805]: 2025-02-13 19:46:14.100 [INFO][5691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd1d1a2eb34 ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-shmk8" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" Feb 13 19:46:14.107597 containerd[1805]: 2025-02-13 19:46:14.102 [INFO][5691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-shmk8" 
WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" Feb 13 19:46:14.107597 containerd[1805]: 2025-02-13 19:46:14.102 [INFO][5691] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-shmk8" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0", GenerateName:"calico-apiserver-7469c76fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"168182d0-b9c4-48bd-9c74-28acdd82becf", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7469c76fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb", Pod:"calico-apiserver-7469c76fc6-shmk8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicd1d1a2eb34", MAC:"1a:0f:24:3f:2b:22", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.107597 containerd[1805]: 2025-02-13 19:46:14.106 [INFO][5691] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb" Namespace="calico-apiserver" Pod="calico-apiserver-7469c76fc6-shmk8" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--apiserver--7469c76fc6--shmk8-eth0" Feb 13 19:46:14.117497 containerd[1805]: time="2025-02-13T19:46:14.117226036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:14.117497 containerd[1805]: time="2025-02-13T19:46:14.117454628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:14.117497 containerd[1805]: time="2025-02-13T19:46:14.117462767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.117606 containerd[1805]: time="2025-02-13T19:46:14.117518562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.117889 systemd-networkd[1720]: cali5f1b70e5fc6: Link UP Feb 13 19:46:14.118009 systemd-networkd[1720]: cali5f1b70e5fc6: Gained carrier Feb 13 19:46:14.119649 systemd[1]: Started cri-containerd-ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f.scope - libcontainer container ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f. Feb 13 19:46:14.120874 systemd[1]: Started cri-containerd-ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c.scope - libcontainer container ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c. 
Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.026 [INFO][5710] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.034 [INFO][5710] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0 coredns-7db6d8ff4d- kube-system 4bb94446-f95c-44d1-9b40-e90a44987989 696 0 2025-02-13 19:45:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.1-a-a8b3a25f31 coredns-7db6d8ff4d-t77lg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f1b70e5fc6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t77lg" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.034 [INFO][5710] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t77lg" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.052 [INFO][5828] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" HandleID="k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.058 [INFO][5828] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" HandleID="k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002996d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.1-a-a8b3a25f31", "pod":"coredns-7db6d8ff4d-t77lg", "timestamp":"2025-02-13 19:46:14.052408421 +0000 UTC"}, Hostname:"ci-4186.1.1-a-a8b3a25f31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.058 [INFO][5828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.099 [INFO][5828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.099 [INFO][5828] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-a8b3a25f31' Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.101 [INFO][5828] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.104 [INFO][5828] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.107 [INFO][5828] ipam/ipam.go 489: Trying affinity for 192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.108 [INFO][5828] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.109 [INFO][5828] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.109 [INFO][5828] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.192/26 handle="k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.110 [INFO][5828] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.113 [INFO][5828] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.192/26 handle="k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.116 [INFO][5828] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.106.196/26] block=192.168.106.192/26 handle="k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.116 [INFO][5828] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.196/26] handle="k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.116 [INFO][5828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:46:14.123783 containerd[1805]: 2025-02-13 19:46:14.116 [INFO][5828] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.196/26] IPv6=[] ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" HandleID="k8s-pod-network.f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" Feb 13 19:46:14.124390 containerd[1805]: 2025-02-13 19:46:14.116 [INFO][5710] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t77lg" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4bb94446-f95c-44d1-9b40-e90a44987989", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"", Pod:"coredns-7db6d8ff4d-t77lg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f1b70e5fc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.124390 containerd[1805]: 2025-02-13 19:46:14.117 [INFO][5710] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.196/32] ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t77lg" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" Feb 13 19:46:14.124390 containerd[1805]: 2025-02-13 19:46:14.117 [INFO][5710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f1b70e5fc6 ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t77lg" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" Feb 13 19:46:14.124390 containerd[1805]: 2025-02-13 19:46:14.118 [INFO][5710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t77lg" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" Feb 13 19:46:14.124390 containerd[1805]: 2025-02-13 19:46:14.118 [INFO][5710] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t77lg" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4bb94446-f95c-44d1-9b40-e90a44987989", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc", Pod:"coredns-7db6d8ff4d-t77lg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f1b70e5fc6", MAC:"ca:23:09:43:4f:7b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.124390 containerd[1805]: 2025-02-13 19:46:14.122 [INFO][5710] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t77lg" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-coredns--7db6d8ff4d--t77lg-eth0" Feb 13 19:46:14.124435 systemd[1]: Started cri-containerd-5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb.scope - libcontainer container 5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb. Feb 13 19:46:14.134083 containerd[1805]: time="2025-02-13T19:46:14.133809608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:14.134083 containerd[1805]: time="2025-02-13T19:46:14.134028077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:14.134083 containerd[1805]: time="2025-02-13T19:46:14.134036317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.134083 containerd[1805]: time="2025-02-13T19:46:14.134075023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.140177 systemd[1]: Started cri-containerd-f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc.scope - libcontainer container f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc. Feb 13 19:46:14.144544 containerd[1805]: time="2025-02-13T19:46:14.144513267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-p6gv2,Uid:922c820f-72e7-49c5-977f-4e21e9e5b030,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f\"" Feb 13 19:46:14.145272 containerd[1805]: time="2025-02-13T19:46:14.145258242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:46:14.146613 containerd[1805]: time="2025-02-13T19:46:14.146593031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvtfk,Uid:c5186765-6941-4a28-a06d-cf22cd68adee,Namespace:kube-system,Attempt:5,} returns sandbox id \"ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c\"" Feb 13 19:46:14.148069 containerd[1805]: time="2025-02-13T19:46:14.147975932Z" level=info msg="CreateContainer within sandbox \"ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:46:14.148698 containerd[1805]: time="2025-02-13T19:46:14.148681220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7469c76fc6-shmk8,Uid:168182d0-b9c4-48bd-9c74-28acdd82becf,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb\"" Feb 13 19:46:14.150486 systemd-networkd[1720]: caliaf48a86f546: Link UP Feb 13 19:46:14.150606 systemd-networkd[1720]: caliaf48a86f546: Gained carrier Feb 13 19:46:14.153594 containerd[1805]: time="2025-02-13T19:46:14.153546417Z" level=info msg="CreateContainer within sandbox 
\"ff992bf1f00546f54a1db30e03e3a6a55c2b5b4d15db733919df97339b26fc7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edf759ea473056f5b75718e2a8248ff25d229263afdb3bcd88e28ca47fc36430\"" Feb 13 19:46:14.153861 containerd[1805]: time="2025-02-13T19:46:14.153835316Z" level=info msg="StartContainer for \"edf759ea473056f5b75718e2a8248ff25d229263afdb3bcd88e28ca47fc36430\"" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.031 [INFO][5744] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.036 [INFO][5744] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0 csi-node-driver- calico-system f3327118-2549-4d08-a802-8c7cfa7fb673 627 0 2025-02-13 19:46:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.1.1-a-a8b3a25f31 csi-node-driver-vkkt7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaf48a86f546 [] []}} ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Namespace="calico-system" Pod="csi-node-driver-vkkt7" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.036 [INFO][5744] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Namespace="calico-system" Pod="csi-node-driver-vkkt7" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.054 [INFO][5847] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" HandleID="k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.059 [INFO][5847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" HandleID="k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000299b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.1-a-a8b3a25f31", "pod":"csi-node-driver-vkkt7", "timestamp":"2025-02-13 19:46:14.05404907 +0000 UTC"}, Hostname:"ci-4186.1.1-a-a8b3a25f31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.059 [INFO][5847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.116 [INFO][5847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.116 [INFO][5847] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-a8b3a25f31' Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.117 [INFO][5847] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.119 [INFO][5847] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.122 [INFO][5847] ipam/ipam.go 489: Trying affinity for 192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.124 [INFO][5847] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.125 [INFO][5847] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.125 [INFO][5847] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.192/26 handle="k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.126 [INFO][5847] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336 Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.129 [INFO][5847] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.192/26 handle="k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.148 [INFO][5847] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.106.197/26] block=192.168.106.192/26 handle="k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.148 [INFO][5847] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.197/26] handle="k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.148 [INFO][5847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:46:14.156628 containerd[1805]: 2025-02-13 19:46:14.148 [INFO][5847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.197/26] IPv6=[] ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" HandleID="k8s-pod-network.a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" Feb 13 19:46:14.157043 containerd[1805]: 2025-02-13 19:46:14.149 [INFO][5744] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Namespace="calico-system" Pod="csi-node-driver-vkkt7" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3327118-2549-4d08-a802-8c7cfa7fb673", ResourceVersion:"627", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"", Pod:"csi-node-driver-vkkt7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf48a86f546", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.157043 containerd[1805]: 2025-02-13 19:46:14.149 [INFO][5744] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.197/32] ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Namespace="calico-system" Pod="csi-node-driver-vkkt7" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" Feb 13 19:46:14.157043 containerd[1805]: 2025-02-13 19:46:14.149 [INFO][5744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf48a86f546 ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Namespace="calico-system" Pod="csi-node-driver-vkkt7" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" Feb 13 19:46:14.157043 containerd[1805]: 2025-02-13 19:46:14.150 [INFO][5744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Namespace="calico-system" Pod="csi-node-driver-vkkt7" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" Feb 13 19:46:14.157043 containerd[1805]: 2025-02-13 19:46:14.150 
[INFO][5744] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Namespace="calico-system" Pod="csi-node-driver-vkkt7" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3327118-2549-4d08-a802-8c7cfa7fb673", ResourceVersion:"627", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336", Pod:"csi-node-driver-vkkt7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf48a86f546", MAC:"52:13:6e:85:a7:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.157043 containerd[1805]: 2025-02-13 19:46:14.155 [INFO][5744] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336" Namespace="calico-system" Pod="csi-node-driver-vkkt7" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-csi--node--driver--vkkt7-eth0" Feb 13 19:46:14.166941 systemd-networkd[1720]: calidecfba59a64: Link UP Feb 13 19:46:14.167039 containerd[1805]: time="2025-02-13T19:46:14.166990027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:14.167072 containerd[1805]: time="2025-02-13T19:46:14.167038923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:14.167072 containerd[1805]: time="2025-02-13T19:46:14.167051450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.167086 systemd-networkd[1720]: calidecfba59a64: Gained carrier Feb 13 19:46:14.167134 containerd[1805]: time="2025-02-13T19:46:14.167112941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.034 [INFO][5766] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.040 [INFO][5766] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0 calico-kube-controllers-54fdf9b76f- calico-system a3049fc5-6472-40ea-b289-504e898e9372 694 0 2025-02-13 19:46:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54fdf9b76f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.1.1-a-a8b3a25f31 calico-kube-controllers-54fdf9b76f-wsc67 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidecfba59a64 [] []}} ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Namespace="calico-system" Pod="calico-kube-controllers-54fdf9b76f-wsc67" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.040 [INFO][5766] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Namespace="calico-system" Pod="calico-kube-controllers-54fdf9b76f-wsc67" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.059 [INFO][5862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" HandleID="k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" 
Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.063 [INFO][5862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" HandleID="k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.1-a-a8b3a25f31", "pod":"calico-kube-controllers-54fdf9b76f-wsc67", "timestamp":"2025-02-13 19:46:14.059088999 +0000 UTC"}, Hostname:"ci-4186.1.1-a-a8b3a25f31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.063 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.148 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.148 [INFO][5862] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-a8b3a25f31' Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.149 [INFO][5862] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.152 [INFO][5862] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.154 [INFO][5862] ipam/ipam.go 489: Trying affinity for 192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.155 [INFO][5862] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.157 [INFO][5862] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.192/26 host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.157 [INFO][5862] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.192/26 handle="k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.158 [INFO][5862] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.161 [INFO][5862] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.192/26 handle="k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.165 [INFO][5862] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.106.198/26] block=192.168.106.192/26 handle="k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.165 [INFO][5862] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.198/26] handle="k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" host="ci-4186.1.1-a-a8b3a25f31" Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.165 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:46:14.172276 containerd[1805]: 2025-02-13 19:46:14.165 [INFO][5862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.198/26] IPv6=[] ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" HandleID="k8s-pod-network.c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Workload="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" Feb 13 19:46:14.172687 containerd[1805]: 2025-02-13 19:46:14.166 [INFO][5766] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Namespace="calico-system" Pod="calico-kube-controllers-54fdf9b76f-wsc67" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0", GenerateName:"calico-kube-controllers-54fdf9b76f-", Namespace:"calico-system", SelfLink:"", UID:"a3049fc5-6472-40ea-b289-504e898e9372", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54fdf9b76f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"", Pod:"calico-kube-controllers-54fdf9b76f-wsc67", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidecfba59a64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.172687 containerd[1805]: 2025-02-13 19:46:14.166 [INFO][5766] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.198/32] ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Namespace="calico-system" Pod="calico-kube-controllers-54fdf9b76f-wsc67" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" Feb 13 19:46:14.172687 containerd[1805]: 2025-02-13 19:46:14.166 [INFO][5766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidecfba59a64 ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Namespace="calico-system" Pod="calico-kube-controllers-54fdf9b76f-wsc67" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" Feb 13 19:46:14.172687 containerd[1805]: 2025-02-13 19:46:14.167 [INFO][5766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Namespace="calico-system" Pod="calico-kube-controllers-54fdf9b76f-wsc67" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" Feb 13 19:46:14.172687 containerd[1805]: 2025-02-13 19:46:14.167 [INFO][5766] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Namespace="calico-system" Pod="calico-kube-controllers-54fdf9b76f-wsc67" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0", GenerateName:"calico-kube-controllers-54fdf9b76f-", Namespace:"calico-system", SelfLink:"", UID:"a3049fc5-6472-40ea-b289-504e898e9372", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54fdf9b76f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-a8b3a25f31", ContainerID:"c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b", Pod:"calico-kube-controllers-54fdf9b76f-wsc67", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.198/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidecfba59a64", MAC:"82:9f:18:0d:a5:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:46:14.172687 containerd[1805]: 2025-02-13 19:46:14.171 [INFO][5766] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b" Namespace="calico-system" Pod="calico-kube-controllers-54fdf9b76f-wsc67" WorkloadEndpoint="ci--4186.1.1--a--a8b3a25f31-k8s-calico--kube--controllers--54fdf9b76f--wsc67-eth0" Feb 13 19:46:14.174564 systemd[1]: Started cri-containerd-edf759ea473056f5b75718e2a8248ff25d229263afdb3bcd88e28ca47fc36430.scope - libcontainer container edf759ea473056f5b75718e2a8248ff25d229263afdb3bcd88e28ca47fc36430. Feb 13 19:46:14.174875 containerd[1805]: time="2025-02-13T19:46:14.174851928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t77lg,Uid:4bb94446-f95c-44d1-9b40-e90a44987989,Namespace:kube-system,Attempt:5,} returns sandbox id \"f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc\"" Feb 13 19:46:14.176217 containerd[1805]: time="2025-02-13T19:46:14.176195732Z" level=info msg="CreateContainer within sandbox \"f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:46:14.176693 systemd[1]: Started cri-containerd-a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336.scope - libcontainer container a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336. 
Feb 13 19:46:14.181845 containerd[1805]: time="2025-02-13T19:46:14.181822234Z" level=info msg="CreateContainer within sandbox \"f4e73c19ffd363be76d92c8d0ff47ea8826675d1bd2e5d9d81f933afcecd95bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86604c2ad03bf7a0597ac49c8a592a9049620a83d646a2258b05b7c10677db7d\"" Feb 13 19:46:14.182177 containerd[1805]: time="2025-02-13T19:46:14.182160025Z" level=info msg="StartContainer for \"86604c2ad03bf7a0597ac49c8a592a9049620a83d646a2258b05b7c10677db7d\"" Feb 13 19:46:14.182533 containerd[1805]: time="2025-02-13T19:46:14.182469978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:14.182752 containerd[1805]: time="2025-02-13T19:46:14.182733639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:14.182752 containerd[1805]: time="2025-02-13T19:46:14.182745205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.182830 containerd[1805]: time="2025-02-13T19:46:14.182794646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:14.187248 containerd[1805]: time="2025-02-13T19:46:14.187199312Z" level=info msg="StartContainer for \"edf759ea473056f5b75718e2a8248ff25d229263afdb3bcd88e28ca47fc36430\" returns successfully" Feb 13 19:46:14.188427 containerd[1805]: time="2025-02-13T19:46:14.188397249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkkt7,Uid:f3327118-2549-4d08-a802-8c7cfa7fb673,Namespace:calico-system,Attempt:5,} returns sandbox id \"a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336\"" Feb 13 19:46:14.190464 systemd[1]: Started cri-containerd-c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b.scope - libcontainer container c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b. Feb 13 19:46:14.193350 systemd[1]: Started cri-containerd-86604c2ad03bf7a0597ac49c8a592a9049620a83d646a2258b05b7c10677db7d.scope - libcontainer container 86604c2ad03bf7a0597ac49c8a592a9049620a83d646a2258b05b7c10677db7d. 
Feb 13 19:46:14.207124 containerd[1805]: time="2025-02-13T19:46:14.207078032Z" level=info msg="StartContainer for \"86604c2ad03bf7a0597ac49c8a592a9049620a83d646a2258b05b7c10677db7d\" returns successfully" Feb 13 19:46:14.214941 containerd[1805]: time="2025-02-13T19:46:14.214879705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdf9b76f-wsc67,Uid:a3049fc5-6472-40ea-b289-504e898e9372,Namespace:calico-system,Attempt:5,} returns sandbox id \"c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b\"" Feb 13 19:46:15.017797 kubelet[3272]: I0213 19:46:15.017763 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t77lg" podStartSLOduration=18.017747026 podStartE2EDuration="18.017747026s" podCreationTimestamp="2025-02-13 19:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:46:15.017527518 +0000 UTC m=+34.188734185" watchObservedRunningTime="2025-02-13 19:46:15.017747026 +0000 UTC m=+34.188953685" Feb 13 19:46:15.023259 kubelet[3272]: I0213 19:46:15.023227 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xvtfk" podStartSLOduration=18.023217939 podStartE2EDuration="18.023217939s" podCreationTimestamp="2025-02-13 19:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:46:15.023055882 +0000 UTC m=+34.194262545" watchObservedRunningTime="2025-02-13 19:46:15.023217939 +0000 UTC m=+34.194424599" Feb 13 19:46:15.307725 systemd-networkd[1720]: calidecfba59a64: Gained IPv6LL Feb 13 19:46:15.435686 systemd-networkd[1720]: caliaf48a86f546: Gained IPv6LL Feb 13 19:46:15.883524 systemd-networkd[1720]: cali5f1b70e5fc6: Gained IPv6LL Feb 13 19:46:15.883696 systemd-networkd[1720]: calibf14f644f4b: Gained IPv6LL Feb 13 
19:46:15.947486 systemd-networkd[1720]: cali6225cdaf80b: Gained IPv6LL Feb 13 19:46:16.024645 containerd[1805]: time="2025-02-13T19:46:16.024593809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:16.024854 containerd[1805]: time="2025-02-13T19:46:16.024785310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:46:16.025211 containerd[1805]: time="2025-02-13T19:46:16.025169714Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:16.026469 containerd[1805]: time="2025-02-13T19:46:16.026441702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:16.026826 containerd[1805]: time="2025-02-13T19:46:16.026784848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.881509181s" Feb 13 19:46:16.026826 containerd[1805]: time="2025-02-13T19:46:16.026799770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:46:16.027297 containerd[1805]: time="2025-02-13T19:46:16.027287828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:46:16.027756 containerd[1805]: time="2025-02-13T19:46:16.027741457Z" level=info msg="CreateContainer 
within sandbox \"ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:46:16.034093 containerd[1805]: time="2025-02-13T19:46:16.034050734Z" level=info msg="CreateContainer within sandbox \"ba323136db19b32b73270ee3c7127df1234af8e52b10dd5c88e43808ed8b0b7f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"24383e50a5983a235b760527591a10dc53bbf15d08b2f604a09ef82597dea5f1\"" Feb 13 19:46:16.034246 containerd[1805]: time="2025-02-13T19:46:16.034234005Z" level=info msg="StartContainer for \"24383e50a5983a235b760527591a10dc53bbf15d08b2f604a09ef82597dea5f1\"" Feb 13 19:46:16.058702 systemd[1]: Started cri-containerd-24383e50a5983a235b760527591a10dc53bbf15d08b2f604a09ef82597dea5f1.scope - libcontainer container 24383e50a5983a235b760527591a10dc53bbf15d08b2f604a09ef82597dea5f1. Feb 13 19:46:16.075550 systemd-networkd[1720]: calicd1d1a2eb34: Gained IPv6LL Feb 13 19:46:16.081547 containerd[1805]: time="2025-02-13T19:46:16.081486592Z" level=info msg="StartContainer for \"24383e50a5983a235b760527591a10dc53bbf15d08b2f604a09ef82597dea5f1\" returns successfully" Feb 13 19:46:16.466630 containerd[1805]: time="2025-02-13T19:46:16.466577935Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:16.466797 containerd[1805]: time="2025-02-13T19:46:16.466747599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:46:16.468040 containerd[1805]: time="2025-02-13T19:46:16.467997214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 440.695592ms" Feb 13 19:46:16.468040 containerd[1805]: time="2025-02-13T19:46:16.468012357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:46:16.468466 containerd[1805]: time="2025-02-13T19:46:16.468455481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:46:16.469210 containerd[1805]: time="2025-02-13T19:46:16.469173620Z" level=info msg="CreateContainer within sandbox \"5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:46:16.475138 containerd[1805]: time="2025-02-13T19:46:16.475091501Z" level=info msg="CreateContainer within sandbox \"5eea4cd00402b8a09f07b977144e61901001cc79b4992cb534a40bc4568195fb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e07e47e1bd26316834fde712c480b39cb8b9f0fb96a5421b854c375ac89b4acf\"" Feb 13 19:46:16.475356 containerd[1805]: time="2025-02-13T19:46:16.475344180Z" level=info msg="StartContainer for \"e07e47e1bd26316834fde712c480b39cb8b9f0fb96a5421b854c375ac89b4acf\"" Feb 13 19:46:16.496599 systemd[1]: Started cri-containerd-e07e47e1bd26316834fde712c480b39cb8b9f0fb96a5421b854c375ac89b4acf.scope - libcontainer container e07e47e1bd26316834fde712c480b39cb8b9f0fb96a5421b854c375ac89b4acf. 
Feb 13 19:46:16.526838 containerd[1805]: time="2025-02-13T19:46:16.526807270Z" level=info msg="StartContainer for \"e07e47e1bd26316834fde712c480b39cb8b9f0fb96a5421b854c375ac89b4acf\" returns successfully" Feb 13 19:46:16.733689 kubelet[3272]: I0213 19:46:16.733581 3272 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:46:16.875501 kernel: bpftool[6691]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:46:17.023941 kubelet[3272]: I0213 19:46:17.023857 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7469c76fc6-p6gv2" podStartSLOduration=14.1417556 podStartE2EDuration="16.023841282s" podCreationTimestamp="2025-02-13 19:46:01 +0000 UTC" firstStartedPulling="2025-02-13 19:46:14.14510157 +0000 UTC m=+33.316308235" lastFinishedPulling="2025-02-13 19:46:16.027187257 +0000 UTC m=+35.198393917" observedRunningTime="2025-02-13 19:46:17.023396345 +0000 UTC m=+36.194603007" watchObservedRunningTime="2025-02-13 19:46:17.023841282 +0000 UTC m=+36.195047940" Feb 13 19:46:17.024710 systemd-networkd[1720]: vxlan.calico: Link UP Feb 13 19:46:17.024713 systemd-networkd[1720]: vxlan.calico: Gained carrier Feb 13 19:46:17.029239 kubelet[3272]: I0213 19:46:17.029183 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7469c76fc6-shmk8" podStartSLOduration=13.710054841 podStartE2EDuration="16.029165026s" podCreationTimestamp="2025-02-13 19:46:01 +0000 UTC" firstStartedPulling="2025-02-13 19:46:14.149259829 +0000 UTC m=+33.320466490" lastFinishedPulling="2025-02-13 19:46:16.468370015 +0000 UTC m=+35.639576675" observedRunningTime="2025-02-13 19:46:17.028886287 +0000 UTC m=+36.200092951" watchObservedRunningTime="2025-02-13 19:46:17.029165026 +0000 UTC m=+36.200371691" Feb 13 19:46:17.931400 containerd[1805]: time="2025-02-13T19:46:17.931371862Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:17.931652 containerd[1805]: time="2025-02-13T19:46:17.931618974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:46:17.931977 containerd[1805]: time="2025-02-13T19:46:17.931966952Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:17.932934 containerd[1805]: time="2025-02-13T19:46:17.932895460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:17.933687 containerd[1805]: time="2025-02-13T19:46:17.933645829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.465177043s" Feb 13 19:46:17.933687 containerd[1805]: time="2025-02-13T19:46:17.933661003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:46:17.934221 containerd[1805]: time="2025-02-13T19:46:17.934153103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:46:17.934842 containerd[1805]: time="2025-02-13T19:46:17.934827519Z" level=info msg="CreateContainer within sandbox \"a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:46:17.940606 containerd[1805]: 
time="2025-02-13T19:46:17.940561791Z" level=info msg="CreateContainer within sandbox \"a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9efe12a48d78cb6fb31c7d326b41f2595f73fac8a491780df7ec7ca205f7e385\"" Feb 13 19:46:17.940833 containerd[1805]: time="2025-02-13T19:46:17.940820845Z" level=info msg="StartContainer for \"9efe12a48d78cb6fb31c7d326b41f2595f73fac8a491780df7ec7ca205f7e385\"" Feb 13 19:46:17.968725 systemd[1]: Started cri-containerd-9efe12a48d78cb6fb31c7d326b41f2595f73fac8a491780df7ec7ca205f7e385.scope - libcontainer container 9efe12a48d78cb6fb31c7d326b41f2595f73fac8a491780df7ec7ca205f7e385. Feb 13 19:46:17.985717 containerd[1805]: time="2025-02-13T19:46:17.985650470Z" level=info msg="StartContainer for \"9efe12a48d78cb6fb31c7d326b41f2595f73fac8a491780df7ec7ca205f7e385\" returns successfully" Feb 13 19:46:18.025843 kubelet[3272]: I0213 19:46:18.025777 3272 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:46:18.123693 systemd-networkd[1720]: vxlan.calico: Gained IPv6LL Feb 13 19:46:19.859339 containerd[1805]: time="2025-02-13T19:46:19.859282713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:19.859562 containerd[1805]: time="2025-02-13T19:46:19.859423925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:46:19.859800 containerd[1805]: time="2025-02-13T19:46:19.859760191Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:19.860789 containerd[1805]: time="2025-02-13T19:46:19.860748894Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:19.861206 containerd[1805]: time="2025-02-13T19:46:19.861155282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.926985599s" Feb 13 19:46:19.861206 containerd[1805]: time="2025-02-13T19:46:19.861169996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:46:19.861652 containerd[1805]: time="2025-02-13T19:46:19.861640387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:46:19.864696 containerd[1805]: time="2025-02-13T19:46:19.864678397Z" level=info msg="CreateContainer within sandbox \"c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:46:19.871833 containerd[1805]: time="2025-02-13T19:46:19.871789957Z" level=info msg="CreateContainer within sandbox \"c7e15f8f198e9870330ee8faa3d4cbacc63d6a75603fe56d17cf1b40df7d2b2b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0e2a70f1fdea92985b2add3646c876a400c7e8bae23855a60b1842cb6ac69bfd\"" Feb 13 19:46:19.872035 containerd[1805]: time="2025-02-13T19:46:19.871992440Z" level=info msg="StartContainer for \"0e2a70f1fdea92985b2add3646c876a400c7e8bae23855a60b1842cb6ac69bfd\"" Feb 13 19:46:19.902734 systemd[1]: Started 
cri-containerd-0e2a70f1fdea92985b2add3646c876a400c7e8bae23855a60b1842cb6ac69bfd.scope - libcontainer container 0e2a70f1fdea92985b2add3646c876a400c7e8bae23855a60b1842cb6ac69bfd. Feb 13 19:46:19.928101 containerd[1805]: time="2025-02-13T19:46:19.928045239Z" level=info msg="StartContainer for \"0e2a70f1fdea92985b2add3646c876a400c7e8bae23855a60b1842cb6ac69bfd\" returns successfully" Feb 13 19:46:20.062037 kubelet[3272]: I0213 19:46:20.061920 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54fdf9b76f-wsc67" podStartSLOduration=12.415776923 podStartE2EDuration="18.061882625s" podCreationTimestamp="2025-02-13 19:46:02 +0000 UTC" firstStartedPulling="2025-02-13 19:46:14.21548707 +0000 UTC m=+33.386693730" lastFinishedPulling="2025-02-13 19:46:19.861592772 +0000 UTC m=+39.032799432" observedRunningTime="2025-02-13 19:46:20.060776485 +0000 UTC m=+39.231983217" watchObservedRunningTime="2025-02-13 19:46:20.061882625 +0000 UTC m=+39.233089337" Feb 13 19:46:20.193846 kubelet[3272]: I0213 19:46:20.193608 3272 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:46:21.396138 containerd[1805]: time="2025-02-13T19:46:21.396084701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:21.396351 containerd[1805]: time="2025-02-13T19:46:21.396193670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:46:21.396660 containerd[1805]: time="2025-02-13T19:46:21.396618249Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:21.397653 containerd[1805]: time="2025-02-13T19:46:21.397641712Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:21.398274 containerd[1805]: time="2025-02-13T19:46:21.398253926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.536595266s" Feb 13 19:46:21.398307 containerd[1805]: time="2025-02-13T19:46:21.398278282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:46:21.399854 containerd[1805]: time="2025-02-13T19:46:21.399834153Z" level=info msg="CreateContainer within sandbox \"a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:46:21.404790 containerd[1805]: time="2025-02-13T19:46:21.404765313Z" level=info msg="CreateContainer within sandbox \"a29c0c23a1fc854e797dfa06e91fe68ebf32aaa50bcdfc255aee7c52ec041336\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bc71a81389509561d14ac6a1d6486d2a45f9a055069f4a6b770566e4659331a2\"" Feb 13 19:46:21.405037 containerd[1805]: time="2025-02-13T19:46:21.405024849Z" level=info msg="StartContainer for \"bc71a81389509561d14ac6a1d6486d2a45f9a055069f4a6b770566e4659331a2\"" Feb 13 19:46:21.428597 systemd[1]: Started cri-containerd-bc71a81389509561d14ac6a1d6486d2a45f9a055069f4a6b770566e4659331a2.scope - libcontainer container bc71a81389509561d14ac6a1d6486d2a45f9a055069f4a6b770566e4659331a2. 
Feb 13 19:46:21.442952 containerd[1805]: time="2025-02-13T19:46:21.442916891Z" level=info msg="StartContainer for \"bc71a81389509561d14ac6a1d6486d2a45f9a055069f4a6b770566e4659331a2\" returns successfully" Feb 13 19:46:21.914525 kubelet[3272]: I0213 19:46:21.914390 3272 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:46:21.915489 kubelet[3272]: I0213 19:46:21.914542 3272 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:46:22.075194 kubelet[3272]: I0213 19:46:22.075075 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vkkt7" podStartSLOduration=12.865041513 podStartE2EDuration="20.07503483s" podCreationTimestamp="2025-02-13 19:46:02 +0000 UTC" firstStartedPulling="2025-02-13 19:46:14.189085357 +0000 UTC m=+33.360292020" lastFinishedPulling="2025-02-13 19:46:21.399078672 +0000 UTC m=+40.570285337" observedRunningTime="2025-02-13 19:46:22.074460636 +0000 UTC m=+41.245667384" watchObservedRunningTime="2025-02-13 19:46:22.07503483 +0000 UTC m=+41.246241579" Feb 13 19:46:40.871433 containerd[1805]: time="2025-02-13T19:46:40.871351063Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" Feb 13 19:46:40.871817 containerd[1805]: time="2025-02-13T19:46:40.871443580Z" level=info msg="TearDown network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" successfully" Feb 13 19:46:40.871817 containerd[1805]: time="2025-02-13T19:46:40.871464768Z" level=info msg="StopPodSandbox for \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" returns successfully" Feb 13 19:46:40.871817 containerd[1805]: time="2025-02-13T19:46:40.871755641Z" level=info msg="RemovePodSandbox for 
\"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" Feb 13 19:46:40.871817 containerd[1805]: time="2025-02-13T19:46:40.871768105Z" level=info msg="Forcibly stopping sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\"" Feb 13 19:46:40.871998 containerd[1805]: time="2025-02-13T19:46:40.871964707Z" level=info msg="TearDown network for sandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" successfully" Feb 13 19:46:40.874010 containerd[1805]: time="2025-02-13T19:46:40.873997673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:46:40.874044 containerd[1805]: time="2025-02-13T19:46:40.874020260Z" level=info msg="RemovePodSandbox \"9a29ac16eb479f3a98d75645f8fdb66ad9a507e136a4a0536faab04d17a70738\" returns successfully" Feb 13 19:46:40.874268 containerd[1805]: time="2025-02-13T19:46:40.874259246Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\"" Feb 13 19:46:40.874321 containerd[1805]: time="2025-02-13T19:46:40.874297638Z" level=info msg="TearDown network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" successfully" Feb 13 19:46:40.874343 containerd[1805]: time="2025-02-13T19:46:40.874321294Z" level=info msg="StopPodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" returns successfully" Feb 13 19:46:40.874437 containerd[1805]: time="2025-02-13T19:46:40.874427936Z" level=info msg="RemovePodSandbox for \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\"" Feb 13 19:46:40.874484 containerd[1805]: time="2025-02-13T19:46:40.874440600Z" level=info msg="Forcibly stopping sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\"" Feb 13 
19:46:40.874545 containerd[1805]: time="2025-02-13T19:46:40.874529170Z" level=info msg="TearDown network for sandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" successfully" Feb 13 19:46:40.875710 containerd[1805]: time="2025-02-13T19:46:40.875699146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:46:40.875737 containerd[1805]: time="2025-02-13T19:46:40.875719456Z" level=info msg="RemovePodSandbox \"17c86641c0094d0d8e43422341e1cc10769b1be113bacfab3dfb5b9b7eacc05c\" returns successfully" Feb 13 19:46:40.876041 containerd[1805]: time="2025-02-13T19:46:40.875972702Z" level=info msg="StopPodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\"" Feb 13 19:46:40.876122 containerd[1805]: time="2025-02-13T19:46:40.876064439Z" level=info msg="TearDown network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" successfully" Feb 13 19:46:40.876122 containerd[1805]: time="2025-02-13T19:46:40.876087892Z" level=info msg="StopPodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" returns successfully" Feb 13 19:46:40.876329 containerd[1805]: time="2025-02-13T19:46:40.876318551Z" level=info msg="RemovePodSandbox for \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\"" Feb 13 19:46:40.876355 containerd[1805]: time="2025-02-13T19:46:40.876332248Z" level=info msg="Forcibly stopping sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\"" Feb 13 19:46:40.876391 containerd[1805]: time="2025-02-13T19:46:40.876370028Z" level=info msg="TearDown network for sandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" successfully" Feb 13 19:46:40.877666 containerd[1805]: time="2025-02-13T19:46:40.877624774Z" 
level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:46:40.877666 containerd[1805]: time="2025-02-13T19:46:40.877644679Z" level=info msg="RemovePodSandbox \"e51c93fcab25bcc6c22126e7803498a376f32272e77511b247d4ab18c089d1d8\" returns successfully" Feb 13 19:46:40.877771 containerd[1805]: time="2025-02-13T19:46:40.877743251Z" level=info msg="StopPodSandbox for \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\"" Feb 13 19:46:40.877835 containerd[1805]: time="2025-02-13T19:46:40.877820224Z" level=info msg="TearDown network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\" successfully" Feb 13 19:46:40.877835 containerd[1805]: time="2025-02-13T19:46:40.877826662Z" level=info msg="StopPodSandbox for \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\" returns successfully" Feb 13 19:46:40.878009 containerd[1805]: time="2025-02-13T19:46:40.877955037Z" level=info msg="RemovePodSandbox for \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\"" Feb 13 19:46:40.878009 containerd[1805]: time="2025-02-13T19:46:40.877999085Z" level=info msg="Forcibly stopping sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\"" Feb 13 19:46:40.878048 containerd[1805]: time="2025-02-13T19:46:40.878031801Z" level=info msg="TearDown network for sandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\" successfully" Feb 13 19:46:40.879241 containerd[1805]: time="2025-02-13T19:46:40.879201871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.879241 containerd[1805]: time="2025-02-13T19:46:40.879220048Z" level=info msg="RemovePodSandbox \"8b701cfc8211741af42c4414d372b5f700d7a816142fc765ae65aac6d68c837d\" returns successfully" Feb 13 19:46:40.879361 containerd[1805]: time="2025-02-13T19:46:40.879351641Z" level=info msg="StopPodSandbox for \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\"" Feb 13 19:46:40.879399 containerd[1805]: time="2025-02-13T19:46:40.879392068Z" level=info msg="TearDown network for sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\" successfully" Feb 13 19:46:40.879424 containerd[1805]: time="2025-02-13T19:46:40.879398875Z" level=info msg="StopPodSandbox for \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\" returns successfully" Feb 13 19:46:40.879645 containerd[1805]: time="2025-02-13T19:46:40.879592172Z" level=info msg="RemovePodSandbox for \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\"" Feb 13 19:46:40.879645 containerd[1805]: time="2025-02-13T19:46:40.879620658Z" level=info msg="Forcibly stopping sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\"" Feb 13 19:46:40.879694 containerd[1805]: time="2025-02-13T19:46:40.879673979Z" level=info msg="TearDown network for sandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\" successfully" Feb 13 19:46:40.880832 containerd[1805]: time="2025-02-13T19:46:40.880778808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.880832 containerd[1805]: time="2025-02-13T19:46:40.880796076Z" level=info msg="RemovePodSandbox \"ed35a2434e04ccf39eb2f180a16dd5452b789fb9c1d32581580ac92a02c45d98\" returns successfully" Feb 13 19:46:40.881027 containerd[1805]: time="2025-02-13T19:46:40.880968739Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" Feb 13 19:46:40.881105 containerd[1805]: time="2025-02-13T19:46:40.881073958Z" level=info msg="TearDown network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" successfully" Feb 13 19:46:40.881105 containerd[1805]: time="2025-02-13T19:46:40.881080200Z" level=info msg="StopPodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" returns successfully" Feb 13 19:46:40.881226 containerd[1805]: time="2025-02-13T19:46:40.881193016Z" level=info msg="RemovePodSandbox for \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" Feb 13 19:46:40.881226 containerd[1805]: time="2025-02-13T19:46:40.881204026Z" level=info msg="Forcibly stopping sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\"" Feb 13 19:46:40.881264 containerd[1805]: time="2025-02-13T19:46:40.881240292Z" level=info msg="TearDown network for sandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" successfully" Feb 13 19:46:40.882361 containerd[1805]: time="2025-02-13T19:46:40.882308365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.882361 containerd[1805]: time="2025-02-13T19:46:40.882326991Z" level=info msg="RemovePodSandbox \"80232807547030d8f07b5c4f1cde555f2f89298b4b252d2c8d124b51c03bbfbb\" returns successfully" Feb 13 19:46:40.882625 containerd[1805]: time="2025-02-13T19:46:40.882596728Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\"" Feb 13 19:46:40.882686 containerd[1805]: time="2025-02-13T19:46:40.882656380Z" level=info msg="TearDown network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" successfully" Feb 13 19:46:40.882686 containerd[1805]: time="2025-02-13T19:46:40.882662830Z" level=info msg="StopPodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" returns successfully" Feb 13 19:46:40.882924 containerd[1805]: time="2025-02-13T19:46:40.882893955Z" level=info msg="RemovePodSandbox for \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\"" Feb 13 19:46:40.882924 containerd[1805]: time="2025-02-13T19:46:40.882905421Z" level=info msg="Forcibly stopping sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\"" Feb 13 19:46:40.883008 containerd[1805]: time="2025-02-13T19:46:40.882959452Z" level=info msg="TearDown network for sandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" successfully" Feb 13 19:46:40.884116 containerd[1805]: time="2025-02-13T19:46:40.884073926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.884169 containerd[1805]: time="2025-02-13T19:46:40.884117239Z" level=info msg="RemovePodSandbox \"a76e4eaee661cfd563bcd392434c458182000015a3325e637abcb2534d276298\" returns successfully" Feb 13 19:46:40.884287 containerd[1805]: time="2025-02-13T19:46:40.884253344Z" level=info msg="StopPodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\"" Feb 13 19:46:40.884319 containerd[1805]: time="2025-02-13T19:46:40.884311896Z" level=info msg="TearDown network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" successfully" Feb 13 19:46:40.884342 containerd[1805]: time="2025-02-13T19:46:40.884319743Z" level=info msg="StopPodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" returns successfully" Feb 13 19:46:40.884488 containerd[1805]: time="2025-02-13T19:46:40.884414655Z" level=info msg="RemovePodSandbox for \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\"" Feb 13 19:46:40.884488 containerd[1805]: time="2025-02-13T19:46:40.884431900Z" level=info msg="Forcibly stopping sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\"" Feb 13 19:46:40.884542 containerd[1805]: time="2025-02-13T19:46:40.884494717Z" level=info msg="TearDown network for sandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" successfully" Feb 13 19:46:40.901953 containerd[1805]: time="2025-02-13T19:46:40.901909943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.901953 containerd[1805]: time="2025-02-13T19:46:40.901953639Z" level=info msg="RemovePodSandbox \"273e7d864ff43274281d57fb2f27941c5b34017810dda5ef7ecc8294315a449b\" returns successfully" Feb 13 19:46:40.902118 containerd[1805]: time="2025-02-13T19:46:40.902064157Z" level=info msg="StopPodSandbox for \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\"" Feb 13 19:46:40.902151 containerd[1805]: time="2025-02-13T19:46:40.902141178Z" level=info msg="TearDown network for sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\" successfully" Feb 13 19:46:40.902151 containerd[1805]: time="2025-02-13T19:46:40.902148382Z" level=info msg="StopPodSandbox for \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\" returns successfully" Feb 13 19:46:40.902318 containerd[1805]: time="2025-02-13T19:46:40.902279647Z" level=info msg="RemovePodSandbox for \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\"" Feb 13 19:46:40.902318 containerd[1805]: time="2025-02-13T19:46:40.902292196Z" level=info msg="Forcibly stopping sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\"" Feb 13 19:46:40.902356 containerd[1805]: time="2025-02-13T19:46:40.902330849Z" level=info msg="TearDown network for sandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\" successfully" Feb 13 19:46:40.903574 containerd[1805]: time="2025-02-13T19:46:40.903517607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.903574 containerd[1805]: time="2025-02-13T19:46:40.903552142Z" level=info msg="RemovePodSandbox \"3d0ee9cc9d198a644926e3567cfe87b5412c43401da9974690432ddf97964780\" returns successfully" Feb 13 19:46:40.903790 containerd[1805]: time="2025-02-13T19:46:40.903749001Z" level=info msg="StopPodSandbox for \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\"" Feb 13 19:46:40.903839 containerd[1805]: time="2025-02-13T19:46:40.903824188Z" level=info msg="TearDown network for sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\" successfully" Feb 13 19:46:40.903839 containerd[1805]: time="2025-02-13T19:46:40.903830807Z" level=info msg="StopPodSandbox for \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\" returns successfully" Feb 13 19:46:40.904006 containerd[1805]: time="2025-02-13T19:46:40.903965594Z" level=info msg="RemovePodSandbox for \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\"" Feb 13 19:46:40.904006 containerd[1805]: time="2025-02-13T19:46:40.903977178Z" level=info msg="Forcibly stopping sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\"" Feb 13 19:46:40.904073 containerd[1805]: time="2025-02-13T19:46:40.904007355Z" level=info msg="TearDown network for sandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\" successfully" Feb 13 19:46:40.905149 containerd[1805]: time="2025-02-13T19:46:40.905139242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.905171 containerd[1805]: time="2025-02-13T19:46:40.905157744Z" level=info msg="RemovePodSandbox \"f591b3689d85817516cd404dcb28ead0b5ffd0a594a13bb523640ede1db5693a\" returns successfully" Feb 13 19:46:40.905359 containerd[1805]: time="2025-02-13T19:46:40.905325733Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" Feb 13 19:46:40.905379 containerd[1805]: time="2025-02-13T19:46:40.905365313Z" level=info msg="TearDown network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" successfully" Feb 13 19:46:40.905379 containerd[1805]: time="2025-02-13T19:46:40.905371658Z" level=info msg="StopPodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" returns successfully" Feb 13 19:46:40.905625 containerd[1805]: time="2025-02-13T19:46:40.905573526Z" level=info msg="RemovePodSandbox for \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" Feb 13 19:46:40.905625 containerd[1805]: time="2025-02-13T19:46:40.905585622Z" level=info msg="Forcibly stopping sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\"" Feb 13 19:46:40.905709 containerd[1805]: time="2025-02-13T19:46:40.905658335Z" level=info msg="TearDown network for sandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" successfully" Feb 13 19:46:40.906763 containerd[1805]: time="2025-02-13T19:46:40.906723560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.906763 containerd[1805]: time="2025-02-13T19:46:40.906742165Z" level=info msg="RemovePodSandbox \"45c87c61ef4cd93a6712ac1a472c5dc83746a8c57cbce7faf17a97105703dc6f\" returns successfully" Feb 13 19:46:40.906988 containerd[1805]: time="2025-02-13T19:46:40.906913471Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\"" Feb 13 19:46:40.906988 containerd[1805]: time="2025-02-13T19:46:40.906981495Z" level=info msg="TearDown network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" successfully" Feb 13 19:46:40.906988 containerd[1805]: time="2025-02-13T19:46:40.906987702Z" level=info msg="StopPodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" returns successfully" Feb 13 19:46:40.907196 containerd[1805]: time="2025-02-13T19:46:40.907139099Z" level=info msg="RemovePodSandbox for \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\"" Feb 13 19:46:40.907196 containerd[1805]: time="2025-02-13T19:46:40.907168615Z" level=info msg="Forcibly stopping sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\"" Feb 13 19:46:40.907259 containerd[1805]: time="2025-02-13T19:46:40.907230524Z" level=info msg="TearDown network for sandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" successfully" Feb 13 19:46:40.908352 containerd[1805]: time="2025-02-13T19:46:40.908340281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.908443 containerd[1805]: time="2025-02-13T19:46:40.908383019Z" level=info msg="RemovePodSandbox \"83e7edb6536af5c49bf3dcdb8edee0f773f777fb86a2c95fed050893a903a67c\" returns successfully" Feb 13 19:46:40.908621 containerd[1805]: time="2025-02-13T19:46:40.908588423Z" level=info msg="StopPodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\"" Feb 13 19:46:40.908659 containerd[1805]: time="2025-02-13T19:46:40.908646075Z" level=info msg="TearDown network for sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" successfully" Feb 13 19:46:40.908659 containerd[1805]: time="2025-02-13T19:46:40.908652487Z" level=info msg="StopPodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" returns successfully" Feb 13 19:46:40.908892 containerd[1805]: time="2025-02-13T19:46:40.908846872Z" level=info msg="RemovePodSandbox for \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\"" Feb 13 19:46:40.908892 containerd[1805]: time="2025-02-13T19:46:40.908858689Z" level=info msg="Forcibly stopping sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\"" Feb 13 19:46:40.908957 containerd[1805]: time="2025-02-13T19:46:40.908889983Z" level=info msg="TearDown network for sandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" successfully" Feb 13 19:46:40.910003 containerd[1805]: time="2025-02-13T19:46:40.909992257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.910030 containerd[1805]: time="2025-02-13T19:46:40.910010368Z" level=info msg="RemovePodSandbox \"515e2dbbd1be1c8fb85c1e550f65c5d2a7c456a8e8d1bd8d101a078c05308133\" returns successfully" Feb 13 19:46:40.910201 containerd[1805]: time="2025-02-13T19:46:40.910168359Z" level=info msg="StopPodSandbox for \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\"" Feb 13 19:46:40.910252 containerd[1805]: time="2025-02-13T19:46:40.910238608Z" level=info msg="TearDown network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\" successfully" Feb 13 19:46:40.910252 containerd[1805]: time="2025-02-13T19:46:40.910244551Z" level=info msg="StopPodSandbox for \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\" returns successfully" Feb 13 19:46:40.910403 containerd[1805]: time="2025-02-13T19:46:40.910393133Z" level=info msg="RemovePodSandbox for \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\"" Feb 13 19:46:40.910430 containerd[1805]: time="2025-02-13T19:46:40.910404031Z" level=info msg="Forcibly stopping sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\"" Feb 13 19:46:40.910475 containerd[1805]: time="2025-02-13T19:46:40.910440078Z" level=info msg="TearDown network for sandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\" successfully" Feb 13 19:46:40.911590 containerd[1805]: time="2025-02-13T19:46:40.911549128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.911590 containerd[1805]: time="2025-02-13T19:46:40.911565521Z" level=info msg="RemovePodSandbox \"13e4aa195b30d4eb9cefcecd0a1cebc44189e63e51154716b79006b8f7b53aff\" returns successfully" Feb 13 19:46:40.911878 containerd[1805]: time="2025-02-13T19:46:40.911821022Z" level=info msg="StopPodSandbox for \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\"" Feb 13 19:46:40.911924 containerd[1805]: time="2025-02-13T19:46:40.911885761Z" level=info msg="TearDown network for sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\" successfully" Feb 13 19:46:40.911924 containerd[1805]: time="2025-02-13T19:46:40.911892234Z" level=info msg="StopPodSandbox for \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\" returns successfully" Feb 13 19:46:40.912228 containerd[1805]: time="2025-02-13T19:46:40.912170536Z" level=info msg="RemovePodSandbox for \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\"" Feb 13 19:46:40.912228 containerd[1805]: time="2025-02-13T19:46:40.912181509Z" level=info msg="Forcibly stopping sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\"" Feb 13 19:46:40.912296 containerd[1805]: time="2025-02-13T19:46:40.912256765Z" level=info msg="TearDown network for sandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\" successfully" Feb 13 19:46:40.913369 containerd[1805]: time="2025-02-13T19:46:40.913326422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.913369 containerd[1805]: time="2025-02-13T19:46:40.913367302Z" level=info msg="RemovePodSandbox \"2dc41cc4f9f38b25ede16d8cd1afd6e66411dd90e205f0af6dab3a956b9015f1\" returns successfully" Feb 13 19:46:40.913592 containerd[1805]: time="2025-02-13T19:46:40.913549812Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" Feb 13 19:46:40.913659 containerd[1805]: time="2025-02-13T19:46:40.913643043Z" level=info msg="TearDown network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" successfully" Feb 13 19:46:40.913659 containerd[1805]: time="2025-02-13T19:46:40.913650363Z" level=info msg="StopPodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" returns successfully" Feb 13 19:46:40.913868 containerd[1805]: time="2025-02-13T19:46:40.913838517Z" level=info msg="RemovePodSandbox for \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" Feb 13 19:46:40.913941 containerd[1805]: time="2025-02-13T19:46:40.913868442Z" level=info msg="Forcibly stopping sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\"" Feb 13 19:46:40.913978 containerd[1805]: time="2025-02-13T19:46:40.913931722Z" level=info msg="TearDown network for sandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" successfully" Feb 13 19:46:40.915078 containerd[1805]: time="2025-02-13T19:46:40.915033914Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.915078 containerd[1805]: time="2025-02-13T19:46:40.915075021Z" level=info msg="RemovePodSandbox \"12474318e4f6f721cac7083f9ef1f395d02fee9ab3fb10b76216dc654e5175bb\" returns successfully" Feb 13 19:46:40.915345 containerd[1805]: time="2025-02-13T19:46:40.915289168Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\"" Feb 13 19:46:40.915379 containerd[1805]: time="2025-02-13T19:46:40.915360783Z" level=info msg="TearDown network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" successfully" Feb 13 19:46:40.915379 containerd[1805]: time="2025-02-13T19:46:40.915369518Z" level=info msg="StopPodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" returns successfully" Feb 13 19:46:40.915575 containerd[1805]: time="2025-02-13T19:46:40.915516328Z" level=info msg="RemovePodSandbox for \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\"" Feb 13 19:46:40.915575 containerd[1805]: time="2025-02-13T19:46:40.915547531Z" level=info msg="Forcibly stopping sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\"" Feb 13 19:46:40.915667 containerd[1805]: time="2025-02-13T19:46:40.915599592Z" level=info msg="TearDown network for sandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" successfully" Feb 13 19:46:40.916838 containerd[1805]: time="2025-02-13T19:46:40.916784469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.916838 containerd[1805]: time="2025-02-13T19:46:40.916802030Z" level=info msg="RemovePodSandbox \"ce3e2500fc4278415f6ebbad079169c382c4f49b1ed76d104c93e357f67447a4\" returns successfully" Feb 13 19:46:40.917086 containerd[1805]: time="2025-02-13T19:46:40.917030923Z" level=info msg="StopPodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\"" Feb 13 19:46:40.917122 containerd[1805]: time="2025-02-13T19:46:40.917102330Z" level=info msg="TearDown network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" successfully" Feb 13 19:46:40.917122 containerd[1805]: time="2025-02-13T19:46:40.917109108Z" level=info msg="StopPodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" returns successfully" Feb 13 19:46:40.917339 containerd[1805]: time="2025-02-13T19:46:40.917298921Z" level=info msg="RemovePodSandbox for \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\"" Feb 13 19:46:40.917339 containerd[1805]: time="2025-02-13T19:46:40.917311442Z" level=info msg="Forcibly stopping sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\"" Feb 13 19:46:40.917390 containerd[1805]: time="2025-02-13T19:46:40.917344568Z" level=info msg="TearDown network for sandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" successfully" Feb 13 19:46:40.918463 containerd[1805]: time="2025-02-13T19:46:40.918417319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.918463 containerd[1805]: time="2025-02-13T19:46:40.918459377Z" level=info msg="RemovePodSandbox \"cd78dbf308776311aab128402becbbd5898d7d765367e436b435cde3f537df0d\" returns successfully" Feb 13 19:46:40.918623 containerd[1805]: time="2025-02-13T19:46:40.918591091Z" level=info msg="StopPodSandbox for \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\"" Feb 13 19:46:40.918689 containerd[1805]: time="2025-02-13T19:46:40.918664848Z" level=info msg="TearDown network for sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\" successfully" Feb 13 19:46:40.918689 containerd[1805]: time="2025-02-13T19:46:40.918685928Z" level=info msg="StopPodSandbox for \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\" returns successfully" Feb 13 19:46:40.918910 containerd[1805]: time="2025-02-13T19:46:40.918878713Z" level=info msg="RemovePodSandbox for \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\"" Feb 13 19:46:40.918910 containerd[1805]: time="2025-02-13T19:46:40.918905453Z" level=info msg="Forcibly stopping sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\"" Feb 13 19:46:40.918994 containerd[1805]: time="2025-02-13T19:46:40.918936720Z" level=info msg="TearDown network for sandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\" successfully" Feb 13 19:46:40.920189 containerd[1805]: time="2025-02-13T19:46:40.920147076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.920189 containerd[1805]: time="2025-02-13T19:46:40.920187680Z" level=info msg="RemovePodSandbox \"9093a30fc2f5cae8b2ce8547aa8a9c41cf2b97877a8b2f1062b62db2d5c43068\" returns successfully" Feb 13 19:46:40.920405 containerd[1805]: time="2025-02-13T19:46:40.920365727Z" level=info msg="StopPodSandbox for \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\"" Feb 13 19:46:40.920459 containerd[1805]: time="2025-02-13T19:46:40.920408541Z" level=info msg="TearDown network for sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\" successfully" Feb 13 19:46:40.920459 containerd[1805]: time="2025-02-13T19:46:40.920415607Z" level=info msg="StopPodSandbox for \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\" returns successfully" Feb 13 19:46:40.920665 containerd[1805]: time="2025-02-13T19:46:40.920630744Z" level=info msg="RemovePodSandbox for \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\"" Feb 13 19:46:40.920665 containerd[1805]: time="2025-02-13T19:46:40.920660066Z" level=info msg="Forcibly stopping sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\"" Feb 13 19:46:40.920767 containerd[1805]: time="2025-02-13T19:46:40.920710184Z" level=info msg="TearDown network for sandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\" successfully" Feb 13 19:46:40.921896 containerd[1805]: time="2025-02-13T19:46:40.921855670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.921896 containerd[1805]: time="2025-02-13T19:46:40.921873707Z" level=info msg="RemovePodSandbox \"42255449bd41deb01db183135e0eed2b08b49473645455cebbcc24d5d22eb0a4\" returns successfully" Feb 13 19:46:40.922211 containerd[1805]: time="2025-02-13T19:46:40.922168254Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" Feb 13 19:46:40.922295 containerd[1805]: time="2025-02-13T19:46:40.922230097Z" level=info msg="TearDown network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" successfully" Feb 13 19:46:40.922295 containerd[1805]: time="2025-02-13T19:46:40.922236384Z" level=info msg="StopPodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" returns successfully" Feb 13 19:46:40.922381 containerd[1805]: time="2025-02-13T19:46:40.922370898Z" level=info msg="RemovePodSandbox for \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" Feb 13 19:46:40.922403 containerd[1805]: time="2025-02-13T19:46:40.922384259Z" level=info msg="Forcibly stopping sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\"" Feb 13 19:46:40.922514 containerd[1805]: time="2025-02-13T19:46:40.922414558Z" level=info msg="TearDown network for sandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" successfully" Feb 13 19:46:40.923716 containerd[1805]: time="2025-02-13T19:46:40.923671643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.923716 containerd[1805]: time="2025-02-13T19:46:40.923714022Z" level=info msg="RemovePodSandbox \"40cc26bfd58a26e7cb37e7b5a69dffb2a31831b9cdfaa60054c153d9762e12fa\" returns successfully" Feb 13 19:46:40.924027 containerd[1805]: time="2025-02-13T19:46:40.923969451Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\"" Feb 13 19:46:40.924102 containerd[1805]: time="2025-02-13T19:46:40.924055572Z" level=info msg="TearDown network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" successfully" Feb 13 19:46:40.924102 containerd[1805]: time="2025-02-13T19:46:40.924062018Z" level=info msg="StopPodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" returns successfully" Feb 13 19:46:40.924347 containerd[1805]: time="2025-02-13T19:46:40.924307990Z" level=info msg="RemovePodSandbox for \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\"" Feb 13 19:46:40.924347 containerd[1805]: time="2025-02-13T19:46:40.924319830Z" level=info msg="Forcibly stopping sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\"" Feb 13 19:46:40.924393 containerd[1805]: time="2025-02-13T19:46:40.924355504Z" level=info msg="TearDown network for sandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" successfully" Feb 13 19:46:40.925518 containerd[1805]: time="2025-02-13T19:46:40.925476462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.925518 containerd[1805]: time="2025-02-13T19:46:40.925492666Z" level=info msg="RemovePodSandbox \"86d53dc9a54aa1591336fabe505d737e9ac9ca61810835e49d34bd02140a924c\" returns successfully" Feb 13 19:46:40.925787 containerd[1805]: time="2025-02-13T19:46:40.925740754Z" level=info msg="StopPodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\"" Feb 13 19:46:40.925787 containerd[1805]: time="2025-02-13T19:46:40.925779652Z" level=info msg="TearDown network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" successfully" Feb 13 19:46:40.925787 containerd[1805]: time="2025-02-13T19:46:40.925785912Z" level=info msg="StopPodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" returns successfully" Feb 13 19:46:40.925994 containerd[1805]: time="2025-02-13T19:46:40.925941571Z" level=info msg="RemovePodSandbox for \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\"" Feb 13 19:46:40.925994 containerd[1805]: time="2025-02-13T19:46:40.925965753Z" level=info msg="Forcibly stopping sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\"" Feb 13 19:46:40.926097 containerd[1805]: time="2025-02-13T19:46:40.926018949Z" level=info msg="TearDown network for sandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" successfully" Feb 13 19:46:40.927222 containerd[1805]: time="2025-02-13T19:46:40.927180473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.927222 containerd[1805]: time="2025-02-13T19:46:40.927197858Z" level=info msg="RemovePodSandbox \"27d8eb9af9f42e9a53c98084e642c5a6c62e6a1a37b6fe8e8b95f03324695b3d\" returns successfully" Feb 13 19:46:40.927347 containerd[1805]: time="2025-02-13T19:46:40.927337103Z" level=info msg="StopPodSandbox for \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\"" Feb 13 19:46:40.927384 containerd[1805]: time="2025-02-13T19:46:40.927376285Z" level=info msg="TearDown network for sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\" successfully" Feb 13 19:46:40.927408 containerd[1805]: time="2025-02-13T19:46:40.927383486Z" level=info msg="StopPodSandbox for \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\" returns successfully" Feb 13 19:46:40.927617 containerd[1805]: time="2025-02-13T19:46:40.927554833Z" level=info msg="RemovePodSandbox for \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\"" Feb 13 19:46:40.927617 containerd[1805]: time="2025-02-13T19:46:40.927593513Z" level=info msg="Forcibly stopping sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\"" Feb 13 19:46:40.927673 containerd[1805]: time="2025-02-13T19:46:40.927640705Z" level=info msg="TearDown network for sandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\" successfully" Feb 13 19:46:40.928821 containerd[1805]: time="2025-02-13T19:46:40.928781518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.928821 containerd[1805]: time="2025-02-13T19:46:40.928798797Z" level=info msg="RemovePodSandbox \"07a3d54a09a9b21a85afe18dfe2963f03ca1a2e398eb9b9070e48712833bd212\" returns successfully" Feb 13 19:46:40.929103 containerd[1805]: time="2025-02-13T19:46:40.929066296Z" level=info msg="StopPodSandbox for \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\"" Feb 13 19:46:40.929159 containerd[1805]: time="2025-02-13T19:46:40.929143089Z" level=info msg="TearDown network for sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\" successfully" Feb 13 19:46:40.929159 containerd[1805]: time="2025-02-13T19:46:40.929149194Z" level=info msg="StopPodSandbox for \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\" returns successfully" Feb 13 19:46:40.929327 containerd[1805]: time="2025-02-13T19:46:40.929288902Z" level=info msg="RemovePodSandbox for \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\"" Feb 13 19:46:40.929327 containerd[1805]: time="2025-02-13T19:46:40.929301472Z" level=info msg="Forcibly stopping sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\"" Feb 13 19:46:40.929376 containerd[1805]: time="2025-02-13T19:46:40.929333665Z" level=info msg="TearDown network for sandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\" successfully" Feb 13 19:46:40.930498 containerd[1805]: time="2025-02-13T19:46:40.930435341Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.930498 containerd[1805]: time="2025-02-13T19:46:40.930468340Z" level=info msg="RemovePodSandbox \"83b2d907235c0a11a34679a7af1d5627566cabef49812299411e9adcc0579e86\" returns successfully" Feb 13 19:46:40.930731 containerd[1805]: time="2025-02-13T19:46:40.930668994Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" Feb 13 19:46:40.930800 containerd[1805]: time="2025-02-13T19:46:40.930740828Z" level=info msg="TearDown network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" successfully" Feb 13 19:46:40.930800 containerd[1805]: time="2025-02-13T19:46:40.930762386Z" level=info msg="StopPodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" returns successfully" Feb 13 19:46:40.930956 containerd[1805]: time="2025-02-13T19:46:40.930921827Z" level=info msg="RemovePodSandbox for \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" Feb 13 19:46:40.930956 containerd[1805]: time="2025-02-13T19:46:40.930948961Z" level=info msg="Forcibly stopping sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\"" Feb 13 19:46:40.931018 containerd[1805]: time="2025-02-13T19:46:40.930990085Z" level=info msg="TearDown network for sandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" successfully" Feb 13 19:46:40.932242 containerd[1805]: time="2025-02-13T19:46:40.932203011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.932242 containerd[1805]: time="2025-02-13T19:46:40.932220270Z" level=info msg="RemovePodSandbox \"171aaacb6955d25fb1b3e8a93959e2fe163538dfa3fba461e8d6106fdaceba89\" returns successfully" Feb 13 19:46:40.932376 containerd[1805]: time="2025-02-13T19:46:40.932366582Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\"" Feb 13 19:46:40.932411 containerd[1805]: time="2025-02-13T19:46:40.932404418Z" level=info msg="TearDown network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" successfully" Feb 13 19:46:40.932472 containerd[1805]: time="2025-02-13T19:46:40.932410701Z" level=info msg="StopPodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" returns successfully" Feb 13 19:46:40.932610 containerd[1805]: time="2025-02-13T19:46:40.932574286Z" level=info msg="RemovePodSandbox for \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\"" Feb 13 19:46:40.932610 containerd[1805]: time="2025-02-13T19:46:40.932605637Z" level=info msg="Forcibly stopping sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\"" Feb 13 19:46:40.932700 containerd[1805]: time="2025-02-13T19:46:40.932678161Z" level=info msg="TearDown network for sandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" successfully" Feb 13 19:46:40.933934 containerd[1805]: time="2025-02-13T19:46:40.933894532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.933934 containerd[1805]: time="2025-02-13T19:46:40.933911710Z" level=info msg="RemovePodSandbox \"8065894b6a1abeb1668ff49ae8d1a1408a0af9ccf6106a60de8a212a434edc9e\" returns successfully" Feb 13 19:46:40.934160 containerd[1805]: time="2025-02-13T19:46:40.934118946Z" level=info msg="StopPodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\"" Feb 13 19:46:40.934206 containerd[1805]: time="2025-02-13T19:46:40.934180866Z" level=info msg="TearDown network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" successfully" Feb 13 19:46:40.934206 containerd[1805]: time="2025-02-13T19:46:40.934187196Z" level=info msg="StopPodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" returns successfully" Feb 13 19:46:40.934305 containerd[1805]: time="2025-02-13T19:46:40.934295591Z" level=info msg="RemovePodSandbox for \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\"" Feb 13 19:46:40.934328 containerd[1805]: time="2025-02-13T19:46:40.934306912Z" level=info msg="Forcibly stopping sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\"" Feb 13 19:46:40.934351 containerd[1805]: time="2025-02-13T19:46:40.934337028Z" level=info msg="TearDown network for sandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" successfully" Feb 13 19:46:40.935511 containerd[1805]: time="2025-02-13T19:46:40.935426623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.935511 containerd[1805]: time="2025-02-13T19:46:40.935479567Z" level=info msg="RemovePodSandbox \"64b94a48217994b33c55bb7987c78d0f884f37cbe7f8cf23e7deaf8ba4c86bbe\" returns successfully" Feb 13 19:46:40.935661 containerd[1805]: time="2025-02-13T19:46:40.935619141Z" level=info msg="StopPodSandbox for \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\"" Feb 13 19:46:40.935728 containerd[1805]: time="2025-02-13T19:46:40.935684013Z" level=info msg="TearDown network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\" successfully" Feb 13 19:46:40.935728 containerd[1805]: time="2025-02-13T19:46:40.935690690Z" level=info msg="StopPodSandbox for \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\" returns successfully" Feb 13 19:46:40.935871 containerd[1805]: time="2025-02-13T19:46:40.935813504Z" level=info msg="RemovePodSandbox for \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\"" Feb 13 19:46:40.935871 containerd[1805]: time="2025-02-13T19:46:40.935844030Z" level=info msg="Forcibly stopping sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\"" Feb 13 19:46:40.935937 containerd[1805]: time="2025-02-13T19:46:40.935891280Z" level=info msg="TearDown network for sandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\" successfully" Feb 13 19:46:40.937056 containerd[1805]: time="2025-02-13T19:46:40.937011452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.937056 containerd[1805]: time="2025-02-13T19:46:40.937054834Z" level=info msg="RemovePodSandbox \"ed2ad3d19fc950de171eb3cf6c51622d2afb4aef5ea2ef8a88ae4d2b1b530489\" returns successfully" Feb 13 19:46:40.937271 containerd[1805]: time="2025-02-13T19:46:40.937233148Z" level=info msg="StopPodSandbox for \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\"" Feb 13 19:46:40.937333 containerd[1805]: time="2025-02-13T19:46:40.937294546Z" level=info msg="TearDown network for sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\" successfully" Feb 13 19:46:40.937333 containerd[1805]: time="2025-02-13T19:46:40.937301566Z" level=info msg="StopPodSandbox for \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\" returns successfully" Feb 13 19:46:40.937421 containerd[1805]: time="2025-02-13T19:46:40.937409193Z" level=info msg="RemovePodSandbox for \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\"" Feb 13 19:46:40.937461 containerd[1805]: time="2025-02-13T19:46:40.937424798Z" level=info msg="Forcibly stopping sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\"" Feb 13 19:46:40.937547 containerd[1805]: time="2025-02-13T19:46:40.937496268Z" level=info msg="TearDown network for sandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\" successfully" Feb 13 19:46:40.938644 containerd[1805]: time="2025-02-13T19:46:40.938633676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:46:40.938672 containerd[1805]: time="2025-02-13T19:46:40.938651243Z" level=info msg="RemovePodSandbox \"69a434fd10ad240732bbebc25134f8c4622085ed22500f6716a8079731e5c35b\" returns successfully" Feb 13 19:50:04.874742 update_engine[1793]: I20250213 19:50:04.874594 1793 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 19:50:04.874742 update_engine[1793]: I20250213 19:50:04.874702 1793 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 19:50:04.875918 update_engine[1793]: I20250213 19:50:04.875083 1793 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 19:50:04.876275 update_engine[1793]: I20250213 19:50:04.876186 1793 omaha_request_params.cc:62] Current group set to beta Feb 13 19:50:04.876510 update_engine[1793]: I20250213 19:50:04.876454 1793 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 19:50:04.876510 update_engine[1793]: I20250213 19:50:04.876493 1793 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 19:50:04.876730 update_engine[1793]: I20250213 19:50:04.876535 1793 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:50:04.876730 update_engine[1793]: I20250213 19:50:04.876638 1793 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 19:50:04.876967 update_engine[1793]: I20250213 19:50:04.876811 1793 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:50:04.876967 update_engine[1793]: I20250213 19:50:04.876845 1793 omaha_request_action.cc:272] Request: Feb 13 19:50:04.876967 update_engine[1793]: [Omaha request XML body not captured in this extract] Feb 13 19:50:04.876967 update_engine[1793]: I20250213 19:50:04.876863 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:50:04.877991 locksmithd[1842]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 19:50:04.880788 update_engine[1793]: I20250213 19:50:04.880692 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:50:04.881684 update_engine[1793]: I20250213 19:50:04.881588 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 19:50:04.881902 update_engine[1793]: E20250213 19:50:04.881841 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:50:04.882130 update_engine[1793]: I20250213 19:50:04.882031 1793 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 19:50:14.785293 update_engine[1793]: I20250213 19:50:14.785117 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:50:14.787351 update_engine[1793]: I20250213 19:50:14.785715 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:50:14.787351 update_engine[1793]: I20250213 19:50:14.786324 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:50:14.787351 update_engine[1793]: E20250213 19:50:14.786703 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:50:14.787351 update_engine[1793]: I20250213 19:50:14.786830 1793 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 19:50:24.785147 update_engine[1793]: I20250213 19:50:24.784976 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:50:24.786138 update_engine[1793]: I20250213 19:50:24.785578 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:50:24.786254 update_engine[1793]: I20250213 19:50:24.786165 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 19:50:24.786584 update_engine[1793]: E20250213 19:50:24.786489 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:50:24.786790 update_engine[1793]: I20250213 19:50:24.786609 1793 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 19:50:34.784932 update_engine[1793]: I20250213 19:50:34.784765 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:50:34.785933 update_engine[1793]: I20250213 19:50:34.785306 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:50:34.786053 update_engine[1793]: I20250213 19:50:34.785914 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:50:34.786456 update_engine[1793]: E20250213 19:50:34.786341 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:50:34.786622 update_engine[1793]: I20250213 19:50:34.786499 1793 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:50:34.786622 update_engine[1793]: I20250213 19:50:34.786536 1793 omaha_request_action.cc:617] Omaha request response: Feb 13 19:50:34.786805 update_engine[1793]: E20250213 19:50:34.786698 1793 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 19:50:34.786805 update_engine[1793]: I20250213 19:50:34.786748 1793 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 19:50:34.786805 update_engine[1793]: I20250213 19:50:34.786767 1793 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:50:34.786805 update_engine[1793]: I20250213 19:50:34.786783 1793 update_attempter.cc:306] Processing Done. Feb 13 19:50:34.787149 update_engine[1793]: E20250213 19:50:34.786816 1793 update_attempter.cc:619] Update failed. 
Feb 13 19:50:34.787149 update_engine[1793]: I20250213 19:50:34.786834  1793 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 19:50:34.787149 update_engine[1793]: I20250213 19:50:34.786850  1793 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 19:50:34.787149 update_engine[1793]: I20250213 19:50:34.786867  1793 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 19:50:34.787149 update_engine[1793]: I20250213 19:50:34.787024  1793 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 19:50:34.787149 update_engine[1793]: I20250213 19:50:34.787086  1793 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 19:50:34.787149 update_engine[1793]: I20250213 19:50:34.787106  1793 omaha_request_action.cc:272] Request:
Feb 13 19:50:34.787149 update_engine[1793]:
Feb 13 19:50:34.787149 update_engine[1793]:
Feb 13 19:50:34.787149 update_engine[1793]:
Feb 13 19:50:34.787149 update_engine[1793]:
Feb 13 19:50:34.787149 update_engine[1793]:
Feb 13 19:50:34.787149 update_engine[1793]:
Feb 13 19:50:34.787149 update_engine[1793]: I20250213 19:50:34.787123  1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:50:34.788289 update_engine[1793]: I20250213 19:50:34.787559  1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:50:34.788289 update_engine[1793]: I20250213 19:50:34.788041  1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:50:34.788487 locksmithd[1842]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 19:50:34.789089 update_engine[1793]: E20250213 19:50:34.788396  1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:50:34.789089 update_engine[1793]: I20250213 19:50:34.788569  1793 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 19:50:34.789089 update_engine[1793]: I20250213 19:50:34.788604  1793 omaha_request_action.cc:617] Omaha request response:
Feb 13 19:50:34.789089 update_engine[1793]: I20250213 19:50:34.788625  1793 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:50:34.789089 update_engine[1793]: I20250213 19:50:34.788640  1793 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:50:34.789089 update_engine[1793]: I20250213 19:50:34.788655  1793 update_attempter.cc:306] Processing Done.
Feb 13 19:50:34.789089 update_engine[1793]: I20250213 19:50:34.788672  1793 update_attempter.cc:310] Error event sent.
Feb 13 19:50:34.789089 update_engine[1793]: I20250213 19:50:34.788698  1793 update_check_scheduler.cc:74] Next update check in 46m9s
Feb 13 19:50:34.789861 locksmithd[1842]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 19:51:55.713355 systemd[1]: Started sshd@9-147.28.180.89:22-139.178.89.65:51274.service - OpenSSH per-connection server daemon (139.178.89.65:51274).
Feb 13 19:51:55.774666 sshd[7774]: Accepted publickey for core from 139.178.89.65 port 51274 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:51:55.775325 sshd-session[7774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:55.778140 systemd-logind[1788]: New session 12 of user core.
Feb 13 19:51:55.788562 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:51:55.924716 sshd[7776]: Connection closed by 139.178.89.65 port 51274
Feb 13 19:51:55.924927 sshd-session[7774]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:55.926610 systemd[1]: sshd@9-147.28.180.89:22-139.178.89.65:51274.service: Deactivated successfully.
Feb 13 19:51:55.927639 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:51:55.928380 systemd-logind[1788]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:51:55.929099 systemd-logind[1788]: Removed session 12.
Feb 13 19:52:00.941535 systemd[1]: Started sshd@10-147.28.180.89:22-139.178.89.65:51286.service - OpenSSH per-connection server daemon (139.178.89.65:51286).
Feb 13 19:52:00.974024 sshd[7804]: Accepted publickey for core from 139.178.89.65 port 51286 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:00.974691 sshd-session[7804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:00.977245 systemd-logind[1788]: New session 13 of user core.
Feb 13 19:52:00.998542 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:52:01.083773 sshd[7806]: Connection closed by 139.178.89.65 port 51286
Feb 13 19:52:01.083936 sshd-session[7804]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:01.085542 systemd[1]: sshd@10-147.28.180.89:22-139.178.89.65:51286.service: Deactivated successfully.
Feb 13 19:52:01.086433 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:52:01.087187 systemd-logind[1788]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:52:01.087738 systemd-logind[1788]: Removed session 13.
Feb 13 19:52:06.111285 systemd[1]: Started sshd@11-147.28.180.89:22-139.178.89.65:58018.service - OpenSSH per-connection server daemon (139.178.89.65:58018).
Feb 13 19:52:06.157652 sshd[7834]: Accepted publickey for core from 139.178.89.65 port 58018 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:06.158343 sshd-session[7834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:06.161038 systemd-logind[1788]: New session 14 of user core.
Feb 13 19:52:06.170582 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:52:06.254204 sshd[7836]: Connection closed by 139.178.89.65 port 58018
Feb 13 19:52:06.254363 sshd-session[7834]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:06.273224 systemd[1]: sshd@11-147.28.180.89:22-139.178.89.65:58018.service: Deactivated successfully.
Feb 13 19:52:06.274167 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:52:06.275007 systemd-logind[1788]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:52:06.275918 systemd[1]: Started sshd@12-147.28.180.89:22-139.178.89.65:58034.service - OpenSSH per-connection server daemon (139.178.89.65:58034).
Feb 13 19:52:06.276570 systemd-logind[1788]: Removed session 14.
Feb 13 19:52:06.308228 sshd[7861]: Accepted publickey for core from 139.178.89.65 port 58034 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:06.308863 sshd-session[7861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:06.311329 systemd-logind[1788]: New session 15 of user core.
Feb 13 19:52:06.323712 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:52:06.452245 sshd[7863]: Connection closed by 139.178.89.65 port 58034
Feb 13 19:52:06.452383 sshd-session[7861]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:06.464170 systemd[1]: sshd@12-147.28.180.89:22-139.178.89.65:58034.service: Deactivated successfully.
Feb 13 19:52:06.465042 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:52:06.465784 systemd-logind[1788]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:52:06.466467 systemd[1]: Started sshd@13-147.28.180.89:22-139.178.89.65:58044.service - OpenSSH per-connection server daemon (139.178.89.65:58044).
Feb 13 19:52:06.466985 systemd-logind[1788]: Removed session 15.
Feb 13 19:52:06.502682 sshd[7885]: Accepted publickey for core from 139.178.89.65 port 58044 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:06.503355 sshd-session[7885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:06.506085 systemd-logind[1788]: New session 16 of user core.
Feb 13 19:52:06.524689 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:52:06.652652 sshd[7889]: Connection closed by 139.178.89.65 port 58044
Feb 13 19:52:06.652829 sshd-session[7885]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:06.654620 systemd[1]: sshd@13-147.28.180.89:22-139.178.89.65:58044.service: Deactivated successfully.
Feb 13 19:52:06.655440 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:52:06.655841 systemd-logind[1788]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:52:06.656363 systemd-logind[1788]: Removed session 16.
Feb 13 19:52:11.680755 systemd[1]: Started sshd@14-147.28.180.89:22-139.178.89.65:58056.service - OpenSSH per-connection server daemon (139.178.89.65:58056).
Feb 13 19:52:11.709659 sshd[7966]: Accepted publickey for core from 139.178.89.65 port 58056 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:11.710268 sshd-session[7966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:11.712948 systemd-logind[1788]: New session 17 of user core.
Feb 13 19:52:11.728596 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:52:11.813329 sshd[7968]: Connection closed by 139.178.89.65 port 58056
Feb 13 19:52:11.813528 sshd-session[7966]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:11.815217 systemd[1]: sshd@14-147.28.180.89:22-139.178.89.65:58056.service: Deactivated successfully.
Feb 13 19:52:11.816160 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:52:11.816869 systemd-logind[1788]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:52:11.817394 systemd-logind[1788]: Removed session 17.
Feb 13 19:52:16.860712 systemd[1]: Started sshd@15-147.28.180.89:22-139.178.89.65:55434.service - OpenSSH per-connection server daemon (139.178.89.65:55434).
Feb 13 19:52:16.897130 sshd[7991]: Accepted publickey for core from 139.178.89.65 port 55434 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:16.900569 sshd-session[7991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:16.912249 systemd-logind[1788]: New session 18 of user core.
Feb 13 19:52:16.934861 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:52:17.035074 sshd[7993]: Connection closed by 139.178.89.65 port 55434
Feb 13 19:52:17.035233 sshd-session[7991]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:17.048537 systemd[1]: sshd@15-147.28.180.89:22-139.178.89.65:55434.service: Deactivated successfully.
Feb 13 19:52:17.049608 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:52:17.050459 systemd-logind[1788]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:52:17.051362 systemd[1]: Started sshd@16-147.28.180.89:22-139.178.89.65:55444.service - OpenSSH per-connection server daemon (139.178.89.65:55444).
Feb 13 19:52:17.052023 systemd-logind[1788]: Removed session 18.
Feb 13 19:52:17.097329 sshd[8017]: Accepted publickey for core from 139.178.89.65 port 55444 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:17.098206 sshd-session[8017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:17.101666 systemd-logind[1788]: New session 19 of user core.
Feb 13 19:52:17.124747 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:52:17.297399 sshd[8019]: Connection closed by 139.178.89.65 port 55444
Feb 13 19:52:17.297748 sshd-session[8017]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:17.312768 systemd[1]: sshd@16-147.28.180.89:22-139.178.89.65:55444.service: Deactivated successfully.
Feb 13 19:52:17.313941 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:52:17.314960 systemd-logind[1788]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:52:17.316054 systemd[1]: Started sshd@17-147.28.180.89:22-139.178.89.65:55460.service - OpenSSH per-connection server daemon (139.178.89.65:55460).
Feb 13 19:52:17.316709 systemd-logind[1788]: Removed session 19.
Feb 13 19:52:17.361666 sshd[8040]: Accepted publickey for core from 139.178.89.65 port 55460 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:17.362310 sshd-session[8040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:17.365157 systemd-logind[1788]: New session 20 of user core.
Feb 13 19:52:17.387675 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:52:18.567578 sshd[8042]: Connection closed by 139.178.89.65 port 55460
Feb 13 19:52:18.568237 sshd-session[8040]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:18.585286 systemd[1]: sshd@17-147.28.180.89:22-139.178.89.65:55460.service: Deactivated successfully.
Feb 13 19:52:18.587443 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:52:18.589219 systemd-logind[1788]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:52:18.591144 systemd[1]: Started sshd@18-147.28.180.89:22-139.178.89.65:55464.service - OpenSSH per-connection server daemon (139.178.89.65:55464).
Feb 13 19:52:18.592938 systemd-logind[1788]: Removed session 20.
Feb 13 19:52:18.645717 sshd[8072]: Accepted publickey for core from 139.178.89.65 port 55464 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:18.646428 sshd-session[8072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:18.649083 systemd-logind[1788]: New session 21 of user core.
Feb 13 19:52:18.663687 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:52:18.828781 sshd[8076]: Connection closed by 139.178.89.65 port 55464
Feb 13 19:52:18.828973 sshd-session[8072]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:18.840173 systemd[1]: sshd@18-147.28.180.89:22-139.178.89.65:55464.service: Deactivated successfully.
Feb 13 19:52:18.841037 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:52:18.841759 systemd-logind[1788]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:52:18.842475 systemd[1]: Started sshd@19-147.28.180.89:22-139.178.89.65:55470.service - OpenSSH per-connection server daemon (139.178.89.65:55470).
Feb 13 19:52:18.842986 systemd-logind[1788]: Removed session 21.
Feb 13 19:52:18.881225 sshd[8099]: Accepted publickey for core from 139.178.89.65 port 55470 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:18.882048 sshd-session[8099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:18.885028 systemd-logind[1788]: New session 22 of user core.
Feb 13 19:52:18.896692 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:52:19.036029 sshd[8101]: Connection closed by 139.178.89.65 port 55470
Feb 13 19:52:19.036224 sshd-session[8099]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:19.037817 systemd[1]: sshd@19-147.28.180.89:22-139.178.89.65:55470.service: Deactivated successfully.
Feb 13 19:52:19.038814 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:52:19.039459 systemd-logind[1788]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:52:19.040053 systemd-logind[1788]: Removed session 22.
Feb 13 19:52:24.053851 systemd[1]: Started sshd@20-147.28.180.89:22-139.178.89.65:55472.service - OpenSSH per-connection server daemon (139.178.89.65:55472).
Feb 13 19:52:24.085446 sshd[8129]: Accepted publickey for core from 139.178.89.65 port 55472 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:24.086126 sshd-session[8129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:24.088436 systemd-logind[1788]: New session 23 of user core.
Feb 13 19:52:24.100578 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:52:24.184756 sshd[8131]: Connection closed by 139.178.89.65 port 55472
Feb 13 19:52:24.184921 sshd-session[8129]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:24.186545 systemd[1]: sshd@20-147.28.180.89:22-139.178.89.65:55472.service: Deactivated successfully.
Feb 13 19:52:24.187437 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:52:24.188077 systemd-logind[1788]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:52:24.188613 systemd-logind[1788]: Removed session 23.
Feb 13 19:52:29.205278 systemd[1]: Started sshd@21-147.28.180.89:22-139.178.89.65:43868.service - OpenSSH per-connection server daemon (139.178.89.65:43868).
Feb 13 19:52:29.236585 sshd[8172]: Accepted publickey for core from 139.178.89.65 port 43868 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:29.237167 sshd-session[8172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:29.239493 systemd-logind[1788]: New session 24 of user core.
Feb 13 19:52:29.255702 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:52:29.338710 sshd[8174]: Connection closed by 139.178.89.65 port 43868
Feb 13 19:52:29.339056 sshd-session[8172]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:29.340775 systemd[1]: sshd@21-147.28.180.89:22-139.178.89.65:43868.service: Deactivated successfully.
Feb 13 19:52:29.341736 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:52:29.342377 systemd-logind[1788]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:52:29.343015 systemd-logind[1788]: Removed session 24.
Feb 13 19:52:34.360219 systemd[1]: Started sshd@22-147.28.180.89:22-139.178.89.65:43874.service - OpenSSH per-connection server daemon (139.178.89.65:43874).
Feb 13 19:52:34.391172 sshd[8195]: Accepted publickey for core from 139.178.89.65 port 43874 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0
Feb 13 19:52:34.391816 sshd-session[8195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:34.394319 systemd-logind[1788]: New session 25 of user core.
Feb 13 19:52:34.405598 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:52:34.490203 sshd[8197]: Connection closed by 139.178.89.65 port 43874
Feb 13 19:52:34.490393 sshd-session[8195]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:34.492132 systemd[1]: sshd@22-147.28.180.89:22-139.178.89.65:43874.service: Deactivated successfully.
Feb 13 19:52:34.493096 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:52:34.493817 systemd-logind[1788]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:52:34.494393 systemd-logind[1788]: Removed session 25.