Jul 7 05:52:35.879546 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 05:52:35.879576 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 05:52:35.879588 kernel: KASLR enabled
Jul 7 05:52:35.879594 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jul 7 05:52:35.879600 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jul 7 05:52:35.879606 kernel: random: crng init done
Jul 7 05:52:35.879614 kernel: ACPI: Early table checksum verification disabled
Jul 7 05:52:35.879620 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jul 7 05:52:35.879627 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jul 7 05:52:35.879635 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879641 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879647 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879654 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879660 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879668 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879676 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879683 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879690 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:52:35.879697 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 7 05:52:35.879703 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jul 7 05:52:35.879710 kernel: NUMA: Failed to initialise from firmware
Jul 7 05:52:35.879717 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jul 7 05:52:35.879723 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jul 7 05:52:35.879730 kernel: Zone ranges:
Jul 7 05:52:35.879736 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 7 05:52:35.879744 kernel: DMA32 empty
Jul 7 05:52:35.879751 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jul 7 05:52:35.879758 kernel: Movable zone start for each node
Jul 7 05:52:35.879764 kernel: Early memory node ranges
Jul 7 05:52:35.879771 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jul 7 05:52:35.879777 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jul 7 05:52:35.879784 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jul 7 05:52:35.879791 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jul 7 05:52:35.879797 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jul 7 05:52:35.879804 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jul 7 05:52:35.879810 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jul 7 05:52:35.879817 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jul 7 05:52:35.879826 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jul 7 05:52:35.879833 kernel: psci: probing for conduit method from ACPI.
Jul 7 05:52:35.879840 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 05:52:35.879850 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 05:52:35.879857 kernel: psci: Trusted OS migration not required
Jul 7 05:52:35.879864 kernel: psci: SMC Calling Convention v1.1
Jul 7 05:52:35.879872 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 7 05:52:35.879880 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 05:52:35.879887 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 05:52:35.879894 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 7 05:52:35.879901 kernel: Detected PIPT I-cache on CPU0
Jul 7 05:52:35.879908 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 05:52:35.880932 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 05:52:35.880941 kernel: CPU features: detected: Spectre-v4
Jul 7 05:52:35.880948 kernel: CPU features: detected: Spectre-BHB
Jul 7 05:52:35.880955 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 05:52:35.880968 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 05:52:35.880975 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 05:52:35.880982 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 05:52:35.880989 kernel: alternatives: applying boot alternatives
Jul 7 05:52:35.880998 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:52:35.881006 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 05:52:35.881013 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 05:52:35.881020 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 05:52:35.881027 kernel: Fallback order for Node 0: 0
Jul 7 05:52:35.881034 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jul 7 05:52:35.881041 kernel: Policy zone: Normal
Jul 7 05:52:35.881050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 05:52:35.881057 kernel: software IO TLB: area num 2.
Jul 7 05:52:35.881064 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jul 7 05:52:35.881137 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
Jul 7 05:52:35.881145 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 05:52:35.881152 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 05:52:35.881159 kernel: rcu: RCU event tracing is enabled.
Jul 7 05:52:35.881167 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 05:52:35.881174 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 05:52:35.881181 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 05:52:35.881188 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 05:52:35.881199 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 05:52:35.881206 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 05:52:35.881213 kernel: GICv3: 256 SPIs implemented
Jul 7 05:52:35.881220 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 05:52:35.881227 kernel: Root IRQ handler: gic_handle_irq
Jul 7 05:52:35.881234 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 05:52:35.881241 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 7 05:52:35.881248 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 7 05:52:35.881255 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 05:52:35.881263 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 7 05:52:35.881270 kernel: GICv3: using LPI property table @0x00000001000e0000
Jul 7 05:52:35.881277 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jul 7 05:52:35.881286 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 05:52:35.881293 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:52:35.881301 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 05:52:35.881308 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 05:52:35.881315 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 05:52:35.881322 kernel: Console: colour dummy device 80x25
Jul 7 05:52:35.881330 kernel: ACPI: Core revision 20230628
Jul 7 05:52:35.881338 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 05:52:35.881345 kernel: pid_max: default: 32768 minimum: 301
Jul 7 05:52:35.881353 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 05:52:35.881362 kernel: landlock: Up and running.
Jul 7 05:52:35.881369 kernel: SELinux: Initializing.
Jul 7 05:52:35.881376 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:52:35.881384 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:52:35.881391 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 05:52:35.881399 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 05:52:35.881406 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 05:52:35.881413 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 05:52:35.881420 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 7 05:52:35.881428 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 7 05:52:35.881435 kernel: Remapping and enabling EFI services.
Jul 7 05:52:35.881443 kernel: smp: Bringing up secondary CPUs ...
Jul 7 05:52:35.881463 kernel: Detected PIPT I-cache on CPU1
Jul 7 05:52:35.881473 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 7 05:52:35.881480 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jul 7 05:52:35.881487 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:52:35.881494 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 05:52:35.881501 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 05:52:35.881508 kernel: SMP: Total of 2 processors activated.
Jul 7 05:52:35.881519 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 05:52:35.881526 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 05:52:35.881539 kernel: CPU features: detected: Common not Private translations
Jul 7 05:52:35.881548 kernel: CPU features: detected: CRC32 instructions
Jul 7 05:52:35.881556 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 7 05:52:35.881563 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 05:52:35.881570 kernel: CPU features: detected: LSE atomic instructions
Jul 7 05:52:35.881578 kernel: CPU features: detected: Privileged Access Never
Jul 7 05:52:35.881586 kernel: CPU features: detected: RAS Extension Support
Jul 7 05:52:35.881595 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 05:52:35.881602 kernel: CPU: All CPU(s) started at EL1
Jul 7 05:52:35.881610 kernel: alternatives: applying system-wide alternatives
Jul 7 05:52:35.881617 kernel: devtmpfs: initialized
Jul 7 05:52:35.881625 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 05:52:35.881633 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 05:52:35.881640 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 05:52:35.881649 kernel: SMBIOS 3.0.0 present.
Jul 7 05:52:35.881657 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jul 7 05:52:35.881665 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 05:52:35.881672 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 05:52:35.881680 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 05:52:35.881687 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 05:52:35.881695 kernel: audit: initializing netlink subsys (disabled)
Jul 7 05:52:35.881702 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Jul 7 05:52:35.881710 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 05:52:35.881719 kernel: cpuidle: using governor menu
Jul 7 05:52:35.881726 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 05:52:35.881734 kernel: ASID allocator initialised with 32768 entries
Jul 7 05:52:35.881741 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 05:52:35.881749 kernel: Serial: AMBA PL011 UART driver
Jul 7 05:52:35.881756 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 05:52:35.881764 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 05:52:35.881771 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 05:52:35.881779 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 05:52:35.881788 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 05:52:35.881795 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 05:52:35.881803 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 05:52:35.881810 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 05:52:35.881818 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 05:52:35.881825 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 05:52:35.881833 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 05:52:35.881840 kernel: ACPI: Added _OSI(Module Device)
Jul 7 05:52:35.881847 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 05:52:35.881857 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 05:52:35.881864 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 05:52:35.881872 kernel: ACPI: Interpreter enabled
Jul 7 05:52:35.881879 kernel: ACPI: Using GIC for interrupt routing
Jul 7 05:52:35.881887 kernel: ACPI: MCFG table detected, 1 entries
Jul 7 05:52:35.881894 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 05:52:35.881902 kernel: printk: console [ttyAMA0] enabled
Jul 7 05:52:35.881909 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 05:52:35.882061 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 05:52:35.883258 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 05:52:35.883330 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 05:52:35.883395 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 7 05:52:35.883477 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 7 05:52:35.883489 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 7 05:52:35.883498 kernel: PCI host bridge to bus 0000:00
Jul 7 05:52:35.883573 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 7 05:52:35.883638 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 7 05:52:35.883696 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 7 05:52:35.883754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 05:52:35.883903 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 7 05:52:35.883987 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jul 7 05:52:35.884056 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jul 7 05:52:35.885282 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 7 05:52:35.885382 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.885460 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jul 7 05:52:35.885538 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.885606 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jul 7 05:52:35.885682 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.885756 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jul 7 05:52:35.885830 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.885898 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jul 7 05:52:35.885970 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.886038 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jul 7 05:52:35.886196 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.886323 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jul 7 05:52:35.886404 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.886487 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jul 7 05:52:35.886564 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.886631 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jul 7 05:52:35.886708 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 7 05:52:35.886780 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jul 7 05:52:35.886871 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jul 7 05:52:35.886941 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jul 7 05:52:35.887018 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 7 05:52:35.890193 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jul 7 05:52:35.890293 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 05:52:35.890364 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 7 05:52:35.890463 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 7 05:52:35.890539 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jul 7 05:52:35.890664 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 7 05:52:35.890743 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jul 7 05:52:35.890813 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jul 7 05:52:35.890890 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 7 05:52:35.890959 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jul 7 05:52:35.891042 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 7 05:52:35.891203 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jul 7 05:52:35.891285 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jul 7 05:52:35.891362 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 7 05:52:35.891431 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jul 7 05:52:35.891519 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 7 05:52:35.891600 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 7 05:52:35.891669 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jul 7 05:52:35.891876 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jul 7 05:52:35.891989 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 7 05:52:35.894219 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 7 05:52:35.894308 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jul 7 05:52:35.894383 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jul 7 05:52:35.894454 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 7 05:52:35.894524 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 7 05:52:35.894589 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jul 7 05:52:35.894660 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 7 05:52:35.894745 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jul 7 05:52:35.894817 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jul 7 05:52:35.894887 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 7 05:52:35.894959 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jul 7 05:52:35.895027 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jul 7 05:52:35.895174 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 7 05:52:35.895251 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jul 7 05:52:35.895317 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jul 7 05:52:35.895447 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 7 05:52:35.895516 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jul 7 05:52:35.895588 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jul 7 05:52:35.895660 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 7 05:52:35.895725 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jul 7 05:52:35.895791 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jul 7 05:52:35.895860 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 7 05:52:35.895925 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jul 7 05:52:35.895989 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jul 7 05:52:35.896059 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 7 05:52:35.898248 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jul 7 05:52:35.898322 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jul 7 05:52:35.898392 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jul 7 05:52:35.898523 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 7 05:52:35.898592 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jul 7 05:52:35.898664 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 7 05:52:35.898750 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jul 7 05:52:35.898830 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 7 05:52:35.898901 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jul 7 05:52:35.898968 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 7 05:52:35.899038 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jul 7 05:52:35.899224 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 7 05:52:35.899298 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jul 7 05:52:35.899368 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 7 05:52:35.899434 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jul 7 05:52:35.899511 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 7 05:52:35.899579 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jul 7 05:52:35.899644 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 7 05:52:35.899708 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jul 7 05:52:35.899772 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 7 05:52:35.899846 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jul 7 05:52:35.899916 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jul 7 05:52:35.899982 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jul 7 05:52:35.900047 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 7 05:52:35.901753 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jul 7 05:52:35.901836 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 7 05:52:35.901955 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jul 7 05:52:35.902038 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 7 05:52:35.902158 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jul 7 05:52:35.902229 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 7 05:52:35.902297 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jul 7 05:52:35.902380 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 7 05:52:35.902459 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jul 7 05:52:35.902527 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 7 05:52:35.902594 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jul 7 05:52:35.902659 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 7 05:52:35.902730 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jul 7 05:52:35.902795 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 7 05:52:35.902879 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jul 7 05:52:35.902945 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jul 7 05:52:35.903015 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jul 7 05:52:35.903932 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jul 7 05:52:35.904038 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 05:52:35.904219 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jul 7 05:52:35.904300 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 7 05:52:35.904367 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 7 05:52:35.904432 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jul 7 05:52:35.904509 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 7 05:52:35.904583 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jul 7 05:52:35.904655 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 7 05:52:35.904721 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 7 05:52:35.904785 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jul 7 05:52:35.904849 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 7 05:52:35.904922 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 7 05:52:35.904990 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jul 7 05:52:35.905055 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 7 05:52:35.906203 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 7 05:52:35.906280 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jul 7 05:52:35.906347 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 7 05:52:35.906443 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 7 05:52:35.906518 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 7 05:52:35.906591 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 7 05:52:35.906657 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jul 7 05:52:35.906728 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 7 05:52:35.906811 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jul 7 05:52:35.906879 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jul 7 05:52:35.906947 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 7 05:52:35.907011 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 7 05:52:35.907234 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jul 7 05:52:35.907322 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 7 05:52:35.907415 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jul 7 05:52:35.907496 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jul 7 05:52:35.907570 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 7 05:52:35.907634 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 7 05:52:35.907697 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jul 7 05:52:35.907760 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 7 05:52:35.907832 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jul 7 05:52:35.907937 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jul 7 05:52:35.908015 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jul 7 05:52:35.908124 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 7 05:52:35.908201 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 7 05:52:35.908306 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jul 7 05:52:35.908376 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 7 05:52:35.908454 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 7 05:52:35.908548 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 7 05:52:35.908620 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jul 7 05:52:35.908695 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 7 05:52:35.908763 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 7 05:52:35.908834 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jul 7 05:52:35.908898 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jul 7 05:52:35.908966 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 7 05:52:35.909042 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 7 05:52:35.909195 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 7 05:52:35.909259 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 7 05:52:35.909330 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 7 05:52:35.909397 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jul 7 05:52:35.909466 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 7 05:52:35.909535 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jul 7 05:52:35.909595 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jul 7 05:52:35.909654 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 7 05:52:35.909721 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jul 7 05:52:35.909839 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jul 7 05:52:35.909915 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 7 05:52:35.909984 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jul 7 05:52:35.910060 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jul 7 05:52:35.910159 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 7 05:52:35.910249 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jul 7 05:52:35.910313 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jul 7 05:52:35.910377 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 7 05:52:35.910445 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jul 7 05:52:35.910506 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jul 7 05:52:35.910569 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 7 05:52:35.910638 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jul 7 05:52:35.910698 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jul 7 05:52:35.910757 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 7 05:52:35.910823 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jul 7 05:52:35.910883 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jul 7 05:52:35.910943 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 7 05:52:35.911014 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jul 7 05:52:35.911136 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jul 7 05:52:35.911213 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 7 05:52:35.911224 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 7 05:52:35.911232 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 7 05:52:35.911240 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 7 05:52:35.911248 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 7 05:52:35.911256 kernel: iommu: Default domain type: Translated
Jul 7 05:52:35.911264 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 05:52:35.911274 kernel: efivars: Registered efivars operations
Jul 7 05:52:35.911282 kernel: vgaarb: loaded
Jul 7 05:52:35.911290 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 05:52:35.911298 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 05:52:35.911306 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 05:52:35.911314 kernel: pnp: PnP ACPI init
Jul 7 05:52:35.911388 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 7 05:52:35.911400 kernel: pnp: PnP ACPI: found 1 devices
Jul 7 05:52:35.911408 kernel: NET: Registered PF_INET protocol family
Jul 7 05:52:35.911418 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 05:52:35.911427 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 05:52:35.911435 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 05:52:35.911479 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 05:52:35.911489 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 05:52:35.911497 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 05:52:35.911505 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:52:35.911513 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:52:35.911524 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 05:52:35.911622 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jul 7 05:52:35.911636 kernel: PCI: CLS 0 bytes, default 64
Jul 7 05:52:35.911645 kernel: kvm [1]: HYP mode not available
Jul 7 05:52:35.911652 kernel: Initialise system trusted keyrings
Jul 7 05:52:35.911660 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 05:52:35.911668 kernel: Key type asymmetric registered
Jul 7 05:52:35.911676 kernel: Asymmetric key parser 'x509' registered
Jul 7 05:52:35.911684 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 05:52:35.911703 kernel: io scheduler mq-deadline registered
Jul 7 05:52:35.911715 kernel: io scheduler kyber registered
Jul 7 05:52:35.911723 kernel: io scheduler bfq registered
Jul 7 05:52:35.911731 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 7 05:52:35.911820 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jul 7 05:52:35.911892 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jul 7 05:52:35.911958 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.912025 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jul 7 05:52:35.912124 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jul 7 05:52:35.912193 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.912261 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jul 7 05:52:35.912326 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jul 7 05:52:35.912392 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.912471 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jul 7 05:52:35.912544 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jul 7 05:52:35.912617 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.912690 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jul 7 05:52:35.912756 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jul 7 05:52:35.912821 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.912888 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jul 7 05:52:35.912956 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jul 7 05:52:35.913021 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.913250 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jul 7 05:52:35.913328 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jul 7 05:52:35.913393 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.913487 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jul 7 05:52:35.913563 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jul 7 05:52:35.913629 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.913640 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jul 7 05:52:35.913703 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jul 7 05:52:35.913768 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jul 7 05:52:35.913831 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 7 05:52:35.913844 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 7 05:52:35.913858 kernel: ACPI: button: Power Button [PWRB]
Jul 7 05:52:35.913870 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 05:52:35.913945 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jul 7 05:52:35.914016 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jul 7 05:52:35.914028 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 05:52:35.914036 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 7 05:52:35.914173 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jul 7 05:52:35.914188 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jul 7 05:52:35.914200 kernel: thunder_xcv, ver 1.0
Jul 7 05:52:35.914208 kernel: thunder_bgx, ver 1.0
Jul 7 05:52:35.914220 kernel: nicpf, ver 1.0
Jul 7 05:52:35.914229 kernel: nicvf, ver 1.0
Jul 7 05:52:35.914313 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 05:52:35.914382 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:52:35 UTC (1751867555)
Jul 7 05:52:35.914394 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 05:52:35.914402 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 7 05:52:35.914412 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 05:52:35.914420 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 05:52:35.914428 kernel: NET: Registered PF_INET6 protocol family
Jul 7 05:52:35.914436 kernel: Segment Routing with IPv6
Jul 7 05:52:35.914444 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 05:52:35.914451 kernel: NET: Registered PF_PACKET protocol family
Jul 7 05:52:35.914459 kernel: Key type dns_resolver registered
Jul 7 05:52:35.914467 kernel: registered taskstats version 1
Jul 7 05:52:35.914475 kernel: Loading compiled-in X.509 certificates
Jul 7 05:52:35.914484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 05:52:35.914492 kernel: Key type .fscrypt registered
Jul 7 05:52:35.914500 kernel: Key type fscrypt-provisioning registered
Jul 7 05:52:35.914508 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 05:52:35.914516 kernel: ima: Allocated hash algorithm: sha1
Jul 7 05:52:35.914524 kernel: ima: No architecture policies found
Jul 7 05:52:35.914532 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 05:52:35.914539 kernel: clk: Disabling unused clocks
Jul 7 05:52:35.914547 kernel: Freeing unused kernel memory: 39424K
Jul 7 05:52:35.914557 kernel: Run /init as init process
Jul 7 05:52:35.914565 kernel: with arguments:
Jul 7 05:52:35.914573 kernel: /init
Jul 7 05:52:35.914581 kernel: with environment:
Jul 7 05:52:35.914588 kernel: HOME=/
Jul 7 05:52:35.914596 kernel: TERM=linux
Jul 7 05:52:35.914603 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 05:52:35.914613 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:52:35.914625 systemd[1]: Detected virtualization kvm.
Jul 7 05:52:35.914634 systemd[1]: Detected architecture arm64.
Jul 7 05:52:35.914642 systemd[1]: Running in initrd.
Jul 7 05:52:35.914651 systemd[1]: No hostname configured, using default hostname.
Jul 7 05:52:35.914659 systemd[1]: Hostname set to .
Jul 7 05:52:35.914707 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 05:52:35.914718 systemd[1]: Queued start job for default target initrd.target.
Jul 7 05:52:35.914727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:52:35.914739 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:52:35.914748 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 05:52:35.914756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:52:35.914765 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 05:52:35.914773 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 05:52:35.914783 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 05:52:35.914793 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 05:52:35.914802 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:52:35.914811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:52:35.914819 systemd[1]: Reached target paths.target - Path Units.
Jul 7 05:52:35.914829 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:52:35.914838 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:52:35.914846 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 05:52:35.914854 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:52:35.914863 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:52:35.914873 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 05:52:35.914882 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 05:52:35.914890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:52:35.914899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:52:35.914907 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:52:35.914916 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 05:52:35.914924 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 05:52:35.914932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:52:35.914941 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 05:52:35.914951 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 05:52:35.914959 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:52:35.914967 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:52:35.914976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:35.914984 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 05:52:35.915027 systemd-journald[235]: Collecting audit messages is disabled.
Jul 7 05:52:35.915054 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:52:35.915063 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 05:52:35.915096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:52:35.915108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:35.915118 systemd-journald[235]: Journal started
Jul 7 05:52:35.915137 systemd-journald[235]: Runtime Journal (/run/log/journal/411234e3a7d041e581d52a0d776661ff) is 8.0M, max 76.6M, 68.6M free.
Jul 7 05:52:35.917462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:35.900409 systemd-modules-load[237]: Inserted module 'overlay'
Jul 7 05:52:35.920239 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:52:35.927127 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 05:52:35.927165 kernel: Bridge firewalling registered
Jul 7 05:52:35.926186 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:52:35.927346 systemd-modules-load[237]: Inserted module 'br_netfilter'
Jul 7 05:52:35.929219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:52:35.934377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:52:35.936234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:52:35.941653 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:52:35.942503 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:35.951783 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 05:52:35.957157 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:52:35.963406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:52:35.966287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:52:35.972230 dracut-cmdline[266]: dracut-dracut-053
Jul 7 05:52:35.972977 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:52:35.974688 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:52:36.009861 systemd-resolved[277]: Positive Trust Anchors:
Jul 7 05:52:36.010551 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:52:36.010586 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:52:36.020133 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jul 7 05:52:36.021886 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:52:36.023427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:52:36.057158 kernel: SCSI subsystem initialized
Jul 7 05:52:36.062175 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 05:52:36.071645 kernel: iscsi: registered transport (tcp)
Jul 7 05:52:36.086161 kernel: iscsi: registered transport (qla4xxx)
Jul 7 05:52:36.086235 kernel: QLogic iSCSI HBA Driver
Jul 7 05:52:36.183140 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:52:36.193398 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 05:52:36.214393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 05:52:36.214487 kernel: device-mapper: uevent: version 1.0.3
Jul 7 05:52:36.214501 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 05:52:36.267128 kernel: raid6: neonx8 gen() 15562 MB/s
Jul 7 05:52:36.284152 kernel: raid6: neonx4 gen() 15418 MB/s
Jul 7 05:52:36.301145 kernel: raid6: neonx2 gen() 12975 MB/s
Jul 7 05:52:36.318151 kernel: raid6: neonx1 gen() 10177 MB/s
Jul 7 05:52:36.335133 kernel: raid6: int64x8 gen() 6799 MB/s
Jul 7 05:52:36.352138 kernel: raid6: int64x4 gen() 7268 MB/s
Jul 7 05:52:36.369130 kernel: raid6: int64x2 gen() 6083 MB/s
Jul 7 05:52:36.386145 kernel: raid6: int64x1 gen() 4983 MB/s
Jul 7 05:52:36.386224 kernel: raid6: using algorithm neonx8 gen() 15562 MB/s
Jul 7 05:52:36.403154 kernel: raid6: .... xor() 11823 MB/s, rmw enabled
Jul 7 05:52:36.403239 kernel: raid6: using neon recovery algorithm
Jul 7 05:52:36.408119 kernel: xor: measuring software checksum speed
Jul 7 05:52:36.408185 kernel: 8regs : 19754 MB/sec
Jul 7 05:52:36.409157 kernel: 32regs : 15561 MB/sec
Jul 7 05:52:36.409191 kernel: arm64_neon : 26527 MB/sec
Jul 7 05:52:36.409210 kernel: xor: using function: arm64_neon (26527 MB/sec)
Jul 7 05:52:36.460153 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 05:52:36.474563 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:52:36.486370 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:52:36.498593 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jul 7 05:52:36.501928 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:52:36.512271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 05:52:36.534771 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jul 7 05:52:36.572883 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:52:36.581320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:52:36.650988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:52:36.656565 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 05:52:36.688284 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:52:36.689420 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:52:36.690717 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:52:36.691856 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:52:36.700319 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 05:52:36.733387 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:52:36.789109 kernel: scsi host0: Virtio SCSI HBA
Jul 7 05:52:36.789891 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:52:36.790026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:36.792169 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:36.797403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:36.806241 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 05:52:36.806293 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 7 05:52:36.806314 kernel: ACPI: bus type USB registered
Jul 7 05:52:36.798324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:36.803825 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:36.810437 kernel: usbcore: registered new interface driver usbfs
Jul 7 05:52:36.810520 kernel: usbcore: registered new interface driver hub
Jul 7 05:52:36.810573 kernel: usbcore: registered new device driver usb
Jul 7 05:52:36.814051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:36.837855 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:36.849469 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jul 7 05:52:36.849328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:36.853115 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jul 7 05:52:36.853288 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 05:52:36.855342 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jul 7 05:52:36.855505 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jul 7 05:52:36.855591 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jul 7 05:52:36.858505 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jul 7 05:52:36.858679 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jul 7 05:52:36.858767 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jul 7 05:52:36.859097 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jul 7 05:52:36.862275 kernel: hub 1-0:1.0: USB hub found
Jul 7 05:52:36.862448 kernel: hub 1-0:1.0: 4 ports detected
Jul 7 05:52:36.865135 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jul 7 05:52:36.866716 kernel: hub 2-0:1.0: USB hub found
Jul 7 05:52:36.866887 kernel: hub 2-0:1.0: 4 ports detected
Jul 7 05:52:36.872456 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jul 7 05:52:36.872629 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jul 7 05:52:36.872715 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jul 7 05:52:36.872873 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jul 7 05:52:36.872959 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 7 05:52:36.877255 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 05:52:36.877295 kernel: GPT:17805311 != 80003071
Jul 7 05:52:36.877305 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 05:52:36.877315 kernel: GPT:17805311 != 80003071
Jul 7 05:52:36.877324 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 05:52:36.879161 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:52:36.880780 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:36.882314 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jul 7 05:52:36.919236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 7 05:52:36.922146 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (508) Jul 7 05:52:36.926143 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (527) Jul 7 05:52:36.933318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 7 05:52:36.946734 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 7 05:52:36.950903 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 7 05:52:36.952165 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jul 7 05:52:36.959289 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 05:52:36.967742 disk-uuid[575]: Primary Header is updated. Jul 7 05:52:36.967742 disk-uuid[575]: Secondary Entries is updated. Jul 7 05:52:36.967742 disk-uuid[575]: Secondary Header is updated. Jul 7 05:52:36.975115 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 05:52:36.982120 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 05:52:36.988140 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 05:52:37.108132 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 7 05:52:37.244398 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jul 7 05:52:37.244471 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 7 05:52:37.244783 kernel: usbcore: registered new interface driver usbhid Jul 7 05:52:37.244821 kernel: usbhid: USB HID core driver Jul 7 05:52:37.350109 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jul 7 05:52:37.479132 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jul 7 05:52:37.533190 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jul 7 05:52:37.992121 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 05:52:37.992561 disk-uuid[576]: The operation has completed successfully. Jul 7 05:52:38.046818 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 05:52:38.046940 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 05:52:38.057404 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 05:52:38.063961 sh[594]: Success Jul 7 05:52:38.079294 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 7 05:52:38.146556 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 05:52:38.150207 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 05:52:38.152109 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 7 05:52:38.172402 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d Jul 7 05:52:38.172456 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:52:38.173110 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 05:52:38.174145 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 05:52:38.174175 kernel: BTRFS info (device dm-0): using free space tree Jul 7 05:52:38.181117 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 7 05:52:38.183943 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 05:52:38.185259 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 05:52:38.202399 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 05:52:38.207619 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 05:52:38.223098 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:52:38.223144 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:52:38.223156 kernel: BTRFS info (device sda6): using free space tree Jul 7 05:52:38.228169 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 7 05:52:38.228223 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 05:52:38.236894 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 05:52:38.240240 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:52:38.248188 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 05:52:38.255264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 05:52:38.319735 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 05:52:38.329897 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 05:52:38.366748 systemd-networkd[781]: lo: Link UP Jul 7 05:52:38.366761 systemd-networkd[781]: lo: Gained carrier Jul 7 05:52:38.367109 ignition[688]: Ignition 2.19.0 Jul 7 05:52:38.367117 ignition[688]: Stage: fetch-offline Jul 7 05:52:38.367165 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:52:38.367174 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 7 05:52:38.369035 systemd-networkd[781]: Enumeration completed Jul 7 05:52:38.367345 ignition[688]: parsed url from cmdline: "" Jul 7 05:52:38.369366 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 05:52:38.367348 ignition[688]: no config URL provided Jul 7 05:52:38.370513 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:38.367353 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 05:52:38.370516 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:52:38.367360 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jul 7 05:52:38.371398 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 7 05:52:38.367365 ignition[688]: failed to fetch config: resource requires networking Jul 7 05:52:38.371401 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:52:38.369908 ignition[688]: Ignition finished successfully Jul 7 05:52:38.372612 systemd-networkd[781]: eth0: Link UP Jul 7 05:52:38.372616 systemd-networkd[781]: eth0: Gained carrier Jul 7 05:52:38.372623 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:38.373139 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 05:52:38.374242 systemd[1]: Reached target network.target - Network. Jul 7 05:52:38.376344 systemd-networkd[781]: eth1: Link UP Jul 7 05:52:38.376348 systemd-networkd[781]: eth1: Gained carrier Jul 7 05:52:38.376355 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:38.381443 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 7 05:52:38.396760 ignition[784]: Ignition 2.19.0 Jul 7 05:52:38.396776 ignition[784]: Stage: fetch Jul 7 05:52:38.396996 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:52:38.397010 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 7 05:52:38.397184 ignition[784]: parsed url from cmdline: "" Jul 7 05:52:38.397188 ignition[784]: no config URL provided Jul 7 05:52:38.397193 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 05:52:38.397201 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jul 7 05:52:38.397222 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jul 7 05:52:38.397779 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 7 05:52:38.405346 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 05:52:38.430157 systemd-networkd[781]: eth0: DHCPv4 address 159.69.113.68/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 7 05:52:38.597993 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jul 7 05:52:38.605532 ignition[784]: GET result: OK Jul 7 05:52:38.605674 ignition[784]: parsing config with SHA512: 8303390b080c5b1617191830a7f417baec103fc4dc3bd24a1830931ff9d2ce4eceddb6c4aa31e5b7c0978e201a77b2561a6339143b7e853f20019a61398100f3 Jul 7 05:52:38.611369 unknown[784]: fetched base config from "system" Jul 7 05:52:38.611378 unknown[784]: fetched base config from "system" Jul 7 05:52:38.611792 ignition[784]: fetch: fetch complete Jul 7 05:52:38.611384 unknown[784]: fetched user config from "hetzner" Jul 7 05:52:38.611797 ignition[784]: fetch: fetch passed Jul 7 05:52:38.614874 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 05:52:38.611840 ignition[784]: Ignition finished successfully Jul 7 05:52:38.623376 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 05:52:38.638548 ignition[791]: Ignition 2.19.0 Jul 7 05:52:38.638670 ignition[791]: Stage: kargs Jul 7 05:52:38.638851 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:52:38.638861 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 7 05:52:38.639957 ignition[791]: kargs: kargs passed Jul 7 05:52:38.642479 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jul 7 05:52:38.640014 ignition[791]: Ignition finished successfully Jul 7 05:52:38.655367 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 05:52:38.671098 ignition[798]: Ignition 2.19.0 Jul 7 05:52:38.671110 ignition[798]: Stage: disks Jul 7 05:52:38.671378 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:52:38.671392 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 7 05:52:38.672610 ignition[798]: disks: disks passed Jul 7 05:52:38.672669 ignition[798]: Ignition finished successfully Jul 7 05:52:38.674300 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 05:52:38.675983 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 05:52:38.677478 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 05:52:38.678093 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 05:52:38.678600 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 05:52:38.680171 systemd[1]: Reached target basic.target - Basic System. Jul 7 05:52:38.696422 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 05:52:38.716470 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 7 05:52:38.720163 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 05:52:38.727242 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 05:52:38.778099 kernel: EXT4-fs (sda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none. Jul 7 05:52:38.779269 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 05:52:38.781569 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 05:52:38.788280 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 05:52:38.792236 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 05:52:38.794518 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 05:52:38.797957 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 05:52:38.799202 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 05:52:38.802496 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 05:52:38.810750 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814) Jul 7 05:52:38.812512 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:52:38.812554 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:52:38.812375 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 05:52:38.816513 kernel: BTRFS info (device sda6): using free space tree Jul 7 05:52:38.831268 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 7 05:52:38.831340 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 05:52:38.836992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 05:52:38.873433 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 05:52:38.878475 coreos-metadata[816]: Jul 07 05:52:38.878 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jul 7 05:52:38.879948 coreos-metadata[816]: Jul 07 05:52:38.878 INFO Fetch successful Jul 7 05:52:38.879948 coreos-metadata[816]: Jul 07 05:52:38.879 INFO wrote hostname ci-4081-3-4-0-cfa01fffc0 to /sysroot/etc/hostname Jul 7 05:52:38.882393 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 05:52:38.885579 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jul 7 05:52:38.889897 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 05:52:38.894204 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 05:52:39.002562 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 05:52:39.012283 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 05:52:39.016411 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 05:52:39.028121 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:52:39.056304 ignition[931]: INFO : Ignition 2.19.0 Jul 7 05:52:39.056304 ignition[931]: INFO : Stage: mount Jul 7 05:52:39.059134 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:52:39.059134 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 7 05:52:39.059134 ignition[931]: INFO : mount: mount passed Jul 7 05:52:39.059134 ignition[931]: INFO : Ignition finished successfully Jul 7 05:52:39.060479 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 05:52:39.061953 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 05:52:39.068220 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 05:52:39.172802 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 05:52:39.180351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 05:52:39.193108 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Jul 7 05:52:39.194305 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:52:39.194347 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:52:39.194368 kernel: BTRFS info (device sda6): using free space tree Jul 7 05:52:39.199123 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 7 05:52:39.199186 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 05:52:39.202639 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 05:52:39.227477 ignition[960]: INFO : Ignition 2.19.0 Jul 7 05:52:39.227477 ignition[960]: INFO : Stage: files Jul 7 05:52:39.228752 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:52:39.228752 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 7 05:52:39.230783 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jul 7 05:52:39.230783 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 05:52:39.230783 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 05:52:39.234094 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 05:52:39.235052 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 05:52:39.235987 unknown[960]: wrote ssh authorized keys file for user: core Jul 7 05:52:39.236847 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 05:52:39.239124 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 7 05:52:39.239124 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 7 05:52:39.242848 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 7 05:52:39.242848 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 7 05:52:39.615349 systemd-networkd[781]: eth0: Gained IPv6LL Jul 7 05:52:40.191332 systemd-networkd[781]: eth1: Gained IPv6LL Jul 7 05:52:40.316695 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 05:52:41.789793 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 7 05:52:41.789793 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 7 05:52:41.793310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 7 05:52:42.546892 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 05:52:43.566124 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 7 05:52:43.566124 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 05:52:43.569417 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 05:52:43.569417 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 05:52:43.569417 ignition[960]: INFO : 
files: files passed Jul 7 05:52:43.569417 ignition[960]: INFO : Ignition finished successfully Jul 7 05:52:43.573598 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 05:52:43.584257 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 05:52:43.588238 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 05:52:43.590301 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 05:52:43.592093 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 05:52:43.612505 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:52:43.612505 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:52:43.616251 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:52:43.617602 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 05:52:43.618963 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 05:52:43.629330 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 05:52:43.665354 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 05:52:43.665594 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 05:52:43.668616 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 05:52:43.669539 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 05:52:43.670518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 05:52:43.671878 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 05:52:43.689051 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 05:52:43.703459 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 05:52:43.717677 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:52:43.718452 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:52:43.719697 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 05:52:43.720767 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 05:52:43.720890 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 05:52:43.722412 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 05:52:43.723026 systemd[1]: Stopped target basic.target - Basic System. Jul 7 05:52:43.724165 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 05:52:43.725245 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 05:52:43.726268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 05:52:43.727345 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 05:52:43.728425 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 05:52:43.729594 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 05:52:43.730633 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 05:52:43.731691 systemd[1]: Stopped target swap.target - Swaps. 
Jul 7 05:52:43.732560 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 05:52:43.732678 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 05:52:43.733944 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:52:43.734662 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:52:43.735729 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 05:52:43.736183 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:52:43.736849 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 05:52:43.736959 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 05:52:43.738541 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 05:52:43.738673 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 05:52:43.739748 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 05:52:43.739842 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 05:52:43.740911 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 05:52:43.741054 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 05:52:43.751337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 05:52:43.757449 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 05:52:43.762136 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 05:52:43.762876 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:52:43.764607 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 05:52:43.764752 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 05:52:43.771026 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 05:52:43.773203 ignition[1012]: INFO : Ignition 2.19.0 Jul 7 05:52:43.773203 ignition[1012]: INFO : Stage: umount Jul 7 05:52:43.773203 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:52:43.773203 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 7 05:52:43.776382 ignition[1012]: INFO : umount: umount passed Jul 7 05:52:43.776382 ignition[1012]: INFO : Ignition finished successfully Jul 7 05:52:43.775253 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 05:52:43.776228 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 05:52:43.777616 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 05:52:43.780653 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 05:52:43.780744 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 05:52:43.782860 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 05:52:43.782912 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 05:52:43.783659 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 05:52:43.783700 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 05:52:43.785519 systemd[1]: Stopped target network.target - Network. Jul 7 05:52:43.786061 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 05:52:43.786141 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 7 05:52:43.786825 systemd[1]: Stopped target paths.target - Path Units. Jul 7 05:52:43.787924 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 05:52:43.792126 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:52:43.793425 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 05:52:43.794749 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 05:52:43.797013 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 05:52:43.797060 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:52:43.799357 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 05:52:43.799394 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:52:43.800807 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 05:52:43.800857 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 05:52:43.802180 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 05:52:43.802221 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 05:52:43.803148 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 05:52:43.804452 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 05:52:43.806418 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 05:52:43.806922 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 05:52:43.807025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 05:52:43.808515 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 05:52:43.808600 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 05:52:43.811873 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 05:52:43.812027 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 05:52:43.813119 systemd-networkd[781]: eth0: DHCPv6 lease lost Jul 7 05:52:43.814382 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 05:52:43.814428 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:52:43.815558 systemd-networkd[781]: eth1: DHCPv6 lease lost Jul 7 05:52:43.817583 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 05:52:43.817711 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 05:52:43.819461 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 05:52:43.819699 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:52:43.825301 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 05:52:43.825773 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 05:52:43.825827 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 05:52:43.828474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 05:52:43.828519 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:52:43.829724 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 05:52:43.829760 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 05:52:43.831269 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:52:43.840327 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 7 05:52:43.846428 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:52:43.849474 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 05:52:43.849517 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 05:52:43.850697 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 05:52:43.850730 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:52:43.853165 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 05:52:43.853288 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:52:43.854596 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 05:52:43.854642 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 05:52:43.856211 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:52:43.856261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:52:43.863266 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 05:52:43.865508 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 05:52:43.865607 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:52:43.868958 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 05:52:43.869029 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:52:43.870763 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 05:52:43.870812 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:52:43.872050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 05:52:43.872106 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:52:43.874600 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 05:52:43.874790 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 05:52:43.876569 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 05:52:43.876713 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 05:52:43.878826 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 05:52:43.886456 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 05:52:43.897695 systemd[1]: Switching root. Jul 7 05:52:43.932429 systemd-journald[235]: Journal stopped Jul 7 05:52:44.886345 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). Jul 7 05:52:44.886421 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 05:52:44.886433 kernel: SELinux: policy capability open_perms=1 Jul 7 05:52:44.886443 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 05:52:44.886453 kernel: SELinux: policy capability always_check_network=0 Jul 7 05:52:44.887246 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 05:52:44.887263 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 05:52:44.887280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 05:52:44.887294 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 05:52:44.887304 kernel: audit: type=1403 audit(1751867564.144:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 05:52:44.887315 systemd[1]: Successfully loaded SELinux policy in 35.367ms. 
Jul 7 05:52:44.887339 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.022ms. Jul 7 05:52:44.887351 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:52:44.887362 systemd[1]: Detected virtualization kvm. Jul 7 05:52:44.887379 systemd[1]: Detected architecture arm64. Jul 7 05:52:44.887389 systemd[1]: Detected first boot. Jul 7 05:52:44.887402 systemd[1]: Hostname set to <ci-4081-3-4-0-cfa01fffc0>. Jul 7 05:52:44.887412 systemd[1]: Initializing machine ID from VM UUID. Jul 7 05:52:44.887423 zram_generator::config[1072]: No configuration found. Jul 7 05:52:44.887436 systemd[1]: Populated /etc with preset unit settings. Jul 7 05:52:44.887446 systemd[1]: Queued start job for default target multi-user.target. Jul 7 05:52:44.887456 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 7 05:52:44.887472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 05:52:44.887482 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 05:52:44.887493 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 05:52:44.887505 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 05:52:44.887515 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 05:52:44.887530 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 05:52:44.887540 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 05:52:44.887550 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 05:52:44.887561 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:52:44.887571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:52:44.887582 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 05:52:44.887594 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 05:52:44.887605 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 05:52:44.887615 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 05:52:44.887626 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 05:52:44.887637 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:52:44.887647 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 05:52:44.887657 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:52:44.887669 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 05:52:44.887682 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:52:44.887692 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:52:44.887703 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 05:52:44.887713 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 05:52:44.887724 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 05:52:44.887734 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 05:52:44.887745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:52:44.887756 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 05:52:44.887768 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:52:44.887778 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 05:52:44.887789 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 05:52:44.887799 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 05:52:44.887810 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 05:52:44.887821 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 05:52:44.887835 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 05:52:44.887847 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 05:52:44.887858 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 05:52:44.887869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:52:44.887879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 05:52:44.887890 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 05:52:44.887901 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:52:44.887913 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:52:44.887923 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:52:44.887936 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 05:52:44.887946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:52:44.887958 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 05:52:44.887981 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 7 05:52:44.887995 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 7 05:52:44.888006 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:52:44.888017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:52:44.888029 kernel: ACPI: bus type drm_connector registered Jul 7 05:52:44.888042 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 05:52:44.888053 kernel: fuse: init (API version 7.39) Jul 7 05:52:44.888063 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 05:52:44.888104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 05:52:44.888117 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 05:52:44.888127 kernel: loop: module loaded Jul 7 05:52:44.888137 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 05:52:44.888148 systemd[1]: Mounted media.mount - External Media Directory. 
Jul 7 05:52:44.888186 systemd-journald[1156]: Collecting audit messages is disabled. Jul 7 05:52:44.888215 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 05:52:44.888227 systemd-journald[1156]: Journal started Jul 7 05:52:44.888251 systemd-journald[1156]: Runtime Journal (/run/log/journal/411234e3a7d041e581d52a0d776661ff) is 8.0M, max 76.6M, 68.6M free. Jul 7 05:52:44.891550 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:52:44.894480 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 05:52:44.895514 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 05:52:44.898701 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:52:44.899793 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 05:52:44.899982 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 05:52:44.901416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:52:44.901597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:52:44.903605 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 05:52:44.903783 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:52:44.904916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:52:44.905362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:52:44.906582 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 05:52:44.906723 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 05:52:44.907589 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:52:44.910281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:52:44.911459 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:52:44.913957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 05:52:44.915817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 05:52:44.923736 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 05:52:44.933788 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 05:52:44.940197 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 05:52:44.943219 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 05:52:44.946178 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 05:52:44.948143 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 05:52:44.958322 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 05:52:44.959927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:52:44.964338 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 05:52:44.968266 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:52:44.980671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 7 05:52:44.988298 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 05:52:44.990824 systemd-journald[1156]: Time spent on flushing to /var/log/journal/411234e3a7d041e581d52a0d776661ff is 42.783ms for 1114 entries. Jul 7 05:52:44.990824 systemd-journald[1156]: System Journal (/var/log/journal/411234e3a7d041e581d52a0d776661ff) is 8.0M, max 584.8M, 576.8M free. Jul 7 05:52:45.045805 systemd-journald[1156]: Received client request to flush runtime journal. Jul 7 05:52:44.992493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:52:44.995380 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 05:52:44.996341 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 05:52:45.004220 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 05:52:45.016489 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 05:52:45.018411 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 05:52:45.026316 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 05:52:45.043747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:52:45.049057 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Jul 7 05:52:45.049480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 05:52:45.050149 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Jul 7 05:52:45.055126 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:52:45.067261 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 05:52:45.096306 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 05:52:45.103380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 05:52:45.125059 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jul 7 05:52:45.125096 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jul 7 05:52:45.129636 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:52:45.537875 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 05:52:45.546239 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:52:45.570218 systemd-udevd[1236]: Using default interface naming scheme 'v255'. Jul 7 05:52:45.594402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:52:45.607234 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 05:52:45.627299 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 05:52:45.675930 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 05:52:45.706567 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jul 7 05:52:45.750162 systemd-networkd[1244]: lo: Link UP Jul 7 05:52:45.750171 systemd-networkd[1244]: lo: Gained carrier Jul 7 05:52:45.752844 systemd-networkd[1244]: Enumeration completed Jul 7 05:52:45.752997 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 7 05:52:45.755355 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:45.755363 systemd-networkd[1244]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:52:45.756791 systemd-networkd[1244]: eth1: Link UP Jul 7 05:52:45.756881 systemd-networkd[1244]: eth1: Gained carrier Jul 7 05:52:45.756946 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:45.770159 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 05:52:45.771466 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 05:52:45.793872 systemd-networkd[1244]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 05:52:45.795827 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:45.795923 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:52:45.796734 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:45.796854 systemd-networkd[1244]: eth0: Link UP Jul 7 05:52:45.796897 systemd-networkd[1244]: eth0: Gained carrier Jul 7 05:52:45.797012 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:45.834118 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1251) Jul 7 05:52:45.849187 systemd-networkd[1244]: eth0: DHCPv4 address 159.69.113.68/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 7 05:52:45.882099 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jul 7 05:52:45.882166 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 7 05:52:45.882179 kernel: [drm] features: -context_init Jul 7 05:52:45.884096 kernel: [drm] number of scanouts: 1 Jul 7 05:52:45.884188 kernel: [drm] number of cap sets: 0 Jul 7 05:52:45.888378 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jul 7 05:52:45.888897 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Jul 7 05:52:45.889103 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:52:45.892146 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jul 7 05:52:45.893266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:52:45.897267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:52:45.903026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:52:45.907139 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 05:52:45.907191 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 05:52:45.915021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 7 05:52:45.915203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:52:45.921088 kernel: Console: switching to colour frame buffer device 160x50 Jul 7 05:52:45.922606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:52:45.922817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:52:45.937197 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 7 05:52:45.938495 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:52:45.938728 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:52:45.945768 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 7 05:52:45.950594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:52:45.950914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:52:45.958552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:52:46.023845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:52:46.101149 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 05:52:46.115371 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 05:52:46.134399 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 05:52:46.160991 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 05:52:46.162663 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:52:46.167409 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 05:52:46.173807 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 05:52:46.199467 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 05:52:46.201453 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 05:52:46.203483 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 05:52:46.203621 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 05:52:46.204307 systemd[1]: Reached target machines.target - Containers. Jul 7 05:52:46.206421 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 05:52:46.212332 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 05:52:46.216333 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 05:52:46.219349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:52:46.222278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 05:52:46.226161 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 05:52:46.232540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 7 05:52:46.236272 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 05:52:46.247765 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 05:52:46.262478 kernel: loop0: detected capacity change from 0 to 114328 Jul 7 05:52:46.265051 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 05:52:46.267422 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 05:52:46.285293 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 05:52:46.309181 kernel: loop1: detected capacity change from 0 to 203944 Jul 7 05:52:46.348202 kernel: loop2: detected capacity change from 0 to 8 Jul 7 05:52:46.367108 kernel: loop3: detected capacity change from 0 to 114432 Jul 7 05:52:46.409615 kernel: loop4: detected capacity change from 0 to 114328 Jul 7 05:52:46.424367 kernel: loop5: detected capacity change from 0 to 203944 Jul 7 05:52:46.439334 kernel: loop6: detected capacity change from 0 to 8 Jul 7 05:52:46.442445 kernel: loop7: detected capacity change from 0 to 114432 Jul 7 05:52:46.459710 (sd-merge)[1329]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jul 7 05:52:46.460643 (sd-merge)[1329]: Merged extensions into '/usr'. Jul 7 05:52:46.475385 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 05:52:46.475572 systemd[1]: Reloading... Jul 7 05:52:46.559093 zram_generator::config[1357]: No configuration found. Jul 7 05:52:46.666028 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 05:52:46.677418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:52:46.739465 systemd[1]: Reloading finished in 263 ms. Jul 7 05:52:46.756913 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 05:52:46.759855 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 05:52:46.768395 systemd[1]: Starting ensure-sysext.service... Jul 7 05:52:46.772245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 05:52:46.778290 systemd[1]: Reloading requested from client PID 1401 ('systemctl') (unit ensure-sysext.service)... Jul 7 05:52:46.778420 systemd[1]: Reloading... Jul 7 05:52:46.810674 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 05:52:46.810967 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 05:52:46.811702 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 05:52:46.811925 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Jul 7 05:52:46.811994 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Jul 7 05:52:46.815621 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:52:46.815775 systemd-tmpfiles[1402]: Skipping /boot Jul 7 05:52:46.825460 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot. 
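The (sd-merge) lines show systemd-sysext overlaying four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) onto /usr; the loop0-loop7 capacity changes are those images being attached. The merge state can be inspected or redone with the systemd-sysext tool (a sketch using its standard verbs, not output from this host):

    systemd-sysext status    # list hierarchies and the extensions merged into them
    systemd-sysext refresh   # unmerge, rescan /var/lib/extensions etc., and re-merge

Each image identifies itself via usr/lib/extension-release.d/extension-release.<NAME> inside the image.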
Jul 7 05:52:46.825578 systemd-tmpfiles[1402]: Skipping /boot Jul 7 05:52:46.847260 systemd-networkd[1244]: eth0: Gained IPv6LL Jul 7 05:52:46.866098 zram_generator::config[1430]: No configuration found. Jul 7 05:52:46.976189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:52:47.038540 systemd[1]: Reloading finished in 259 ms. Jul 7 05:52:47.057931 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 05:52:47.059143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:52:47.091986 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 05:52:47.097332 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 05:52:47.104408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 05:52:47.111500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 05:52:47.116108 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 05:52:47.123801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:52:47.130507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:52:47.134116 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:52:47.138119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:52:47.141244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:52:47.151334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:52:47.151502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:52:47.158229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:52:47.176515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:52:47.181362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:52:47.184631 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 05:52:47.191762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:52:47.192005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:52:47.195233 augenrules[1509]: No rules Jul 7 05:52:47.195238 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:52:47.195429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:52:47.197301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:52:47.197724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:52:47.199891 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:52:47.212899 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 05:52:47.217825 systemd[1]: Finished ensure-sysext.service. 
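The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings mean two tmpfiles.d fragments declare the same path; the line read first wins and the later duplicate is dropped, so they are benign. Each line uses the Type Path Mode User Group Age Argument layout, e.g. (illustrative, not quoted from this host's fragments):

    d /var/log/journal 2755 root systemd-journal - -

"Skipping /boot" is likewise deliberate: /boot was detected as an autofs mount point, which tmpfiles leaves untouched.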
Jul 7 05:52:47.223267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:52:47.226412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:52:47.229001 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:52:47.237245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:52:47.247214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:52:47.247855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:52:47.251205 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 05:52:47.256371 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 05:52:47.257715 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 05:52:47.261425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:52:47.261579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:52:47.263543 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 05:52:47.263716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:52:47.265466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:52:47.265612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:52:47.271902 systemd-resolved[1487]: Positive Trust Anchors: Jul 7 05:52:47.272113 systemd-resolved[1487]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:52:47.272147 systemd-resolved[1487]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:52:47.274763 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:52:47.275790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:52:47.277018 systemd-resolved[1487]: Using system hostname 'ci-4081-3-4-0-cfa01fffc0'. Jul 7 05:52:47.280019 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 05:52:47.285240 systemd[1]: Reached target network.target - Network. Jul 7 05:52:47.286311 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 05:52:47.286915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:52:47.287578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:52:47.287637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:52:47.287665 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
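The "Positive Trust Anchors" block is the DNSSEC root trust anchor that systemd-resolved ships built in (key tag 20326, the root KSK introduced in 2017); the negative anchors are private- and special-use zones exempted from validation. How strictly resolved validates is set in resolved.conf (a sketch of the knob, not this host's configuration):

    [Resolve]
    DNSSEC=allow-downgrade   # opportunistic validation; 'yes' enforces it, 'no' disables it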
Jul 7 05:52:47.295716 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 05:52:47.335313 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 05:52:47.337434 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 05:52:47.338232 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 05:52:47.338955 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 05:52:47.339780 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 05:52:47.340540 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 05:52:47.340576 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:52:47.341151 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 05:52:47.341980 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 05:52:47.342770 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 05:52:47.343482 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:52:47.346116 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 05:52:47.348339 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 05:52:47.350423 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 05:52:47.354246 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 05:52:47.355016 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:52:47.355688 systemd[1]: Reached target basic.target - Basic System. Jul 7 05:52:47.356512 systemd[1]: System is tainted: cgroupsv1 Jul 7 05:52:47.356565 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:52:47.356592 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:52:47.360193 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 05:52:47.362150 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 05:52:47.368018 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 05:52:47.371309 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 05:52:47.379141 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 05:52:47.383756 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 05:52:47.390489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:52:47.397015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 05:52:47.399022 jq[1550]: false Jul 7 05:52:47.401270 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 05:52:47.412202 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 05:52:47.417695 coreos-metadata[1548]: Jul 07 05:52:47.417 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jul 7 05:52:47.425675 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jul 7 05:52:47.427679 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
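coreos-metadata pulls instance data from Hetzner's link-local metadata service; the same endpoints it logs here can be queried by hand from the instance:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks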
Jul 7 05:52:47.429993 dbus-daemon[1549]: [system] SELinux support is enabled Jul 7 05:52:47.434277 coreos-metadata[1548]: Jul 07 05:52:47.433 INFO Fetch successful Jul 7 05:52:47.434277 coreos-metadata[1548]: Jul 07 05:52:47.433 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jul 7 05:52:47.436712 coreos-metadata[1548]: Jul 07 05:52:47.435 INFO Fetch successful Jul 7 05:52:47.437031 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 05:52:47.450376 extend-filesystems[1554]: Found loop4 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found loop5 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found loop6 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found loop7 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found sda Jul 7 05:52:47.457181 extend-filesystems[1554]: Found sda1 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found sda2 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found sda3 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found usr Jul 7 05:52:47.457181 extend-filesystems[1554]: Found sda4 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found sda6 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found sda7 Jul 7 05:52:47.457181 extend-filesystems[1554]: Found sda9 Jul 7 05:52:47.457181 extend-filesystems[1554]: Checking size of /dev/sda9 Jul 7 05:52:47.453998 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 05:52:47.455508 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 05:52:47.466481 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 05:52:47.478174 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 05:52:47.479451 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 05:52:47.487473 systemd-timesyncd[1533]: Contacted time server 85.215.229.230:123 (0.flatcar.pool.ntp.org). Jul 7 05:52:47.487594 systemd-timesyncd[1533]: Initial clock synchronization to Mon 2025-07-07 05:52:47.774715 UTC. Jul 7 05:52:47.493801 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 05:52:47.494061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 05:52:47.496358 extend-filesystems[1554]: Resized partition /dev/sda9 Jul 7 05:52:47.501507 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 05:52:47.505062 extend-filesystems[1595]: resize2fs 1.47.1 (20-May-2024) Jul 7 05:52:47.506366 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 05:52:47.513303 jq[1584]: true Jul 7 05:52:47.522227 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jul 7 05:52:47.506620 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 05:52:47.525307 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 05:52:47.525518 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 7 05:52:47.569092 jq[1598]: true Jul 7 05:52:47.586262 update_engine[1578]: I20250707 05:52:47.580592 1578 main.cc:92] Flatcar Update Engine starting Jul 7 05:52:47.593198 update_engine[1578]: I20250707 05:52:47.592342 1578 update_check_scheduler.cc:74] Next update check in 10m18s Jul 7 05:52:47.593445 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 05:52:47.601126 systemd[1]: Started update-engine.service - Update Engine. Jul 7 05:52:47.603731 tar[1594]: linux-arm64/helm Jul 7 05:52:47.602439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 05:52:47.602481 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 05:52:47.605490 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 05:52:47.605509 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 05:52:47.606667 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 05:52:47.613248 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 05:52:47.623048 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1255) Jul 7 05:52:47.672102 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 05:52:47.673520 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 05:52:47.695021 systemd-logind[1570]: New seat seat0. Jul 7 05:52:47.701287 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jul 7 05:52:47.710583 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 05:52:47.710608 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jul 7 05:52:47.710945 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 05:52:47.712619 extend-filesystems[1595]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 7 05:52:47.712619 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 5 Jul 7 05:52:47.712619 extend-filesystems[1595]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jul 7 05:52:47.719509 extend-filesystems[1554]: Resized filesystem in /dev/sda9 Jul 7 05:52:47.719509 extend-filesystems[1554]: Found sr0 Jul 7 05:52:47.723249 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 05:52:47.723480 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 05:52:47.740497 bash[1643]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:52:47.743236 systemd-networkd[1244]: eth1: Gained IPv6LL Jul 7 05:52:47.747973 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 05:52:47.756687 systemd[1]: Starting sshkeys.service... Jul 7 05:52:47.792739 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 05:52:47.802336 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
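The root filesystem is grown online from 1617920 to 9393147 blocks of 4 KiB, i.e. 9393147 × 4096 ≈ 38.5 GB (about 35.8 GiB), while / stays mounted. The manual equivalent is a single call:

    resize2fs /dev/sda9   # online-grow the mounted ext4 filesystem to fill the partition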
Jul 7 05:52:47.862192 coreos-metadata[1654]: Jul 07 05:52:47.862 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jul 7 05:52:47.866450 coreos-metadata[1654]: Jul 07 05:52:47.866 INFO Fetch successful Jul 7 05:52:47.872520 unknown[1654]: wrote ssh authorized keys file for user: core Jul 7 05:52:47.908095 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:52:47.910260 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 05:52:47.920460 systemd[1]: Finished sshkeys.service. Jul 7 05:52:47.980420 containerd[1607]: time="2025-07-07T05:52:47.978480440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 05:52:47.985658 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 05:52:48.058741 containerd[1607]: time="2025-07-07T05:52:48.058132264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:48.061778 containerd[1607]: time="2025-07-07T05:52:48.061730142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.061906082Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.061937367Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062199207Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062225105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062294263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062307232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062538450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062555688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062569403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062579389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062653976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:48.063556 containerd[1607]: time="2025-07-07T05:52:48.062844668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:48.064739 containerd[1607]: time="2025-07-07T05:52:48.064683179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:48.064885 containerd[1607]: time="2025-07-07T05:52:48.064868609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 05:52:48.065439 containerd[1607]: time="2025-07-07T05:52:48.065416155Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 05:52:48.065670 containerd[1607]: time="2025-07-07T05:52:48.065594789Z" level=info msg="metadata content store policy set" policy=shared Jul 7 05:52:48.070219 containerd[1607]: time="2025-07-07T05:52:48.070191418Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 05:52:48.070505 containerd[1607]: time="2025-07-07T05:52:48.070487857Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 05:52:48.070631 containerd[1607]: time="2025-07-07T05:52:48.070616726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 05:52:48.070755 containerd[1607]: time="2025-07-07T05:52:48.070738177Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 05:52:48.071668 containerd[1607]: time="2025-07-07T05:52:48.070903634Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 05:52:48.071668 containerd[1607]: time="2025-07-07T05:52:48.071055790Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 05:52:48.072008 containerd[1607]: time="2025-07-07T05:52:48.071985011Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072244323Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072267693Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072281533Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072298025Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072311865Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072324876Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072338633Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072352805Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072366686Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072380194Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072392377Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072412308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072427142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073418 containerd[1607]: time="2025-07-07T05:52:48.072490665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072514077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072527254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072548635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072561688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072574948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072588166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072602752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072614768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072628691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072641371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072657075Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072678622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072691675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.073778 containerd[1607]: time="2025-07-07T05:52:48.072703733Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 05:52:48.074040 containerd[1607]: time="2025-07-07T05:52:48.072816483Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 05:52:48.074040 containerd[1607]: time="2025-07-07T05:52:48.072834176Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 05:52:48.074040 containerd[1607]: time="2025-07-07T05:52:48.072845903Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 05:52:48.074040 containerd[1607]: time="2025-07-07T05:52:48.072858334Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 05:52:48.074040 containerd[1607]: time="2025-07-07T05:52:48.072868030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 05:52:48.074040 containerd[1607]: time="2025-07-07T05:52:48.072880005Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 05:52:48.074040 containerd[1607]: time="2025-07-07T05:52:48.072890199Z" level=info msg="NRI interface is disabled by configuration." Jul 7 05:52:48.074040 containerd[1607]: time="2025-07-07T05:52:48.072901967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 05:52:48.077468 containerd[1607]: time="2025-07-07T05:52:48.075528772Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 05:52:48.077468 containerd[1607]: time="2025-07-07T05:52:48.075643013Z" level=info msg="Connect containerd service" Jul 7 05:52:48.077468 containerd[1607]: time="2025-07-07T05:52:48.075681260Z" level=info msg="using legacy CRI server" Jul 7 05:52:48.077468 containerd[1607]: time="2025-07-07T05:52:48.075688262Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 05:52:48.077468 containerd[1607]: time="2025-07-07T05:52:48.075787379Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 05:52:48.077468 containerd[1607]: time="2025-07-07T05:52:48.076496529Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:52:48.079004 
containerd[1607]: time="2025-07-07T05:52:48.078960487Z" level=info msg="Start subscribing containerd event" Jul 7 05:52:48.079128 containerd[1607]: time="2025-07-07T05:52:48.079104853Z" level=info msg="Start recovering state" Jul 7 05:52:48.080495 containerd[1607]: time="2025-07-07T05:52:48.079254606Z" level=info msg="Start event monitor" Jul 7 05:52:48.080495 containerd[1607]: time="2025-07-07T05:52:48.080137583Z" level=info msg="Start snapshots syncer" Jul 7 05:52:48.080495 containerd[1607]: time="2025-07-07T05:52:48.080152004Z" level=info msg="Start cni network conf syncer for default" Jul 7 05:52:48.080495 containerd[1607]: time="2025-07-07T05:52:48.080160788Z" level=info msg="Start streaming server" Jul 7 05:52:48.080781 containerd[1607]: time="2025-07-07T05:52:48.080681691Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 05:52:48.080958 containerd[1607]: time="2025-07-07T05:52:48.080891237Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 05:52:48.081127 containerd[1607]: time="2025-07-07T05:52:48.081113669Z" level=info msg="containerd successfully booted in 0.105145s" Jul 7 05:52:48.081237 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 05:52:48.531852 tar[1594]: linux-arm64/LICENSE Jul 7 05:52:48.533141 tar[1594]: linux-arm64/README.md Jul 7 05:52:48.553768 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 05:52:48.702638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:52:48.712690 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:52:49.237584 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 05:52:49.261819 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 05:52:49.274582 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 05:52:49.286797 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 05:52:49.287219 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 05:52:49.290488 kubelet[1687]: E0707 05:52:49.290457 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:52:49.297636 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 05:52:49.298809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:52:49.298999 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:52:49.307883 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 05:52:49.314399 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 05:52:49.320594 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 05:52:49.323303 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 05:52:49.325372 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 05:52:49.326323 systemd[1]: Startup finished in 9.261s (kernel) + 5.216s (userspace) = 14.478s. Jul 7 05:52:59.357190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 05:52:59.365404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
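The CRI configuration dumped above shows the effective runtime settings: the overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:false, and registry.k8s.io/pause:3.8 as the sandbox image. In containerd 1.7's config.toml schema the same fragment would read roughly as follows (reconstructed from the dump, not the file itself):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false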
Jul 7 05:52:59.502479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:52:59.502838 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:52:59.553865 kubelet[1731]: E0707 05:52:59.553804 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:52:59.557279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:52:59.557431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:09.606175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 05:53:09.623580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:09.760332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:09.761426 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:09.813020 kubelet[1751]: E0707 05:53:09.812927 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:09.817745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:09.818366 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:19.855689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 05:53:19.863281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:19.993364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:20.003749 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:20.049265 kubelet[1771]: E0707 05:53:20.049186 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:20.051567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:20.051782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:26.966322 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 05:53:26.973566 systemd[1]: Started sshd@0-159.69.113.68:22-147.75.109.163:39302.service - OpenSSH per-connection server daemon (147.75.109.163:39302). Jul 7 05:53:27.981913 sshd[1779]: Accepted publickey for core from 147.75.109.163 port 39302 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g Jul 7 05:53:27.985285 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:27.999212 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
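Every kubelet attempt fails the same way: /var/lib/kubelet/config.yaml is absent. On a kubeadm-provisioned node that file is only written by kubeadm init or kubeadm join, so these failures are expected until the node is bootstrapped. For orientation, a KubeletConfiguration file has this shape (a minimal illustrative sketch, not the file kubeadm would generate here):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # example field only; it must match the runtime's cgroup driver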
Jul 7 05:53:28.000466 systemd-logind[1570]: New session 1 of user core. Jul 7 05:53:28.005470 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 05:53:28.019211 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 05:53:28.029495 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 05:53:28.033967 (systemd)[1785]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 05:53:28.142027 systemd[1785]: Queued start job for default target default.target. Jul 7 05:53:28.142471 systemd[1785]: Created slice app.slice - User Application Slice. Jul 7 05:53:28.142490 systemd[1785]: Reached target paths.target - Paths. Jul 7 05:53:28.142503 systemd[1785]: Reached target timers.target - Timers. Jul 7 05:53:28.149234 systemd[1785]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 05:53:28.159236 systemd[1785]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 05:53:28.159304 systemd[1785]: Reached target sockets.target - Sockets. Jul 7 05:53:28.159318 systemd[1785]: Reached target basic.target - Basic System. Jul 7 05:53:28.159372 systemd[1785]: Reached target default.target - Main User Target. Jul 7 05:53:28.159401 systemd[1785]: Startup finished in 119ms. Jul 7 05:53:28.159482 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 05:53:28.163514 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 05:53:28.867501 systemd[1]: Started sshd@1-159.69.113.68:22-147.75.109.163:39312.service - OpenSSH per-connection server daemon (147.75.109.163:39312). Jul 7 05:53:29.858118 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 39312 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g Jul 7 05:53:29.860685 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:29.866462 systemd-logind[1570]: New session 2 of user core. Jul 7 05:53:29.874633 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 05:53:30.105791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 05:53:30.115422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:30.233287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:30.238039 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:30.284285 kubelet[1813]: E0707 05:53:30.284211 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:30.288642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:30.288857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:30.550440 sshd[1797]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:30.555760 systemd[1]: sshd@1-159.69.113.68:22-147.75.109.163:39312.service: Deactivated successfully. Jul 7 05:53:30.559539 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 05:53:30.560580 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Jul 7 05:53:30.561682 systemd-logind[1570]: Removed session 2. 
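kubelet.service is started again by systemd roughly every ten seconds (the restart counters advance 1, 2, 3, 4 at ~10 s intervals, consistent with a unit using Restart=always and RestartSec=10). The loop can be watched with the standard tools:

    systemctl status kubelet.service      # restart counter and last exit status
    journalctl -u kubelet.service -n 50   # recent attempts, including the config.yaml error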
Jul 7 05:53:30.718531 systemd[1]: Started sshd@2-159.69.113.68:22-147.75.109.163:39324.service - OpenSSH per-connection server daemon (147.75.109.163:39324). Jul 7 05:53:31.704196 sshd[1825]: Accepted publickey for core from 147.75.109.163 port 39324 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g Jul 7 05:53:31.706273 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:31.713437 systemd-logind[1570]: New session 3 of user core. Jul 7 05:53:31.723251 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 05:53:32.387975 sshd[1825]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:32.393426 systemd[1]: sshd@2-159.69.113.68:22-147.75.109.163:39324.service: Deactivated successfully. Jul 7 05:53:32.396975 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Jul 7 05:53:32.397647 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 05:53:32.399259 systemd-logind[1570]: Removed session 3. Jul 7 05:53:32.559562 systemd[1]: Started sshd@3-159.69.113.68:22-147.75.109.163:39332.service - OpenSSH per-connection server daemon (147.75.109.163:39332). Jul 7 05:53:32.849237 update_engine[1578]: I20250707 05:53:32.848971 1578 update_attempter.cc:509] Updating boot flags... Jul 7 05:53:32.903095 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1844) Jul 7 05:53:32.971236 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1844) Jul 7 05:53:33.039090 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1844) Jul 7 05:53:33.561617 sshd[1833]: Accepted publickey for core from 147.75.109.163 port 39332 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g Jul 7 05:53:33.563759 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:33.570986 systemd-logind[1570]: New session 4 of user core. Jul 7 05:53:33.578509 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 05:53:34.259440 sshd[1833]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:34.263853 systemd[1]: sshd@3-159.69.113.68:22-147.75.109.163:39332.service: Deactivated successfully. Jul 7 05:53:34.267496 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 05:53:34.268352 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Jul 7 05:53:34.269289 systemd-logind[1570]: Removed session 4. Jul 7 05:53:34.430518 systemd[1]: Started sshd@4-159.69.113.68:22-147.75.109.163:39340.service - OpenSSH per-connection server daemon (147.75.109.163:39340). Jul 7 05:53:35.432498 sshd[1862]: Accepted publickey for core from 147.75.109.163 port 39340 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g Jul 7 05:53:35.435226 sshd[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:35.441899 systemd-logind[1570]: New session 5 of user core. Jul 7 05:53:35.452534 systemd[1]: Started session-5.scope - Session 5 of User core. 
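The "Updating boot flags" step is update_engine marking the currently booted /usr slot as good in the GPT A/B attributes, and the earlier locksmithd line (strategy="reboot") names the agent that will reboot into a new slot once an update lands. Both are conventionally tuned on Flatcar via update.conf and inspected with cgpt (a sketch; neither the file contents nor the cgpt output are taken from this log):

    # /etc/flatcar/update.conf
    REBOOT_STRATEGY=reboot   # alternatives include etcd-lock, best-effort and off

    cgpt show /dev/sda       # per-partition priority/tries/successful attributes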
Jul 7 05:53:35.976684 sudo[1866]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 05:53:35.976992 sudo[1866]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:35.992513 sudo[1866]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:36.156615 sshd[1862]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:36.162189 systemd[1]: sshd@4-159.69.113.68:22-147.75.109.163:39340.service: Deactivated successfully. Jul 7 05:53:36.166780 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 05:53:36.167717 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Jul 7 05:53:36.168879 systemd-logind[1570]: Removed session 5. Jul 7 05:53:36.325618 systemd[1]: Started sshd@5-159.69.113.68:22-147.75.109.163:54436.service - OpenSSH per-connection server daemon (147.75.109.163:54436). Jul 7 05:53:37.320104 sshd[1871]: Accepted publickey for core from 147.75.109.163 port 54436 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g Jul 7 05:53:37.322326 sshd[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:37.327838 systemd-logind[1570]: New session 6 of user core. Jul 7 05:53:37.343547 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 05:53:37.851115 sudo[1876]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 05:53:37.851770 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:37.855959 sudo[1876]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:37.861203 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 05:53:37.861501 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:37.884840 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 05:53:37.887400 auditctl[1879]: No rules Jul 7 05:53:37.889148 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 05:53:37.889482 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 05:53:37.897607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 05:53:37.925773 augenrules[1898]: No rules Jul 7 05:53:37.927643 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:53:37.931345 sudo[1875]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:38.094444 sshd[1871]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:38.099602 systemd[1]: sshd@5-159.69.113.68:22-147.75.109.163:54436.service: Deactivated successfully. Jul 7 05:53:38.104393 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 05:53:38.105549 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Jul 7 05:53:38.106783 systemd-logind[1570]: Removed session 6. Jul 7 05:53:38.266491 systemd[1]: Started sshd@6-159.69.113.68:22-147.75.109.163:54438.service - OpenSSH per-connection server daemon (147.75.109.163:54438). Jul 7 05:53:39.258364 sshd[1907]: Accepted publickey for core from 147.75.109.163 port 54438 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g Jul 7 05:53:39.260535 sshd[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:39.266273 systemd-logind[1570]: New session 7 of user core. 
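This session clears the audit configuration: the shipped rule files are removed, audit-rules.service is restarted, and auditctl/augenrules then both report "No rules". Rules normally live as fragments under /etc/audit/rules.d/ and are compiled and loaded with:

    auditctl -l         # list loaded rules (here: 'No rules')
    augenrules --load   # merge /etc/audit/rules.d/*.rules and load the result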
Jul 7 05:53:39.268409 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 05:53:39.791500 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 05:53:39.792098 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:40.104604 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 05:53:40.105867 (dockerd)[1926]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 05:53:40.355604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 7 05:53:40.358250 dockerd[1926]: time="2025-07-07T05:53:40.358196566Z" level=info msg="Starting up" Jul 7 05:53:40.364928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:40.499358 dockerd[1926]: time="2025-07-07T05:53:40.499316217Z" level=info msg="Loading containers: start." Jul 7 05:53:40.538783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:40.546477 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:40.599970 kubelet[1962]: E0707 05:53:40.599902 1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:40.603327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:40.603703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:40.630090 kernel: Initializing XFRM netlink socket Jul 7 05:53:40.709567 systemd-networkd[1244]: docker0: Link UP Jul 7 05:53:40.734530 dockerd[1926]: time="2025-07-07T05:53:40.734435430Z" level=info msg="Loading containers: done." Jul 7 05:53:40.754174 dockerd[1926]: time="2025-07-07T05:53:40.753965115Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 05:53:40.754174 dockerd[1926]: time="2025-07-07T05:53:40.754099868Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 05:53:40.754430 dockerd[1926]: time="2025-07-07T05:53:40.754236782Z" level=info msg="Daemon has completed initialization" Jul 7 05:53:40.795466 dockerd[1926]: time="2025-07-07T05:53:40.795295009Z" level=info msg="API listen on /run/docker.sock" Jul 7 05:53:40.796344 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 05:53:41.855938 containerd[1607]: time="2025-07-07T05:53:41.855869504Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 05:53:42.587439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684858530.mount: Deactivated successfully. 
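Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket (note the earlier socket-unit warning that rewrote the legacy /var/run/docker.sock path). A quick smoke test:

    curl -s --unix-socket /run/docker.sock http://localhost/version   # daemon version as JSON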
Jul 7 05:53:43.795758 containerd[1607]: time="2025-07-07T05:53:43.794550788Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651885" Jul 7 05:53:43.795758 containerd[1607]: time="2025-07-07T05:53:43.795654068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:43.798570 containerd[1607]: time="2025-07-07T05:53:43.798515492Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:43.800204 containerd[1607]: time="2025-07-07T05:53:43.799904675Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.943977758s" Jul 7 05:53:43.800204 containerd[1607]: time="2025-07-07T05:53:43.799952125Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 7 05:53:43.801500 containerd[1607]: time="2025-07-07T05:53:43.801462735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:43.801694 containerd[1607]: time="2025-07-07T05:53:43.801675261Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 05:53:45.232112 containerd[1607]: time="2025-07-07T05:53:45.230703370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:45.233090 containerd[1607]: time="2025-07-07T05:53:45.233032758Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459697" Jul 7 05:53:45.234589 containerd[1607]: time="2025-07-07T05:53:45.234556784Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:45.239151 containerd[1607]: time="2025-07-07T05:53:45.239094537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:45.240503 containerd[1607]: time="2025-07-07T05:53:45.240468053Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.438704693s" Jul 7 05:53:45.240618 containerd[1607]: time="2025-07-07T05:53:45.240600720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 7 05:53:45.241254 
containerd[1607]: time="2025-07-07T05:53:45.241212363Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 05:53:46.590435 containerd[1607]: time="2025-07-07T05:53:46.590364564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:46.593316 containerd[1607]: time="2025-07-07T05:53:46.593233479Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125086" Jul 7 05:53:46.593622 containerd[1607]: time="2025-07-07T05:53:46.593559582Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:46.600365 containerd[1607]: time="2025-07-07T05:53:46.600289443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:46.602576 containerd[1607]: time="2025-07-07T05:53:46.602114436Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.360764566s" Jul 7 05:53:46.602576 containerd[1607]: time="2025-07-07T05:53:46.602162125Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 7 05:53:46.602739 containerd[1607]: time="2025-07-07T05:53:46.602719273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 05:53:47.723479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889843804.mount: Deactivated successfully. 
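The containerd records report both "bytes read" and the wall-clock pull time, so effective registry throughput can be recovered straight from the journal. A quick worked example using the three pulls completed so far (all figures copied from the lines above); this is back-of-the-envelope arithmetic, not anything containerd computes itself.

```python
# (image, bytes read while pulling, seconds) copied from the log above.
pulls = [
    ("kube-apiserver:v1.31.10",          25_651_885, 1.943977758),
    ("kube-controller-manager:v1.31.10", 22_459_697, 1.438704693),
    ("kube-scheduler:v1.31.10",          17_125_086, 1.360764566),
]

for image, nbytes, secs in pulls:
    mib_per_s = nbytes / secs / (1024 * 1024)
    print(f"{image:35s} {mib_per_s:6.1f} MiB/s")
# Works out to roughly 12-15 MiB/s per image from registry.k8s.io.
```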
Jul 7 05:53:48.048237 containerd[1607]: time="2025-07-07T05:53:48.047499963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:48.049544 containerd[1607]: time="2025-07-07T05:53:48.049491680Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915983" Jul 7 05:53:48.050996 containerd[1607]: time="2025-07-07T05:53:48.050895172Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:48.053562 containerd[1607]: time="2025-07-07T05:53:48.053505240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:48.054516 containerd[1607]: time="2025-07-07T05:53:48.054352552Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.451600193s" Jul 7 05:53:48.054516 containerd[1607]: time="2025-07-07T05:53:48.054407762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 7 05:53:48.055307 containerd[1607]: time="2025-07-07T05:53:48.055128491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 05:53:48.664998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774452314.mount: Deactivated successfully. 
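Each completed pull is summarized in a single "Pulled image ... in <duration>" record, which makes this journal easy to mine. Below is a sketch of a parser for that record, using a shortened copy of the kube-proxy line above as a fixture; the regex targets the fields as they appear here and may need adjusting for other containerd versions or for sub-second "ms" durations.

```python
import re

# A (shortened) containerd "Pulled image" record, as in the log above.
line = ('msg="Pulled image \\"registry.k8s.io/kube-proxy:v1.31.10\\" with image id '
        '\\"sha256:dde5ff0d\\", repo tag \\"registry.k8s.io/kube-proxy:v1.31.10\\", '
        'size \\"26914976\\" in 1.451600193s"')

# Matches the escaped-quote message format shown in this journal.
PULLED = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\"'
    r'.*?size \\"(?P<size>\d+)\\" in (?P<secs>[\d.]+)s'
)

m = PULLED.search(line)
if m:
    print(m.group("image"), int(m.group("size")), float(m.group("secs")))
    # -> registry.k8s.io/kube-proxy:v1.31.10 26914976 1.451600193
```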
Jul 7 05:53:49.408459 containerd[1607]: time="2025-07-07T05:53:49.408364161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:49.410187 containerd[1607]: time="2025-07-07T05:53:49.410134027Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jul 7 05:53:49.411425 containerd[1607]: time="2025-07-07T05:53:49.411362319Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:49.415680 containerd[1607]: time="2025-07-07T05:53:49.415628497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:49.417881 containerd[1607]: time="2025-07-07T05:53:49.417687453Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.362519315s" Jul 7 05:53:49.417881 containerd[1607]: time="2025-07-07T05:53:49.417746544Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 05:53:49.418455 containerd[1607]: time="2025-07-07T05:53:49.418428061Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 05:53:49.943307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239281823.mount: Deactivated successfully. 
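Note that containerd records each image twice: once under its repo tag (registry.k8s.io/coredns/coredns:v1.11.3) and once under its repo digest (the @sha256:... form). A small sketch of splitting a reference into registry, repository, tag, and digest; it is simplified relative to the full OCI reference grammar, so treat it as illustrative only.

```python
def parse_ref(ref: str):
    """Split an image reference into (registry, repository, tag, digest).

    Simplified: assumes a registry host is present, as it is in every
    registry.k8s.io reference in the log above.
    """
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    head, _, last = ref.rpartition("/")
    tag = None
    if ":" in last:                      # ':' after the last '/' is a tag
        last, _, tag = last.partition(":")
    registry, _, repo_head = head.partition("/")
    repository = "/".join(p for p in (repo_head, last) if p)
    return registry, repository, tag, digest

print(parse_ref("registry.k8s.io/coredns/coredns:v1.11.3"))
# ('registry.k8s.io', 'coredns/coredns', 'v1.11.3', None)
print(parse_ref("registry.k8s.io/pause@sha256:ee6521f2"))   # digest shortened
# ('registry.k8s.io', 'pause', None, 'sha256:ee6521f2')
```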
Jul 7 05:53:49.952098 containerd[1607]: time="2025-07-07T05:53:49.951989065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:49.953213 containerd[1607]: time="2025-07-07T05:53:49.953165988Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jul 7 05:53:49.954040 containerd[1607]: time="2025-07-07T05:53:49.953979729Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:49.957135 containerd[1607]: time="2025-07-07T05:53:49.957089467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:49.958325 containerd[1607]: time="2025-07-07T05:53:49.957925252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 539.304157ms" Jul 7 05:53:49.958325 containerd[1607]: time="2025-07-07T05:53:49.958315839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 05:53:49.959164 containerd[1607]: time="2025-07-07T05:53:49.959106416Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 05:53:50.541676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348366148.mount: Deactivated successfully. Jul 7 05:53:50.605510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 7 05:53:50.615917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:50.749261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:50.754472 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:50.809963 kubelet[2235]: E0707 05:53:50.809574 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:50.814522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:50.814687 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
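The kubelet failure here (restart counter 6) repeats the earlier one verbatim; only the systemd restart counter changes between attempts. When skimming a journal like this, collapsing consecutive duplicate messages makes the real state transitions stand out. A tiny sketch of that kind of dedupe, over an illustrative list standing in for a stream of journal messages:

```python
from itertools import groupby

def collapse(messages):
    """Collapse runs of identical messages into (count, message) pairs."""
    for msg, run in groupby(messages):
        yield sum(1 for _ in run), msg

journal = [
    "kubelet.service: Main process exited, code=exited, status=1/FAILURE",
    "kubelet.service: Main process exited, code=exited, status=1/FAILURE",
    "Started docker.service - Docker Application Container Engine.",
]
for count, msg in collapse(journal):
    prefix = f"[x{count}] " if count > 1 else ""
    print(prefix + msg)
```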
Jul 7 05:53:52.614580 containerd[1607]: time="2025-07-07T05:53:52.614334610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:52.617082 containerd[1607]: time="2025-07-07T05:53:52.617011148Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" Jul 7 05:53:52.618324 containerd[1607]: time="2025-07-07T05:53:52.618232899Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:52.624202 containerd[1607]: time="2025-07-07T05:53:52.624145622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:52.625841 containerd[1607]: time="2025-07-07T05:53:52.625386616Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.666244154s" Jul 7 05:53:52.625841 containerd[1607]: time="2025-07-07T05:53:52.625438984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 7 05:53:57.815458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:57.826442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:57.872224 systemd[1]: Reloading requested from client PID 2314 ('systemctl') (unit session-7.scope)... Jul 7 05:53:57.872372 systemd[1]: Reloading... Jul 7 05:53:57.981118 zram_generator::config[2355]: No configuration found. Jul 7 05:53:58.083250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:53:58.151473 systemd[1]: Reloading finished in 278 ms. Jul 7 05:53:58.221508 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:53:58.223961 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:58.227415 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 05:53:58.228239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:58.234444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:58.365513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:58.378799 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:53:58.425414 kubelet[2418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:58.425414 kubelet[2418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
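These deprecation warnings (plus the --volume-plugin-dir one that follows below) are the kubelet asking for the flags to move into its KubeletConfiguration file; --pod-infra-container-image has no config replacement, since, per the same message, the sandbox image now comes from the CRI runtime. A sketch of the equivalent config stanza follows; the field names are the kubelet.config.k8s.io/v1beta1 ones, the socket value is a placeholder, and the plugin directory is the flexvolume path this kubelet reports further down.

```python
# Map the deprecated kubelet flags from the warnings to their
# KubeletConfiguration (kubelet.config.k8s.io/v1beta1) fields.
FLAG_TO_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    # --pod-infra-container-image: no config field; the sandbox image
    # is reported by the CRI runtime instead (see the warning above).
}

config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # Placeholder endpoint; the node's actual value is not in the log.
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # Flexvolume directory the kubelet logs later in this journal.
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}

for key, value in config.items():
    print(f"{key}: {value}")
```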
Jul 7 05:53:58.425414 kubelet[2418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:58.425880 kubelet[2418]: I0707 05:53:58.425449 2418 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:53:59.616595 kubelet[2418]: I0707 05:53:59.616542 2418 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:53:59.616595 kubelet[2418]: I0707 05:53:59.616583 2418 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:53:59.617317 kubelet[2418]: I0707 05:53:59.617281 2418 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:53:59.650100 kubelet[2418]: E0707 05:53:59.649649 2418 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://159.69.113.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.69.113.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:59.652431 kubelet[2418]: I0707 05:53:59.652340 2418 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:53:59.660913 kubelet[2418]: E0707 05:53:59.660878 2418 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:53:59.661059 kubelet[2418]: I0707 05:53:59.661049 2418 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:53:59.665903 kubelet[2418]: I0707 05:53:59.665871 2418 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 05:53:59.666567 kubelet[2418]: I0707 05:53:59.666544 2418 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:53:59.667950 kubelet[2418]: I0707 05:53:59.666823 2418 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:53:59.667950 kubelet[2418]: I0707 05:53:59.666862 2418 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-0-cfa01fffc0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 05:53:59.667950 kubelet[2418]: I0707 05:53:59.667283 2418 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:53:59.667950 kubelet[2418]: I0707 05:53:59.667302 2418 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:53:59.668149 kubelet[2418]: I0707 05:53:59.667750 2418 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:59.671019 kubelet[2418]: I0707 05:53:59.670995 2418 kubelet.go:408] "Attempting to sync node with API server" Jul 7 05:53:59.671130 kubelet[2418]: I0707 05:53:59.671118 2418 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:53:59.671235 kubelet[2418]: I0707 05:53:59.671222 2418 kubelet.go:314] "Adding apiserver pod source" Jul 7 05:53:59.671364 kubelet[2418]: I0707 05:53:59.671354 2418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:53:59.678747 kubelet[2418]: W0707 05:53:59.678672 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.113.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-0-cfa01fffc0&limit=500&resourceVersion=0": dial tcp 159.69.113.68:6443: connect: connection refused Jul 7 05:53:59.678817 kubelet[2418]: E0707 05:53:59.678745 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://159.69.113.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-0-cfa01fffc0&limit=500&resourceVersion=0\": dial tcp 159.69.113.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:59.679320 kubelet[2418]: W0707 05:53:59.679270 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.69.113.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.69.113.68:6443: connect: connection refused Jul 7 05:53:59.679396 kubelet[2418]: E0707 05:53:59.679329 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.69.113.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.113.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:59.679652 kubelet[2418]: I0707 05:53:59.679599 2418 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:53:59.680519 kubelet[2418]: I0707 05:53:59.680493 2418 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 05:53:59.680689 kubelet[2418]: W0707 05:53:59.680671 2418 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 05:53:59.682826 kubelet[2418]: I0707 05:53:59.682803 2418 server.go:1274] "Started kubelet" Jul 7 05:53:59.683090 kubelet[2418]: I0707 05:53:59.683054 2418 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:53:59.689778 kubelet[2418]: I0707 05:53:59.689701 2418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:53:59.690098 kubelet[2418]: I0707 05:53:59.690046 2418 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:53:59.692766 kubelet[2418]: I0707 05:53:59.692745 2418 server.go:449] "Adding debug handlers to kubelet server" Jul 7 05:53:59.695686 kubelet[2418]: E0707 05:53:59.693299 2418 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.113.68:6443/api/v1/namespaces/default/events\": dial tcp 159.69.113.68:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-0-cfa01fffc0.184fe25624cb5293 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-0-cfa01fffc0,UID:ci-4081-3-4-0-cfa01fffc0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-0-cfa01fffc0,},FirstTimestamp:2025-07-07 05:53:59.682781843 +0000 UTC m=+1.298499843,LastTimestamp:2025-07-07 05:53:59.682781843 +0000 UTC m=+1.298499843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-0-cfa01fffc0,}" Jul 7 05:53:59.696035 kubelet[2418]: I0707 05:53:59.696004 2418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:53:59.697668 kubelet[2418]: I0707 05:53:59.697634 2418 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:53:59.699209 kubelet[2418]: I0707 05:53:59.699185 2418 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 05:53:59.699511 kubelet[2418]: E0707 05:53:59.699480 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:53:59.700939 kubelet[2418]: I0707 05:53:59.700902 2418 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 05:53:59.701008 kubelet[2418]: I0707 05:53:59.700958 2418 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:53:59.701815 kubelet[2418]: I0707 05:53:59.701782 2418 factory.go:221] Registration of the systemd container factory successfully Jul 7 05:53:59.701899 kubelet[2418]: I0707 05:53:59.701875 2418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:53:59.702331 kubelet[2418]: W0707 05:53:59.702288 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.113.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.113.68:6443: connect: connection refused Jul 7 05:53:59.702398 kubelet[2418]: E0707 05:53:59.702338 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.69.113.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.113.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:59.703109 kubelet[2418]: E0707 05:53:59.702522 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.113.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-cfa01fffc0?timeout=10s\": dial tcp 159.69.113.68:6443: connect: connection refused" interval="200ms" Jul 7 05:53:59.704662 kubelet[2418]: I0707 05:53:59.704622 2418 factory.go:221] Registration of the containerd container factory successfully Jul 7 05:53:59.713798 kubelet[2418]: I0707 05:53:59.713753 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 05:53:59.714800 kubelet[2418]: I0707 05:53:59.714779 2418 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 05:53:59.714890 kubelet[2418]: I0707 05:53:59.714880 2418 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:53:59.714952 kubelet[2418]: I0707 05:53:59.714944 2418 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:53:59.715044 kubelet[2418]: E0707 05:53:59.715027 2418 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:53:59.721894 kubelet[2418]: W0707 05:53:59.721350 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.69.113.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.113.68:6443: connect: connection refused Jul 7 05:53:59.721894 kubelet[2418]: E0707 05:53:59.721409 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.69.113.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.69.113.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:59.732908 kubelet[2418]: E0707 05:53:59.732767 2418 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:53:59.741653 kubelet[2418]: I0707 05:53:59.741614 2418 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:53:59.741653 kubelet[2418]: I0707 05:53:59.741643 2418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:53:59.741807 kubelet[2418]: I0707 05:53:59.741671 2418 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:59.744364 kubelet[2418]: I0707 05:53:59.744323 2418 policy_none.go:49] "None policy: Start" Jul 7 05:53:59.745597 kubelet[2418]: I0707 05:53:59.745577 2418 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:53:59.745976 kubelet[2418]: I0707 05:53:59.745822 2418 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:53:59.752818 kubelet[2418]: I0707 05:53:59.752789 2418 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:53:59.754123 kubelet[2418]: I0707 05:53:59.753163 2418 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:53:59.754123 kubelet[2418]: I0707 05:53:59.753185 2418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:53:59.754851 kubelet[2418]: I0707 05:53:59.754830 2418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:53:59.756635 kubelet[2418]: E0707 05:53:59.756585 2418 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:53:59.855770 kubelet[2418]: I0707 05:53:59.855675 2418 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:53:59.856369 kubelet[2418]: E0707 05:53:59.856328 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.69.113.68:6443/api/v1/nodes\": dial tcp 159.69.113.68:6443: connect: connection refused" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:53:59.904196 kubelet[2418]: E0707 05:53:59.904005 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://159.69.113.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-cfa01fffc0?timeout=10s\": dial tcp 159.69.113.68:6443: connect: connection refused" interval="400ms" Jul 7 05:54:00.002188 kubelet[2418]: I0707 05:54:00.001843 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06a85450d4ea2ba6abea410b93b77d99-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-0-cfa01fffc0\" (UID: \"06a85450d4ea2ba6abea410b93b77d99\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.002188 kubelet[2418]: I0707 05:54:00.001927 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.002188 kubelet[2418]: I0707 05:54:00.001975 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.002188 kubelet[2418]: I0707 05:54:00.002013 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.002188 kubelet[2418]: I0707 05:54:00.002057 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.002667 kubelet[2418]: I0707 05:54:00.002141 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e71dd0d7ca76e5bec94d83d2f69de6a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-0-cfa01fffc0\" (UID: \"0e71dd0d7ca76e5bec94d83d2f69de6a\") " pod="kube-system/kube-scheduler-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.002667 kubelet[2418]: I0707 05:54:00.002178 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06a85450d4ea2ba6abea410b93b77d99-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-0-cfa01fffc0\" (UID: \"06a85450d4ea2ba6abea410b93b77d99\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.002667 kubelet[2418]: I0707 05:54:00.002215 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06a85450d4ea2ba6abea410b93b77d99-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-0-cfa01fffc0\" (UID: 
\"06a85450d4ea2ba6abea410b93b77d99\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.002667 kubelet[2418]: I0707 05:54:00.002283 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.060197 kubelet[2418]: I0707 05:54:00.059414 2418 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.060197 kubelet[2418]: E0707 05:54:00.060053 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.69.113.68:6443/api/v1/nodes\": dial tcp 159.69.113.68:6443: connect: connection refused" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.126325 containerd[1607]: time="2025-07-07T05:54:00.126226225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-0-cfa01fffc0,Uid:06a85450d4ea2ba6abea410b93b77d99,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:00.130118 containerd[1607]: time="2025-07-07T05:54:00.130035340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-0-cfa01fffc0,Uid:4468d39f30d56a90c6ec0af705fe358e,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:00.133034 containerd[1607]: time="2025-07-07T05:54:00.132760239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-0-cfa01fffc0,Uid:0e71dd0d7ca76e5bec94d83d2f69de6a,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:00.304717 kubelet[2418]: E0707 05:54:00.304540 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.113.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-cfa01fffc0?timeout=10s\": dial tcp 159.69.113.68:6443: connect: connection refused" interval="800ms" Jul 7 05:54:00.463607 kubelet[2418]: I0707 05:54:00.463559 2418 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.464615 kubelet[2418]: E0707 05:54:00.464561 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.69.113.68:6443/api/v1/nodes\": dial tcp 159.69.113.68:6443: connect: connection refused" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:00.506659 kubelet[2418]: W0707 05:54:00.506513 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.113.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.113.68:6443: connect: connection refused Jul 7 05:54:00.506659 kubelet[2418]: E0707 05:54:00.506616 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.69.113.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.113.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:54:00.630361 kubelet[2418]: W0707 05:54:00.630169 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.69.113.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.69.113.68:6443: connect: connection refused Jul 7 05:54:00.630361 kubelet[2418]: E0707 
05:54:00.630293 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.69.113.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.113.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:54:00.698788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431006821.mount: Deactivated successfully. Jul 7 05:54:00.705660 containerd[1607]: time="2025-07-07T05:54:00.704412023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:00.707155 containerd[1607]: time="2025-07-07T05:54:00.707118319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jul 7 05:54:00.711743 containerd[1607]: time="2025-07-07T05:54:00.711699170Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:00.713782 containerd[1607]: time="2025-07-07T05:54:00.713744504Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:54:00.715336 containerd[1607]: time="2025-07-07T05:54:00.715292897Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:00.716239 containerd[1607]: time="2025-07-07T05:54:00.716212651Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:00.717148 containerd[1607]: time="2025-07-07T05:54:00.717096201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:54:00.720170 containerd[1607]: time="2025-07-07T05:54:00.720135259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:00.721762 containerd[1607]: time="2025-07-07T05:54:00.721715616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.560342ms" Jul 7 05:54:00.723288 containerd[1607]: time="2025-07-07T05:54:00.723247367Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.860761ms" Jul 7 05:54:00.725096 containerd[1607]: time="2025-07-07T05:54:00.725043950Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo 
digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.208102ms" Jul 7 05:54:00.734594 kubelet[2418]: W0707 05:54:00.734499 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.113.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-0-cfa01fffc0&limit=500&resourceVersion=0": dial tcp 159.69.113.68:6443: connect: connection refused Jul 7 05:54:00.734885 kubelet[2418]: E0707 05:54:00.734784 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.69.113.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-0-cfa01fffc0&limit=500&resourceVersion=0\": dial tcp 159.69.113.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:54:00.849802 containerd[1607]: time="2025-07-07T05:54:00.849650418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:00.849802 containerd[1607]: time="2025-07-07T05:54:00.849710706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:00.849802 containerd[1607]: time="2025-07-07T05:54:00.849726347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:00.850086 containerd[1607]: time="2025-07-07T05:54:00.849811118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:00.853320 containerd[1607]: time="2025-07-07T05:54:00.853147773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:00.853649 containerd[1607]: time="2025-07-07T05:54:00.853540062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:00.853768 containerd[1607]: time="2025-07-07T05:54:00.853744448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:00.854110 containerd[1607]: time="2025-07-07T05:54:00.854060247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:00.856085 containerd[1607]: time="2025-07-07T05:54:00.855969925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:00.856332 containerd[1607]: time="2025-07-07T05:54:00.856176750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:00.856332 containerd[1607]: time="2025-07-07T05:54:00.856199393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:00.856484 containerd[1607]: time="2025-07-07T05:54:00.856444624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:00.932465 containerd[1607]: time="2025-07-07T05:54:00.932221534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-0-cfa01fffc0,Uid:06a85450d4ea2ba6abea410b93b77d99,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d2998d47f02096128a0199e3320159f54a609c34688f94b704302e507ebdd74\"" Jul 7 05:54:00.935897 containerd[1607]: time="2025-07-07T05:54:00.935772976Z" level=info msg="CreateContainer within sandbox \"2d2998d47f02096128a0199e3320159f54a609c34688f94b704302e507ebdd74\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 05:54:00.945154 containerd[1607]: time="2025-07-07T05:54:00.944974401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-0-cfa01fffc0,Uid:4468d39f30d56a90c6ec0af705fe358e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fb2deafd53670f8b1d531dad1f9043993744a60dd93509970d6a5c91d826e5c\"" Jul 7 05:54:00.945846 containerd[1607]: time="2025-07-07T05:54:00.945563155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-0-cfa01fffc0,Uid:0e71dd0d7ca76e5bec94d83d2f69de6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c48f27bd136a1dbb8f0feef2adad87b6af8ffd1b0673b25bd71ee9bcb71f360\"" Jul 7 05:54:00.950434 containerd[1607]: time="2025-07-07T05:54:00.950384515Z" level=info msg="CreateContainer within sandbox \"7c48f27bd136a1dbb8f0feef2adad87b6af8ffd1b0673b25bd71ee9bcb71f360\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 05:54:00.950772 containerd[1607]: time="2025-07-07T05:54:00.950480007Z" level=info msg="CreateContainer within sandbox \"2fb2deafd53670f8b1d531dad1f9043993744a60dd93509970d6a5c91d826e5c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 05:54:00.953588 containerd[1607]: time="2025-07-07T05:54:00.953525666Z" level=info msg="CreateContainer within sandbox \"2d2998d47f02096128a0199e3320159f54a609c34688f94b704302e507ebdd74\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"033ce9db59e1adc8560beeee083eb12492028ca7e90d66157a4d8686af505e04\"" Jul 7 05:54:00.954039 containerd[1607]: time="2025-07-07T05:54:00.954009766Z" level=info msg="StartContainer for \"033ce9db59e1adc8560beeee083eb12492028ca7e90d66157a4d8686af505e04\"" Jul 7 05:54:00.971840 containerd[1607]: time="2025-07-07T05:54:00.971638880Z" level=info msg="CreateContainer within sandbox \"7c48f27bd136a1dbb8f0feef2adad87b6af8ffd1b0673b25bd71ee9bcb71f360\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c5bdc75753922f22c8b44ae09326552d2f183581458ff6a1917a58f97d8e46f3\"" Jul 7 05:54:00.972648 containerd[1607]: time="2025-07-07T05:54:00.972518469Z" level=info msg="StartContainer for \"c5bdc75753922f22c8b44ae09326552d2f183581458ff6a1917a58f97d8e46f3\"" Jul 7 05:54:00.975945 containerd[1607]: time="2025-07-07T05:54:00.975830722Z" level=info msg="CreateContainer within sandbox \"2fb2deafd53670f8b1d531dad1f9043993744a60dd93509970d6a5c91d826e5c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b593dfcaa1d4d94d1816808d3582f85b4f475c2b2fd758157104513a24434a2a\"" Jul 7 05:54:00.979181 containerd[1607]: time="2025-07-07T05:54:00.977170168Z" level=info msg="StartContainer for \"b593dfcaa1d4d94d1816808d3582f85b4f475c2b2fd758157104513a24434a2a\"" Jul 7 05:54:01.043906 containerd[1607]: time="2025-07-07T05:54:01.043752890Z" level=info msg="StartContainer 
for \"033ce9db59e1adc8560beeee083eb12492028ca7e90d66157a4d8686af505e04\" returns successfully" Jul 7 05:54:01.096108 containerd[1607]: time="2025-07-07T05:54:01.093909585Z" level=info msg="StartContainer for \"b593dfcaa1d4d94d1816808d3582f85b4f475c2b2fd758157104513a24434a2a\" returns successfully" Jul 7 05:54:01.106685 containerd[1607]: time="2025-07-07T05:54:01.106118148Z" level=info msg="StartContainer for \"c5bdc75753922f22c8b44ae09326552d2f183581458ff6a1917a58f97d8e46f3\" returns successfully" Jul 7 05:54:01.106811 kubelet[2418]: E0707 05:54:01.106592 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.113.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-cfa01fffc0?timeout=10s\": dial tcp 159.69.113.68:6443: connect: connection refused" interval="1.6s" Jul 7 05:54:01.267749 kubelet[2418]: I0707 05:54:01.267637 2418 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:03.080715 kubelet[2418]: E0707 05:54:03.080662 2418 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-0-cfa01fffc0\" not found" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:03.192234 kubelet[2418]: E0707 05:54:03.191970 2418 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-4-0-cfa01fffc0.184fe25624cb5293 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-0-cfa01fffc0,UID:ci-4081-3-4-0-cfa01fffc0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-0-cfa01fffc0,},FirstTimestamp:2025-07-07 05:53:59.682781843 +0000 UTC m=+1.298499843,LastTimestamp:2025-07-07 05:53:59.682781843 +0000 UTC m=+1.298499843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-0-cfa01fffc0,}" Jul 7 05:54:03.284097 kubelet[2418]: I0707 05:54:03.282696 2418 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:03.284097 kubelet[2418]: E0707 05:54:03.282734 2418 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-0-cfa01fffc0\": node \"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:54:03.322123 kubelet[2418]: E0707 05:54:03.321043 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:54:03.422856 kubelet[2418]: E0707 05:54:03.422800 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:54:03.523784 kubelet[2418]: E0707 05:54:03.523698 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:54:03.624674 kubelet[2418]: E0707 05:54:03.624588 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:54:03.724964 kubelet[2418]: E0707 05:54:03.724815 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:54:03.825355 kubelet[2418]: E0707 05:54:03.825296 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4081-3-4-0-cfa01fffc0\" not found" Jul 7 05:54:04.683533 kubelet[2418]: I0707 05:54:04.683011 2418 apiserver.go:52] "Watching apiserver" Jul 7 05:54:04.701607 kubelet[2418]: I0707 05:54:04.701541 2418 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:54:05.331957 systemd[1]: Reloading requested from client PID 2695 ('systemctl') (unit session-7.scope)... Jul 7 05:54:05.331985 systemd[1]: Reloading... Jul 7 05:54:05.432113 zram_generator::config[2738]: No configuration found. Jul 7 05:54:05.553647 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:54:05.635961 systemd[1]: Reloading finished in 303 ms. Jul 7 05:54:05.670021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:54:05.670551 kubelet[2418]: I0707 05:54:05.670281 2418 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:54:05.683614 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 05:54:05.684360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:54:05.693398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:54:05.829274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:54:05.834411 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:54:05.896538 kubelet[2790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:54:05.896538 kubelet[2790]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 05:54:05.896538 kubelet[2790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:54:05.896538 kubelet[2790]: I0707 05:54:05.896287 2790 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:54:05.910117 kubelet[2790]: I0707 05:54:05.909316 2790 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:54:05.910117 kubelet[2790]: I0707 05:54:05.909353 2790 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:54:05.910117 kubelet[2790]: I0707 05:54:05.909630 2790 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:54:05.912325 kubelet[2790]: I0707 05:54:05.912276 2790 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 7 05:54:05.921824 kubelet[2790]: I0707 05:54:05.921593 2790 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:54:05.930437 kubelet[2790]: E0707 05:54:05.930350 2790 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:54:05.930437 kubelet[2790]: I0707 05:54:05.930390 2790 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:54:05.934691 kubelet[2790]: I0707 05:54:05.934649 2790 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 05:54:05.935149 kubelet[2790]: I0707 05:54:05.935058 2790 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:54:05.935300 kubelet[2790]: I0707 05:54:05.935234 2790 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:54:05.935471 kubelet[2790]: I0707 05:54:05.935272 2790 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-0-cfa01fffc0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 05:54:05.935471 kubelet[2790]: I0707 05:54:05.935458 2790 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:54:05.935471 kubelet[2790]: I0707 05:54:05.935469 2790 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:54:05.935768 kubelet[2790]: I0707 05:54:05.935504 2790 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:54:05.935768 kubelet[2790]: I0707 05:54:05.935603 2790 kubelet.go:408] "Attempting to sync node with API server" Jul 7 05:54:05.935768 kubelet[2790]: I0707 05:54:05.935615 2790 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:54:05.935768 kubelet[2790]: I0707 05:54:05.935642 
2790 kubelet.go:314] "Adding apiserver pod source"
Jul 7 05:54:05.935768 kubelet[2790]: I0707 05:54:05.935663 2790 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 05:54:05.940091 kubelet[2790]: I0707 05:54:05.939109 2790 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 05:54:05.940091 kubelet[2790]: I0707 05:54:05.939564 2790 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 05:54:05.940091 kubelet[2790]: I0707 05:54:05.939961 2790 server.go:1274] "Started kubelet"
Jul 7 05:54:05.943681 kubelet[2790]: I0707 05:54:05.943658 2790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 05:54:05.946170 kubelet[2790]: I0707 05:54:05.946126 2790 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 05:54:05.949298 kubelet[2790]: I0707 05:54:05.949272 2790 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 05:54:05.949733 kubelet[2790]: I0707 05:54:05.949701 2790 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 05:54:05.950447 kubelet[2790]: I0707 05:54:05.950425 2790 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 05:54:05.950837 kubelet[2790]: E0707 05:54:05.950815 2790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-0-cfa01fffc0\" not found"
Jul 7 05:54:05.953215 kubelet[2790]: I0707 05:54:05.953196 2790 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 05:54:05.953409 kubelet[2790]: I0707 05:54:05.953354 2790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 05:54:05.953494 kubelet[2790]: I0707 05:54:05.953484 2790 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 05:54:05.953592 kubelet[2790]: I0707 05:54:05.953570 2790 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 05:54:05.966232 kubelet[2790]: I0707 05:54:05.966189 2790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 05:54:05.966984 kubelet[2790]: I0707 05:54:05.966858 2790 factory.go:221] Registration of the systemd container factory successfully
Jul 7 05:54:05.967331 kubelet[2790]: I0707 05:54:05.967310 2790 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 05:54:05.968844 kubelet[2790]: I0707 05:54:05.968787 2790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 05:54:05.968844 kubelet[2790]: I0707 05:54:05.968811 2790 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 7 05:54:05.968962 kubelet[2790]: I0707 05:54:05.968907 2790 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 7 05:54:05.968962 kubelet[2790]: E0707 05:54:05.968950 2790 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 05:54:05.970915 kubelet[2790]: I0707 05:54:05.970897 2790 factory.go:221] Registration of the containerd container factory successfully
Jul 7 05:54:06.006698 kubelet[2790]: E0707 05:54:06.006669 2790 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
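
The "Starting to serve the podresources API" line above is the kubelet exposing its local gRPC endpoint for node agents. As a minimal sketch of a client for that socket, assuming the upstream k8s.io/kubelet bindings (the socket path is taken from the log line; the file name and timeout are illustrative):

// podres_list.go - sketch: query the kubelet podresources socket logged above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Local unix socket; this endpoint is not TLS-protected.
	conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial podresources socket: %v", err)
	}
	defer conn.Close()

	client := podresourcesapi.NewPodResourcesListerClient(conn)
	// Server side is rate limited; the ratelimit.go line above shows qps=100, burst=10.
	resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		log.Fatalf("List: %v", err)
	}
	for _, p := range resp.GetPodResources() {
		fmt.Printf("%s/%s: %d containers\n", p.GetNamespace(), p.GetName(), len(p.GetContainers()))
	}
}
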
Jul 7 05:54:06.049055 kubelet[2790]: I0707 05:54:06.048565 2790 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 05:54:06.049055 kubelet[2790]: I0707 05:54:06.048596 2790 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 05:54:06.049055 kubelet[2790]: I0707 05:54:06.048629 2790 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 05:54:06.049055 kubelet[2790]: I0707 05:54:06.048849 2790 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 05:54:06.049055 kubelet[2790]: I0707 05:54:06.048865 2790 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 05:54:06.049055 kubelet[2790]: I0707 05:54:06.048917 2790 policy_none.go:49] "None policy: Start"
Jul 7 05:54:06.049979 kubelet[2790]: I0707 05:54:06.049754 2790 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 7 05:54:06.049979 kubelet[2790]: I0707 05:54:06.049777 2790 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 05:54:06.049979 kubelet[2790]: I0707 05:54:06.049954 2790 state_mem.go:75] "Updated machine memory state"
Jul 7 05:54:06.052250 kubelet[2790]: I0707 05:54:06.051207 2790 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 05:54:06.052250 kubelet[2790]: I0707 05:54:06.051369 2790 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 05:54:06.052250 kubelet[2790]: I0707 05:54:06.051380 2790 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 05:54:06.053125 kubelet[2790]: I0707 05:54:06.052871 2790 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 05:54:06.077783 kubelet[2790]: E0707 05:54:06.077407 2790 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-4-0-cfa01fffc0\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:06.155160 kubelet[2790]: I0707 05:54:06.154964 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:06.155160 kubelet[2790]: I0707 05:54:06.155037 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:06.155160 kubelet[2790]: I0707 05:54:06.155103 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:06.156416 kubelet[2790]: I0707 05:54:06.155440 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0e71dd0d7ca76e5bec94d83d2f69de6a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-0-cfa01fffc0\" (UID: \"0e71dd0d7ca76e5bec94d83d2f69de6a\") " pod="kube-system/kube-scheduler-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.156416 kubelet[2790]: I0707 05:54:06.155505 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06a85450d4ea2ba6abea410b93b77d99-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-0-cfa01fffc0\" (UID: \"06a85450d4ea2ba6abea410b93b77d99\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.156416 kubelet[2790]: I0707 05:54:06.155536 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06a85450d4ea2ba6abea410b93b77d99-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-0-cfa01fffc0\" (UID: \"06a85450d4ea2ba6abea410b93b77d99\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.156416 kubelet[2790]: I0707 05:54:06.155588 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06a85450d4ea2ba6abea410b93b77d99-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-0-cfa01fffc0\" (UID: \"06a85450d4ea2ba6abea410b93b77d99\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.156416 kubelet[2790]: I0707 05:54:06.155726 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.156696 kubelet[2790]: I0707 05:54:06.155849 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4468d39f30d56a90c6ec0af705fe358e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" (UID: \"4468d39f30d56a90c6ec0af705fe358e\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.159023 kubelet[2790]: I0707 05:54:06.158972 2790 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.170319 kubelet[2790]: I0707 05:54:06.169931 2790 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.170319 kubelet[2790]: I0707 05:54:06.170027 2790 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:06.938314 kubelet[2790]: I0707 05:54:06.938237 2790 apiserver.go:52] "Watching apiserver" Jul 7 05:54:06.954138 kubelet[2790]: I0707 05:54:06.954103 2790 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:54:07.036676 kubelet[2790]: E0707 05:54:07.036598 2790 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-4-0-cfa01fffc0\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:07.054406 kubelet[2790]: I0707 05:54:07.054186 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4081-3-4-0-cfa01fffc0" podStartSLOduration=1.054161089 podStartE2EDuration="1.054161089s" podCreationTimestamp="2025-07-07 05:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:07.054033116 +0000 UTC m=+1.216056143" watchObservedRunningTime="2025-07-07 05:54:07.054161089 +0000 UTC m=+1.216184157" Jul 7 05:54:07.085142 kubelet[2790]: I0707 05:54:07.085027 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-0-cfa01fffc0" podStartSLOduration=3.084963958 podStartE2EDuration="3.084963958s" podCreationTimestamp="2025-07-07 05:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:07.071650928 +0000 UTC m=+1.233673955" watchObservedRunningTime="2025-07-07 05:54:07.084963958 +0000 UTC m=+1.246987025" Jul 7 05:54:07.085313 kubelet[2790]: I0707 05:54:07.085267 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-0-cfa01fffc0" podStartSLOduration=1.085230626 podStartE2EDuration="1.085230626s" podCreationTimestamp="2025-07-07 05:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:07.084938635 +0000 UTC m=+1.246961662" watchObservedRunningTime="2025-07-07 05:54:07.085230626 +0000 UTC m=+1.247253693" Jul 7 05:54:11.418704 kubelet[2790]: I0707 05:54:11.418647 2790 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 05:54:11.419847 containerd[1607]: time="2025-07-07T05:54:11.419108938Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
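
The kuberuntime_manager line just above ("Updating runtime config through cri with podcidr") corresponds to a single CRI call, to which containerd replies with the "No cni config template is specified" message. A minimal sketch of that call, assuming the stock k8s.io/cri-api bindings (the socket path and CIDR come from this log; everything else is illustrative, not the kubelet's actual code):

// update_runtime_config.go - sketch of the CRI UpdateRuntimeConfig call made
// when the kubelet learns the node's pod CIDR.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial containerd: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Push the node's pod CIDR down to the runtime's network configuration.
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatalf("UpdateRuntimeConfig: %v", err)
	}
}
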
Jul 7 05:54:11.420728 kubelet[2790]: I0707 05:54:11.420062 2790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 05:54:12.293487 kubelet[2790]: I0707 05:54:12.293324 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e6a7838-3312-4722-84de-01716738f676-kube-proxy\") pod \"kube-proxy-w79s2\" (UID: \"9e6a7838-3312-4722-84de-01716738f676\") " pod="kube-system/kube-proxy-w79s2"
Jul 7 05:54:12.293487 kubelet[2790]: I0707 05:54:12.293374 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e6a7838-3312-4722-84de-01716738f676-lib-modules\") pod \"kube-proxy-w79s2\" (UID: \"9e6a7838-3312-4722-84de-01716738f676\") " pod="kube-system/kube-proxy-w79s2"
Jul 7 05:54:12.293487 kubelet[2790]: I0707 05:54:12.293396 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr5wp\" (UniqueName: \"kubernetes.io/projected/9e6a7838-3312-4722-84de-01716738f676-kube-api-access-mr5wp\") pod \"kube-proxy-w79s2\" (UID: \"9e6a7838-3312-4722-84de-01716738f676\") " pod="kube-system/kube-proxy-w79s2"
Jul 7 05:54:12.293487 kubelet[2790]: I0707 05:54:12.293419 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e6a7838-3312-4722-84de-01716738f676-xtables-lock\") pod \"kube-proxy-w79s2\" (UID: \"9e6a7838-3312-4722-84de-01716738f676\") " pod="kube-system/kube-proxy-w79s2"
Jul 7 05:54:12.478702 containerd[1607]: time="2025-07-07T05:54:12.478615039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w79s2,Uid:9e6a7838-3312-4722-84de-01716738f676,Namespace:kube-system,Attempt:0,}"
Jul 7 05:54:12.508584 containerd[1607]: time="2025-07-07T05:54:12.508211934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:54:12.508584 containerd[1607]: time="2025-07-07T05:54:12.508266819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:54:12.508584 containerd[1607]: time="2025-07-07T05:54:12.508282701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:12.508584 containerd[1607]: time="2025-07-07T05:54:12.508494362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
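
The RunPodSandbox request above and the CreateContainer/StartContainer lines that follow trace the standard CRI sequence for bringing up a pod. A compressed sketch of that sequence over the same cri-api bindings (configs are stripped to what compiles; the kubelet's real requests also carry image, command, mounts, and security settings):

// sandbox_lifecycle.go - sketch of RunPodSandbox -> CreateContainer ->
// StartContainer, as traced for kube-proxy-w79s2 in the surrounding lines.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Pod-level sandbox first; metadata mirrors the log's PodSandboxMetadata.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-w79s2", Namespace: "kube-system",
			Uid: "9e6a7838-3312-4722-84de-01716738f676", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Container is created inside the sandbox, then started.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s container %s started", sb.PodSandboxId, cc.ContainerId)
}
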
Jul 7 05:54:12.596620 kubelet[2790]: I0707 05:54:12.595872 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqktj\" (UniqueName: \"kubernetes.io/projected/8f473a42-d218-49b5-b872-7a612fffc4f2-kube-api-access-cqktj\") pod \"tigera-operator-5bf8dfcb4-twxbh\" (UID: \"8f473a42-d218-49b5-b872-7a612fffc4f2\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-twxbh"
Jul 7 05:54:12.596620 kubelet[2790]: I0707 05:54:12.595949 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f473a42-d218-49b5-b872-7a612fffc4f2-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-twxbh\" (UID: \"8f473a42-d218-49b5-b872-7a612fffc4f2\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-twxbh"
Jul 7 05:54:12.596983 containerd[1607]: time="2025-07-07T05:54:12.596086327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w79s2,Uid:9e6a7838-3312-4722-84de-01716738f676,Namespace:kube-system,Attempt:0,} returns sandbox id \"3df5c3a27877b6a6ce74f98171cbd4f2b4e2699d383e443f574ec44d764be84b\""
Jul 7 05:54:12.604560 containerd[1607]: time="2025-07-07T05:54:12.604399792Z" level=info msg="CreateContainer within sandbox \"3df5c3a27877b6a6ce74f98171cbd4f2b4e2699d383e443f574ec44d764be84b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 05:54:12.623810 containerd[1607]: time="2025-07-07T05:54:12.623747150Z" level=info msg="CreateContainer within sandbox \"3df5c3a27877b6a6ce74f98171cbd4f2b4e2699d383e443f574ec44d764be84b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b09bc6e83eb32c159119c92b70e019ea2719eb0034341b2bf7ac4543853c0c78\""
Jul 7 05:54:12.625784 containerd[1607]: time="2025-07-07T05:54:12.625735987Z" level=info msg="StartContainer for \"b09bc6e83eb32c159119c92b70e019ea2719eb0034341b2bf7ac4543853c0c78\""
Jul 7 05:54:12.686033 containerd[1607]: time="2025-07-07T05:54:12.685990522Z" level=info msg="StartContainer for \"b09bc6e83eb32c159119c92b70e019ea2719eb0034341b2bf7ac4543853c0c78\" returns successfully"
Jul 7 05:54:12.891961 containerd[1607]: time="2025-07-07T05:54:12.891827693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-twxbh,Uid:8f473a42-d218-49b5-b872-7a612fffc4f2,Namespace:tigera-operator,Attempt:0,}"
Jul 7 05:54:12.919942 containerd[1607]: time="2025-07-07T05:54:12.919772824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:54:12.919942 containerd[1607]: time="2025-07-07T05:54:12.919860313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:54:12.919942 containerd[1607]: time="2025-07-07T05:54:12.919896717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:12.920255 containerd[1607]: time="2025-07-07T05:54:12.920023769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:12.977958 containerd[1607]: time="2025-07-07T05:54:12.977401059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-twxbh,Uid:8f473a42-d218-49b5-b872-7a612fffc4f2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d94ec81ccd4434cf51526336b878cfbf02e510035e4a605dcca7287839ba1dc8\"" Jul 7 05:54:12.981655 containerd[1607]: time="2025-07-07T05:54:12.981599715Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 05:54:13.058213 kubelet[2790]: I0707 05:54:13.058131 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w79s2" podStartSLOduration=1.058107065 podStartE2EDuration="1.058107065s" podCreationTimestamp="2025-07-07 05:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:13.055995138 +0000 UTC m=+7.218018205" watchObservedRunningTime="2025-07-07 05:54:13.058107065 +0000 UTC m=+7.220130132" Jul 7 05:54:13.419613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711093684.mount: Deactivated successfully. Jul 7 05:54:14.588877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544112208.mount: Deactivated successfully. Jul 7 05:54:15.063265 containerd[1607]: time="2025-07-07T05:54:15.063125798Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:15.065145 containerd[1607]: time="2025-07-07T05:54:15.065088985Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 05:54:15.065773 containerd[1607]: time="2025-07-07T05:54:15.065655559Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:15.070024 containerd[1607]: time="2025-07-07T05:54:15.069009239Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:15.070508 containerd[1607]: time="2025-07-07T05:54:15.070451616Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.088759772s" Jul 7 05:54:15.070639 containerd[1607]: time="2025-07-07T05:54:15.070613272Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 05:54:15.074181 containerd[1607]: time="2025-07-07T05:54:15.073492426Z" level=info msg="CreateContainer within sandbox \"d94ec81ccd4434cf51526336b878cfbf02e510035e4a605dcca7287839ba1dc8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 05:54:15.090131 containerd[1607]: time="2025-07-07T05:54:15.090084888Z" level=info msg="CreateContainer within sandbox \"d94ec81ccd4434cf51526336b878cfbf02e510035e4a605dcca7287839ba1dc8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a8e38d24267ad32068e5c3185d7f27f0594388af59b234f3f944752710d16114\"" Jul 7 
05:54:15.090638 containerd[1607]: time="2025-07-07T05:54:15.090598057Z" level=info msg="StartContainer for \"a8e38d24267ad32068e5c3185d7f27f0594388af59b234f3f944752710d16114\"" Jul 7 05:54:15.160089 containerd[1607]: time="2025-07-07T05:54:15.157771822Z" level=info msg="StartContainer for \"a8e38d24267ad32068e5c3185d7f27f0594388af59b234f3f944752710d16114\" returns successfully" Jul 7 05:54:16.066322 kubelet[2790]: I0707 05:54:16.066034 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-twxbh" podStartSLOduration=1.97424293 podStartE2EDuration="4.066010076s" podCreationTimestamp="2025-07-07 05:54:12 +0000 UTC" firstStartedPulling="2025-07-07 05:54:12.979957312 +0000 UTC m=+7.141980379" lastFinishedPulling="2025-07-07 05:54:15.071724498 +0000 UTC m=+9.233747525" observedRunningTime="2025-07-07 05:54:16.065281207 +0000 UTC m=+10.227304234" watchObservedRunningTime="2025-07-07 05:54:16.066010076 +0000 UTC m=+10.228033103" Jul 7 05:54:21.403839 sudo[1911]: pam_unix(sudo:session): session closed for user root Jul 7 05:54:21.566316 sshd[1907]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:21.571174 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Jul 7 05:54:21.571741 systemd[1]: sshd@6-159.69.113.68:22-147.75.109.163:54438.service: Deactivated successfully. Jul 7 05:54:21.579860 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 05:54:21.583648 systemd-logind[1570]: Removed session 7. Jul 7 05:54:29.004742 kubelet[2790]: I0707 05:54:29.004687 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca91317d-0d14-4c91-b747-e47bc6a59fb1-tigera-ca-bundle\") pod \"calico-typha-df66bb8c4-w7vvv\" (UID: \"ca91317d-0d14-4c91-b747-e47bc6a59fb1\") " pod="calico-system/calico-typha-df66bb8c4-w7vvv" Jul 7 05:54:29.004742 kubelet[2790]: I0707 05:54:29.004735 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58qnn\" (UniqueName: \"kubernetes.io/projected/ca91317d-0d14-4c91-b747-e47bc6a59fb1-kube-api-access-58qnn\") pod \"calico-typha-df66bb8c4-w7vvv\" (UID: \"ca91317d-0d14-4c91-b747-e47bc6a59fb1\") " pod="calico-system/calico-typha-df66bb8c4-w7vvv" Jul 7 05:54:29.005291 kubelet[2790]: I0707 05:54:29.004756 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ca91317d-0d14-4c91-b747-e47bc6a59fb1-typha-certs\") pod \"calico-typha-df66bb8c4-w7vvv\" (UID: \"ca91317d-0d14-4c91-b747-e47bc6a59fb1\") " pod="calico-system/calico-typha-df66bb8c4-w7vvv" Jul 7 05:54:29.255715 containerd[1607]: time="2025-07-07T05:54:29.254842784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-df66bb8c4-w7vvv,Uid:ca91317d-0d14-4c91-b747-e47bc6a59fb1,Namespace:calico-system,Attempt:0,}" Jul 7 05:54:29.303708 containerd[1607]: time="2025-07-07T05:54:29.303140230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:29.303708 containerd[1607]: time="2025-07-07T05:54:29.303199425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:29.303708 containerd[1607]: time="2025-07-07T05:54:29.303210585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:29.303708 containerd[1607]: time="2025-07-07T05:54:29.303292538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:29.307472 kubelet[2790]: I0707 05:54:29.306963 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-cni-log-dir\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.307472 kubelet[2790]: I0707 05:54:29.307004 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-lib-modules\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.307472 kubelet[2790]: I0707 05:54:29.307021 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-var-run-calico\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.307472 kubelet[2790]: I0707 05:54:29.307039 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-xtables-lock\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.307472 kubelet[2790]: I0707 05:54:29.307056 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-var-lib-calico\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.308545 kubelet[2790]: I0707 05:54:29.307102 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-cni-bin-dir\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.308545 kubelet[2790]: I0707 05:54:29.307121 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-flexvol-driver-host\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.308545 kubelet[2790]: I0707 05:54:29.307142 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqz68\" (UniqueName: \"kubernetes.io/projected/45f6c895-9210-4c4d-ad12-a8de0d181ce1-kube-api-access-zqz68\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " 
pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.308545 kubelet[2790]: I0707 05:54:29.307168 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-cni-net-dir\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.308545 kubelet[2790]: I0707 05:54:29.307185 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/45f6c895-9210-4c4d-ad12-a8de0d181ce1-node-certs\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.309045 kubelet[2790]: I0707 05:54:29.307200 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/45f6c895-9210-4c4d-ad12-a8de0d181ce1-policysync\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.309045 kubelet[2790]: I0707 05:54:29.307217 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45f6c895-9210-4c4d-ad12-a8de0d181ce1-tigera-ca-bundle\") pod \"calico-node-f57hl\" (UID: \"45f6c895-9210-4c4d-ad12-a8de0d181ce1\") " pod="calico-system/calico-node-f57hl" Jul 7 05:54:29.396338 kubelet[2790]: E0707 05:54:29.395634 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6jw4" podUID="3608e3b2-d1d8-433a-a22d-3188e5368e8c" Jul 7 05:54:29.408348 kubelet[2790]: I0707 05:54:29.408303 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3608e3b2-d1d8-433a-a22d-3188e5368e8c-registration-dir\") pod \"csi-node-driver-s6jw4\" (UID: \"3608e3b2-d1d8-433a-a22d-3188e5368e8c\") " pod="calico-system/csi-node-driver-s6jw4" Jul 7 05:54:29.408348 kubelet[2790]: I0707 05:54:29.408345 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3608e3b2-d1d8-433a-a22d-3188e5368e8c-socket-dir\") pod \"csi-node-driver-s6jw4\" (UID: \"3608e3b2-d1d8-433a-a22d-3188e5368e8c\") " pod="calico-system/csi-node-driver-s6jw4" Jul 7 05:54:29.408498 kubelet[2790]: I0707 05:54:29.408415 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3608e3b2-d1d8-433a-a22d-3188e5368e8c-kubelet-dir\") pod \"csi-node-driver-s6jw4\" (UID: \"3608e3b2-d1d8-433a-a22d-3188e5368e8c\") " pod="calico-system/csi-node-driver-s6jw4" Jul 7 05:54:29.408498 kubelet[2790]: I0707 05:54:29.408432 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3608e3b2-d1d8-433a-a22d-3188e5368e8c-varrun\") pod \"csi-node-driver-s6jw4\" (UID: \"3608e3b2-d1d8-433a-a22d-3188e5368e8c\") " pod="calico-system/csi-node-driver-s6jw4" Jul 7 05:54:29.408498 kubelet[2790]: I0707 05:54:29.408449 
2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr8v9\" (UniqueName: \"kubernetes.io/projected/3608e3b2-d1d8-433a-a22d-3188e5368e8c-kube-api-access-wr8v9\") pod \"csi-node-driver-s6jw4\" (UID: \"3608e3b2-d1d8-433a-a22d-3188e5368e8c\") " pod="calico-system/csi-node-driver-s6jw4"
Jul 7 05:54:29.412321 containerd[1607]: time="2025-07-07T05:54:29.411945374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-df66bb8c4-w7vvv,Uid:ca91317d-0d14-4c91-b747-e47bc6a59fb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"895cd6e84a08262d3bf77abb4cc18eebcc3d084ac0a94c0a45671a89e988ec31\""
Jul 7 05:54:29.416256 containerd[1607]: time="2025-07-07T05:54:29.415313141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 05:54:29.416803 kubelet[2790]: E0707 05:54:29.416756 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.417320 kubelet[2790]: W0707 05:54:29.417291 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.417914 kubelet[2790]: E0707 05:54:29.417645 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:54:29.437295 kubelet[2790]: E0707 05:54:29.437196 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.437295 kubelet[2790]: W0707 05:54:29.437227 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.437295 kubelet[2790]: E0707 05:54:29.437248 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:54:29.510168 kubelet[2790]: E0707 05:54:29.509204 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.510168 kubelet[2790]: W0707 05:54:29.509228 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.510168 kubelet[2790]: E0707 05:54:29.509248 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:54:29.511681 kubelet[2790]: E0707 05:54:29.511444 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.511681 kubelet[2790]: W0707 05:54:29.511509 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.511681 kubelet[2790]: E0707 05:54:29.511561 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
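
The driver-call.go/plugins.go triplets that repeat from here on are the kubelet's plugin manager probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: it execs each driver binary with the argument init and parses stdout as a JSON status. Since nodeagent~uds/uds does not exist on this node, the call returns empty output, and unmarshalling "" fails with "unexpected end of JSON input". A minimal sketch of what a conforming driver would answer (the JSON shape follows the FlexVolume convention; this is an illustrative stub, not Calico's actual uds binary):

// flexvolume_init.go - sketch of the FlexVolume handshake the kubelet is
// attempting above: run <driver> init, read a JSON status from stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON the kubelet expects back from a driver call.
type DriverStatus struct {
	Status       string              `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string              `json:"message,omitempty"`
	Capabilities *DriverCapabilities `json:"capabilities,omitempty"`
}

type DriverCapabilities struct {
	Attach bool `json:"attach"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Answer the probe: driver is alive, no attach/detach support.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: &DriverCapabilities{Attach: false},
		})
		fmt.Println(string(out))
		return
	}
	// Any unimplemented call must still emit valid JSON, or the kubelet
	// logs exactly the unmarshal error seen in this journal.
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
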
Jul 7 05:54:29.513285 kubelet[2790]: E0707 05:54:29.512158 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.513285 kubelet[2790]: W0707 05:54:29.512174 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.513667 kubelet[2790]: E0707 05:54:29.513550 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:54:29.513667 kubelet[2790]: E0707 05:54:29.513640 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.513667 kubelet[2790]: W0707 05:54:29.513649 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.514096 kubelet[2790]: E0707 05:54:29.513970 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:54:29.514292 kubelet[2790]: E0707 05:54:29.514202 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.514292 kubelet[2790]: W0707 05:54:29.514213 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.514292 kubelet[2790]: E0707 05:54:29.514229 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:54:29.515166 kubelet[2790]: E0707 05:54:29.514583 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.515166 kubelet[2790]: W0707 05:54:29.514596 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.515166 kubelet[2790]: E0707 05:54:29.515132 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:54:29.515639 kubelet[2790]: E0707 05:54:29.515509 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:54:29.515639 kubelet[2790]: W0707 05:54:29.515521 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:54:29.515639 kubelet[2790]: E0707 05:54:29.515611 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:29.516217 kubelet[2790]: E0707 05:54:29.516020 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.516217 kubelet[2790]: W0707 05:54:29.516035 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.516217 kubelet[2790]: E0707 05:54:29.516146 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.517415 kubelet[2790]: E0707 05:54:29.517281 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.517415 kubelet[2790]: W0707 05:54:29.517293 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.517415 kubelet[2790]: E0707 05:54:29.517345 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.517857 kubelet[2790]: E0707 05:54:29.517710 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.517857 kubelet[2790]: W0707 05:54:29.517722 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.518712 kubelet[2790]: E0707 05:54:29.518333 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.518784 containerd[1607]: time="2025-07-07T05:54:29.518571295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f57hl,Uid:45f6c895-9210-4c4d-ad12-a8de0d181ce1,Namespace:calico-system,Attempt:0,}" Jul 7 05:54:29.519225 kubelet[2790]: E0707 05:54:29.519005 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.519225 kubelet[2790]: W0707 05:54:29.519015 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.519225 kubelet[2790]: E0707 05:54:29.519092 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:29.519932 kubelet[2790]: E0707 05:54:29.519638 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.519932 kubelet[2790]: W0707 05:54:29.519651 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.519932 kubelet[2790]: E0707 05:54:29.519697 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.521044 kubelet[2790]: E0707 05:54:29.520864 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.521044 kubelet[2790]: W0707 05:54:29.520878 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.521354 kubelet[2790]: E0707 05:54:29.521209 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.521548 kubelet[2790]: E0707 05:54:29.521472 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.521548 kubelet[2790]: W0707 05:54:29.521483 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.522007 kubelet[2790]: E0707 05:54:29.521919 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.522007 kubelet[2790]: E0707 05:54:29.521982 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.522007 kubelet[2790]: W0707 05:54:29.521989 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.522881 kubelet[2790]: E0707 05:54:29.522726 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.523002 kubelet[2790]: E0707 05:54:29.522991 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.523561 kubelet[2790]: W0707 05:54:29.523049 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.523752 kubelet[2790]: E0707 05:54:29.523651 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:29.523952 kubelet[2790]: E0707 05:54:29.523864 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.523952 kubelet[2790]: W0707 05:54:29.523874 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.524107 kubelet[2790]: E0707 05:54:29.524030 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.524574 kubelet[2790]: E0707 05:54:29.524439 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.524574 kubelet[2790]: W0707 05:54:29.524452 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.525186 kubelet[2790]: E0707 05:54:29.524717 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.525505 kubelet[2790]: E0707 05:54:29.525290 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.525505 kubelet[2790]: W0707 05:54:29.525300 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.525505 kubelet[2790]: E0707 05:54:29.525377 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.526315 kubelet[2790]: E0707 05:54:29.526160 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.526315 kubelet[2790]: W0707 05:54:29.526173 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.526315 kubelet[2790]: E0707 05:54:29.526243 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.526609 kubelet[2790]: E0707 05:54:29.526479 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.526609 kubelet[2790]: W0707 05:54:29.526489 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.526609 kubelet[2790]: E0707 05:54:29.526571 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:29.527353 kubelet[2790]: E0707 05:54:29.527271 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.527353 kubelet[2790]: W0707 05:54:29.527285 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.527655 kubelet[2790]: E0707 05:54:29.527453 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.529162 kubelet[2790]: E0707 05:54:29.529146 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.529847 kubelet[2790]: W0707 05:54:29.529240 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.532922 kubelet[2790]: E0707 05:54:29.532897 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.533370 kubelet[2790]: E0707 05:54:29.533357 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.533464 kubelet[2790]: W0707 05:54:29.533453 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.533548 kubelet[2790]: E0707 05:54:29.533537 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.533869 kubelet[2790]: E0707 05:54:29.533856 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.533955 kubelet[2790]: W0707 05:54:29.533943 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.534007 kubelet[2790]: E0707 05:54:29.533998 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:29.566931 kubelet[2790]: E0707 05:54:29.566900 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:29.567435 kubelet[2790]: W0707 05:54:29.567123 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:29.567435 kubelet[2790]: E0707 05:54:29.567152 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:29.580652 containerd[1607]: time="2025-07-07T05:54:29.576623711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:29.581628 containerd[1607]: time="2025-07-07T05:54:29.580606829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:29.581628 containerd[1607]: time="2025-07-07T05:54:29.581400164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:29.581922 containerd[1607]: time="2025-07-07T05:54:29.581765575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:29.658051 containerd[1607]: time="2025-07-07T05:54:29.657880727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f57hl,Uid:45f6c895-9210-4c4d-ad12-a8de0d181ce1,Namespace:calico-system,Attempt:0,} returns sandbox id \"5555711cb5f7df81e9ccbde94691e3f445d8744dd36517b9d3dea7d0fbefa51f\"" Jul 7 05:54:30.950249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017729232.mount: Deactivated successfully. Jul 7 05:54:30.969527 kubelet[2790]: E0707 05:54:30.969434 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6jw4" podUID="3608e3b2-d1d8-433a-a22d-3188e5368e8c" Jul 7 05:54:31.927625 containerd[1607]: time="2025-07-07T05:54:31.927523881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:31.928998 containerd[1607]: time="2025-07-07T05:54:31.928921139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 05:54:31.930318 containerd[1607]: time="2025-07-07T05:54:31.929865791Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:31.933604 containerd[1607]: time="2025-07-07T05:54:31.933546844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:31.936105 containerd[1607]: time="2025-07-07T05:54:31.934942863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.519595444s" Jul 7 05:54:31.936343 containerd[1607]: time="2025-07-07T05:54:31.936310004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 05:54:31.937653 containerd[1607]: time="2025-07-07T05:54:31.937555233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 05:54:31.953213 containerd[1607]: 
time="2025-07-07T05:54:31.952881722Z" level=info msg="CreateContainer within sandbox \"895cd6e84a08262d3bf77abb4cc18eebcc3d084ac0a94c0a45671a89e988ec31\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 05:54:31.971208 containerd[1607]: time="2025-07-07T05:54:31.971165596Z" level=info msg="CreateContainer within sandbox \"895cd6e84a08262d3bf77abb4cc18eebcc3d084ac0a94c0a45671a89e988ec31\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"92f1d91de115edc223a2943a1519adcc9828f515d8b63107205621983d7cbb4c\"" Jul 7 05:54:31.972173 containerd[1607]: time="2025-07-07T05:54:31.971948739Z" level=info msg="StartContainer for \"92f1d91de115edc223a2943a1519adcc9828f515d8b63107205621983d7cbb4c\"" Jul 7 05:54:32.047554 containerd[1607]: time="2025-07-07T05:54:32.047428374Z" level=info msg="StartContainer for \"92f1d91de115edc223a2943a1519adcc9828f515d8b63107205621983d7cbb4c\" returns successfully" Jul 7 05:54:32.127948 kubelet[2790]: E0707 05:54:32.127898 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.127948 kubelet[2790]: W0707 05:54:32.127932 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.127948 kubelet[2790]: E0707 05:54:32.127950 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.128801 kubelet[2790]: E0707 05:54:32.128218 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.128801 kubelet[2790]: W0707 05:54:32.128228 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.128801 kubelet[2790]: E0707 05:54:32.128239 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.129089 kubelet[2790]: E0707 05:54:32.129032 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.129089 kubelet[2790]: W0707 05:54:32.129057 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.129089 kubelet[2790]: E0707 05:54:32.129089 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:32.130323 kubelet[2790]: E0707 05:54:32.130284 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.130323 kubelet[2790]: W0707 05:54:32.130310 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.130323 kubelet[2790]: E0707 05:54:32.130324 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.132312 kubelet[2790]: E0707 05:54:32.130542 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.132312 kubelet[2790]: W0707 05:54:32.130551 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.133183 kubelet[2790]: E0707 05:54:32.133144 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.134345 kubelet[2790]: E0707 05:54:32.133555 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.134345 kubelet[2790]: W0707 05:54:32.133578 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.134345 kubelet[2790]: E0707 05:54:32.133593 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.134997 kubelet[2790]: E0707 05:54:32.134947 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.134997 kubelet[2790]: W0707 05:54:32.134977 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.134997 kubelet[2790]: E0707 05:54:32.134991 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.136305 kubelet[2790]: E0707 05:54:32.135493 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.136305 kubelet[2790]: W0707 05:54:32.135504 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.136305 kubelet[2790]: E0707 05:54:32.135515 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:32.136305 kubelet[2790]: E0707 05:54:32.136280 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.136305 kubelet[2790]: W0707 05:54:32.136291 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.136305 kubelet[2790]: E0707 05:54:32.136306 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.137182 kubelet[2790]: E0707 05:54:32.136824 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.137182 kubelet[2790]: W0707 05:54:32.136834 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.137182 kubelet[2790]: E0707 05:54:32.136853 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.140102 kubelet[2790]: E0707 05:54:32.140051 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.140102 kubelet[2790]: W0707 05:54:32.140088 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.140102 kubelet[2790]: E0707 05:54:32.140106 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.143365 kubelet[2790]: E0707 05:54:32.143171 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.143365 kubelet[2790]: W0707 05:54:32.143189 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.143365 kubelet[2790]: E0707 05:54:32.143203 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.144455 kubelet[2790]: E0707 05:54:32.144387 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.144455 kubelet[2790]: W0707 05:54:32.144409 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.144455 kubelet[2790]: E0707 05:54:32.144425 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:32.146850 kubelet[2790]: E0707 05:54:32.146823 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.146850 kubelet[2790]: W0707 05:54:32.146842 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.146996 kubelet[2790]: E0707 05:54:32.146857 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.147110 kubelet[2790]: E0707 05:54:32.147088 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.147110 kubelet[2790]: W0707 05:54:32.147103 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.147110 kubelet[2790]: E0707 05:54:32.147113 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.147369 kubelet[2790]: E0707 05:54:32.147353 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.147369 kubelet[2790]: W0707 05:54:32.147365 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.147461 kubelet[2790]: E0707 05:54:32.147374 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.147911 kubelet[2790]: E0707 05:54:32.147611 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.147911 kubelet[2790]: W0707 05:54:32.147620 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.147911 kubelet[2790]: E0707 05:54:32.147639 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.147911 kubelet[2790]: E0707 05:54:32.147898 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.148988 kubelet[2790]: W0707 05:54:32.147917 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.148988 kubelet[2790]: E0707 05:54:32.147936 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:32.148988 kubelet[2790]: E0707 05:54:32.148287 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.148988 kubelet[2790]: W0707 05:54:32.148302 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.148988 kubelet[2790]: E0707 05:54:32.148327 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.148988 kubelet[2790]: E0707 05:54:32.148519 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.148988 kubelet[2790]: W0707 05:54:32.148528 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.148988 kubelet[2790]: E0707 05:54:32.148547 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.148988 kubelet[2790]: E0707 05:54:32.148755 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.148988 kubelet[2790]: W0707 05:54:32.148765 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.151222 kubelet[2790]: E0707 05:54:32.148851 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.151222 kubelet[2790]: E0707 05:54:32.149084 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.151222 kubelet[2790]: W0707 05:54:32.149094 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.151222 kubelet[2790]: E0707 05:54:32.149192 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.151222 kubelet[2790]: E0707 05:54:32.149341 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.151222 kubelet[2790]: W0707 05:54:32.149351 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.151222 kubelet[2790]: E0707 05:54:32.149485 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:32.151222 kubelet[2790]: E0707 05:54:32.149682 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.151222 kubelet[2790]: W0707 05:54:32.149694 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.151222 kubelet[2790]: E0707 05:54:32.149719 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.153594 kubelet[2790]: E0707 05:54:32.150162 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.153594 kubelet[2790]: W0707 05:54:32.150176 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.153594 kubelet[2790]: E0707 05:54:32.150197 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.153594 kubelet[2790]: E0707 05:54:32.150373 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.153594 kubelet[2790]: W0707 05:54:32.150391 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.153594 kubelet[2790]: E0707 05:54:32.150402 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.153594 kubelet[2790]: E0707 05:54:32.150652 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.153594 kubelet[2790]: W0707 05:54:32.150671 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.153594 kubelet[2790]: E0707 05:54:32.150764 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:32.153594 kubelet[2790]: E0707 05:54:32.151435 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.155649 kubelet[2790]: W0707 05:54:32.151458 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.155649 kubelet[2790]: E0707 05:54:32.151991 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.155649 kubelet[2790]: W0707 05:54:32.152016 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.155649 kubelet[2790]: E0707 05:54:32.152116 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.155649 kubelet[2790]: E0707 05:54:32.152151 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.155649 kubelet[2790]: E0707 05:54:32.154464 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.155649 kubelet[2790]: W0707 05:54:32.154475 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.155649 kubelet[2790]: E0707 05:54:32.154492 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.155649 kubelet[2790]: E0707 05:54:32.154699 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.155649 kubelet[2790]: W0707 05:54:32.154708 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.155868 kubelet[2790]: E0707 05:54:32.154725 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.155868 kubelet[2790]: E0707 05:54:32.155084 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.155868 kubelet[2790]: W0707 05:54:32.155094 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.155868 kubelet[2790]: E0707 05:54:32.155104 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:32.155868 kubelet[2790]: E0707 05:54:32.155258 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:32.155868 kubelet[2790]: W0707 05:54:32.155266 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:32.155868 kubelet[2790]: E0707 05:54:32.155274 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:32.970242 kubelet[2790]: E0707 05:54:32.970146 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6jw4" podUID="3608e3b2-d1d8-433a-a22d-3188e5368e8c" Jul 7 05:54:33.119431 kubelet[2790]: I0707 05:54:33.119377 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:54:33.155467 kubelet[2790]: E0707 05:54:33.155295 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.155467 kubelet[2790]: W0707 05:54:33.155334 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.155467 kubelet[2790]: E0707 05:54:33.155357 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.156593 kubelet[2790]: E0707 05:54:33.155859 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.156593 kubelet[2790]: W0707 05:54:33.155873 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.156593 kubelet[2790]: E0707 05:54:33.155889 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.157167 kubelet[2790]: E0707 05:54:33.156779 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.157167 kubelet[2790]: W0707 05:54:33.156797 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.157167 kubelet[2790]: E0707 05:54:33.156814 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:33.157167 kubelet[2790]: E0707 05:54:33.157056 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.157167 kubelet[2790]: W0707 05:54:33.157066 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.157167 kubelet[2790]: E0707 05:54:33.157098 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.157917 kubelet[2790]: E0707 05:54:33.157730 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.157917 kubelet[2790]: W0707 05:54:33.157744 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.157917 kubelet[2790]: E0707 05:54:33.157766 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.158242 kubelet[2790]: E0707 05:54:33.158134 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.158242 kubelet[2790]: W0707 05:54:33.158147 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.158242 kubelet[2790]: E0707 05:54:33.158172 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.158691 kubelet[2790]: E0707 05:54:33.158582 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.158691 kubelet[2790]: W0707 05:54:33.158594 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.158691 kubelet[2790]: E0707 05:54:33.158605 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.158849 kubelet[2790]: E0707 05:54:33.158840 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.158917 kubelet[2790]: W0707 05:54:33.158889 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.158917 kubelet[2790]: E0707 05:54:33.158903 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:33.160100 kubelet[2790]: E0707 05:54:33.160031 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.160100 kubelet[2790]: W0707 05:54:33.160045 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.160100 kubelet[2790]: E0707 05:54:33.160057 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.160652 kubelet[2790]: E0707 05:54:33.160586 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.160652 kubelet[2790]: W0707 05:54:33.160599 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.160652 kubelet[2790]: E0707 05:54:33.160610 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.161004 kubelet[2790]: E0707 05:54:33.160914 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.161004 kubelet[2790]: W0707 05:54:33.160927 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.161004 kubelet[2790]: E0707 05:54:33.160938 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.161433 kubelet[2790]: E0707 05:54:33.161350 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.161433 kubelet[2790]: W0707 05:54:33.161363 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.161433 kubelet[2790]: E0707 05:54:33.161374 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.161856 kubelet[2790]: E0707 05:54:33.161776 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.161856 kubelet[2790]: W0707 05:54:33.161788 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.161856 kubelet[2790]: E0707 05:54:33.161799 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:33.162246 kubelet[2790]: E0707 05:54:33.162172 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.162246 kubelet[2790]: W0707 05:54:33.162185 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.162246 kubelet[2790]: E0707 05:54:33.162196 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.163467 kubelet[2790]: E0707 05:54:33.162548 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.163467 kubelet[2790]: W0707 05:54:33.162560 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.163467 kubelet[2790]: E0707 05:54:33.163035 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.163467 kubelet[2790]: E0707 05:54:33.163331 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.163467 kubelet[2790]: W0707 05:54:33.163341 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.163467 kubelet[2790]: E0707 05:54:33.163352 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.164267 kubelet[2790]: E0707 05:54:33.164219 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.164267 kubelet[2790]: W0707 05:54:33.164255 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.164360 kubelet[2790]: E0707 05:54:33.164277 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.165099 kubelet[2790]: E0707 05:54:33.164557 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.165099 kubelet[2790]: W0707 05:54:33.164573 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.165099 kubelet[2790]: E0707 05:54:33.164593 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:33.165099 kubelet[2790]: E0707 05:54:33.164766 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.165099 kubelet[2790]: W0707 05:54:33.164773 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.165099 kubelet[2790]: E0707 05:54:33.164787 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.165099 kubelet[2790]: E0707 05:54:33.164933 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.165099 kubelet[2790]: W0707 05:54:33.164941 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.165099 kubelet[2790]: E0707 05:54:33.164953 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.165413 kubelet[2790]: E0707 05:54:33.165176 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.165413 kubelet[2790]: W0707 05:54:33.165184 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.165413 kubelet[2790]: E0707 05:54:33.165199 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.165776 kubelet[2790]: E0707 05:54:33.165708 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.165776 kubelet[2790]: W0707 05:54:33.165728 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.165776 kubelet[2790]: E0707 05:54:33.165741 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.166092 kubelet[2790]: E0707 05:54:33.165949 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.166092 kubelet[2790]: W0707 05:54:33.165967 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.166092 kubelet[2790]: E0707 05:54:33.165977 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:33.166449 kubelet[2790]: E0707 05:54:33.166362 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.166449 kubelet[2790]: W0707 05:54:33.166378 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.167060 kubelet[2790]: E0707 05:54:33.167031 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.167704 kubelet[2790]: E0707 05:54:33.167371 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.167704 kubelet[2790]: W0707 05:54:33.167401 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.167704 kubelet[2790]: E0707 05:54:33.167494 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.167704 kubelet[2790]: E0707 05:54:33.167652 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.167704 kubelet[2790]: W0707 05:54:33.167661 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.168997 kubelet[2790]: E0707 05:54:33.168908 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.169211 kubelet[2790]: E0707 05:54:33.169160 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.169211 kubelet[2790]: W0707 05:54:33.169182 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.170206 kubelet[2790]: E0707 05:54:33.169319 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.170206 kubelet[2790]: E0707 05:54:33.169351 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.170206 kubelet[2790]: W0707 05:54:33.169505 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.170206 kubelet[2790]: E0707 05:54:33.169522 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:33.170736 kubelet[2790]: E0707 05:54:33.170717 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.170984 kubelet[2790]: W0707 05:54:33.170819 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.170984 kubelet[2790]: E0707 05:54:33.170851 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.171221 kubelet[2790]: E0707 05:54:33.171206 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.171295 kubelet[2790]: W0707 05:54:33.171281 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.171406 kubelet[2790]: E0707 05:54:33.171380 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.171684 kubelet[2790]: E0707 05:54:33.171668 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.171968 kubelet[2790]: W0707 05:54:33.171759 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.171968 kubelet[2790]: E0707 05:54:33.171791 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.172096 kubelet[2790]: E0707 05:54:33.172020 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.172096 kubelet[2790]: W0707 05:54:33.172030 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.172096 kubelet[2790]: E0707 05:54:33.172049 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:33.172297 kubelet[2790]: E0707 05:54:33.172243 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:33.172297 kubelet[2790]: W0707 05:54:33.172253 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:33.172297 kubelet[2790]: E0707 05:54:33.172262 2790 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:33.527701 containerd[1607]: time="2025-07-07T05:54:33.527595906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:33.529216 containerd[1607]: time="2025-07-07T05:54:33.529142686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 05:54:33.531840 containerd[1607]: time="2025-07-07T05:54:33.530165900Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:33.533283 containerd[1607]: time="2025-07-07T05:54:33.533034955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:33.533968 containerd[1607]: time="2025-07-07T05:54:33.533931337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.595919817s" Jul 7 05:54:33.534110 containerd[1607]: time="2025-07-07T05:54:33.534088247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 05:54:33.539823 containerd[1607]: time="2025-07-07T05:54:33.539767281Z" level=info msg="CreateContainer within sandbox \"5555711cb5f7df81e9ccbde94691e3f445d8744dd36517b9d3dea7d0fbefa51f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 05:54:33.556840 containerd[1607]: time="2025-07-07T05:54:33.556704188Z" level=info msg="CreateContainer within sandbox \"5555711cb5f7df81e9ccbde94691e3f445d8744dd36517b9d3dea7d0fbefa51f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fe27ba9d82aafd687dafaf8badc1da9b2270706a4dea8f6fa910eb13b04474c1\"" Jul 7 05:54:33.558939 containerd[1607]: time="2025-07-07T05:54:33.557799957Z" level=info msg="StartContainer for \"fe27ba9d82aafd687dafaf8badc1da9b2270706a4dea8f6fa910eb13b04474c1\"" Jul 7 05:54:33.634600 containerd[1607]: time="2025-07-07T05:54:33.634551084Z" level=info msg="StartContainer for \"fe27ba9d82aafd687dafaf8badc1da9b2270706a4dea8f6fa910eb13b04474c1\" returns successfully" Jul 7 05:54:33.788525 containerd[1607]: time="2025-07-07T05:54:33.787927067Z" level=info msg="shim disconnected" id=fe27ba9d82aafd687dafaf8badc1da9b2270706a4dea8f6fa910eb13b04474c1 namespace=k8s.io Jul 7 05:54:33.788525 containerd[1607]: time="2025-07-07T05:54:33.787990903Z" level=warning msg="cleaning up after shim disconnected" id=fe27ba9d82aafd687dafaf8badc1da9b2270706a4dea8f6fa910eb13b04474c1 namespace=k8s.io Jul 7 05:54:33.788525 containerd[1607]: time="2025-07-07T05:54:33.788001142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:54:33.948646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe27ba9d82aafd687dafaf8badc1da9b2270706a4dea8f6fa910eb13b04474c1-rootfs.mount: Deactivated successfully. 
Jul 7 05:54:34.127692 containerd[1607]: time="2025-07-07T05:54:34.127307888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 05:54:34.162977 kubelet[2790]: I0707 05:54:34.162911 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-df66bb8c4-w7vvv" podStartSLOduration=3.640050285 podStartE2EDuration="6.162892887s" podCreationTimestamp="2025-07-07 05:54:28 +0000 UTC" firstStartedPulling="2025-07-07 05:54:29.414555563 +0000 UTC m=+23.576578550" lastFinishedPulling="2025-07-07 05:54:31.937398125 +0000 UTC m=+26.099421152" observedRunningTime="2025-07-07 05:54:32.141946463 +0000 UTC m=+26.303969490" watchObservedRunningTime="2025-07-07 05:54:34.162892887 +0000 UTC m=+28.324915914" Jul 7 05:54:34.969485 kubelet[2790]: E0707 05:54:34.969420 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6jw4" podUID="3608e3b2-d1d8-433a-a22d-3188e5368e8c" Jul 7 05:54:36.971802 kubelet[2790]: E0707 05:54:36.969817 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6jw4" podUID="3608e3b2-d1d8-433a-a22d-3188e5368e8c" Jul 7 05:54:37.704789 containerd[1607]: time="2025-07-07T05:54:37.704707991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:37.706240 containerd[1607]: time="2025-07-07T05:54:37.706117361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 05:54:37.707200 containerd[1607]: time="2025-07-07T05:54:37.707142629Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:37.711397 containerd[1607]: time="2025-07-07T05:54:37.711339300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:37.712456 containerd[1607]: time="2025-07-07T05:54:37.712338610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.584378641s" Jul 7 05:54:37.712456 containerd[1607]: time="2025-07-07T05:54:37.712375448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 05:54:37.718830 containerd[1607]: time="2025-07-07T05:54:37.717045174Z" level=info msg="CreateContainer within sandbox \"5555711cb5f7df81e9ccbde94691e3f445d8744dd36517b9d3dea7d0fbefa51f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 05:54:37.737975 containerd[1607]: time="2025-07-07T05:54:37.737901012Z" level=info msg="CreateContainer within sandbox 
\"5555711cb5f7df81e9ccbde94691e3f445d8744dd36517b9d3dea7d0fbefa51f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f97315bcc733ace8e432cba89ef3ab61a94746ce729dd9076df42fa7c98db824\"" Jul 7 05:54:37.739431 containerd[1607]: time="2025-07-07T05:54:37.739397577Z" level=info msg="StartContainer for \"f97315bcc733ace8e432cba89ef3ab61a94746ce729dd9076df42fa7c98db824\"" Jul 7 05:54:37.811116 containerd[1607]: time="2025-07-07T05:54:37.810399987Z" level=info msg="StartContainer for \"f97315bcc733ace8e432cba89ef3ab61a94746ce729dd9076df42fa7c98db824\" returns successfully" Jul 7 05:54:38.327458 containerd[1607]: time="2025-07-07T05:54:38.327395790Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:54:38.349997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f97315bcc733ace8e432cba89ef3ab61a94746ce729dd9076df42fa7c98db824-rootfs.mount: Deactivated successfully. Jul 7 05:54:38.410839 kubelet[2790]: I0707 05:54:38.410645 2790 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 05:54:38.431152 containerd[1607]: time="2025-07-07T05:54:38.431046995Z" level=info msg="shim disconnected" id=f97315bcc733ace8e432cba89ef3ab61a94746ce729dd9076df42fa7c98db824 namespace=k8s.io Jul 7 05:54:38.431152 containerd[1607]: time="2025-07-07T05:54:38.431134191Z" level=warning msg="cleaning up after shim disconnected" id=f97315bcc733ace8e432cba89ef3ab61a94746ce729dd9076df42fa7c98db824 namespace=k8s.io Jul 7 05:54:38.431152 containerd[1607]: time="2025-07-07T05:54:38.431144071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:54:38.464289 containerd[1607]: time="2025-07-07T05:54:38.464143211Z" level=warning msg="cleanup warnings time=\"2025-07-07T05:54:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 05:54:38.608427 kubelet[2790]: I0707 05:54:38.608356 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd6c95c2-86b8-43b3-90f9-cea8639bb989-calico-apiserver-certs\") pod \"calico-apiserver-5b7f8cc9dc-mfs7f\" (UID: \"cd6c95c2-86b8-43b3-90f9-cea8639bb989\") " pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-mfs7f" Jul 7 05:54:38.608427 kubelet[2790]: I0707 05:54:38.608423 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdlh\" (UniqueName: \"kubernetes.io/projected/a19b0658-01a5-4cf7-a7b1-1c9d8255c365-kube-api-access-tvdlh\") pod \"coredns-7c65d6cfc9-6l9kt\" (UID: \"a19b0658-01a5-4cf7-a7b1-1c9d8255c365\") " pod="kube-system/coredns-7c65d6cfc9-6l9kt" Jul 7 05:54:38.608649 kubelet[2790]: I0707 05:54:38.608457 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a362285d-bb22-4ce9-b882-0bdc9d32c0cc-calico-apiserver-certs\") pod \"calico-apiserver-5b7f8cc9dc-drdvq\" (UID: \"a362285d-bb22-4ce9-b882-0bdc9d32c0cc\") " pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-drdvq" Jul 7 05:54:38.608649 kubelet[2790]: I0707 05:54:38.608496 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-whisker-backend-key-pair\") pod \"whisker-765dfbf5cc-6g5pp\" (UID: \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\") " pod="calico-system/whisker-765dfbf5cc-6g5pp" Jul 7 05:54:38.608649 kubelet[2790]: I0707 05:54:38.608519 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqdp\" (UniqueName: \"kubernetes.io/projected/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-kube-api-access-ksqdp\") pod \"whisker-765dfbf5cc-6g5pp\" (UID: \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\") " pod="calico-system/whisker-765dfbf5cc-6g5pp" Jul 7 05:54:38.608649 kubelet[2790]: I0707 05:54:38.608547 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-whisker-ca-bundle\") pod \"whisker-765dfbf5cc-6g5pp\" (UID: \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\") " pod="calico-system/whisker-765dfbf5cc-6g5pp" Jul 7 05:54:38.608649 kubelet[2790]: I0707 05:54:38.608569 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7248fff7-123a-4181-abad-edb494c7cc63-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-6wzbz\" (UID: \"7248fff7-123a-4181-abad-edb494c7cc63\") " pod="calico-system/goldmane-58fd7646b9-6wzbz" Jul 7 05:54:38.608820 kubelet[2790]: I0707 05:54:38.608595 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7248fff7-123a-4181-abad-edb494c7cc63-goldmane-key-pair\") pod \"goldmane-58fd7646b9-6wzbz\" (UID: \"7248fff7-123a-4181-abad-edb494c7cc63\") " pod="calico-system/goldmane-58fd7646b9-6wzbz" Jul 7 05:54:38.608820 kubelet[2790]: I0707 05:54:38.608617 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lr4\" (UniqueName: \"kubernetes.io/projected/7248fff7-123a-4181-abad-edb494c7cc63-kube-api-access-p5lr4\") pod \"goldmane-58fd7646b9-6wzbz\" (UID: \"7248fff7-123a-4181-abad-edb494c7cc63\") " pod="calico-system/goldmane-58fd7646b9-6wzbz" Jul 7 05:54:38.608820 kubelet[2790]: I0707 05:54:38.608648 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19b0658-01a5-4cf7-a7b1-1c9d8255c365-config-volume\") pod \"coredns-7c65d6cfc9-6l9kt\" (UID: \"a19b0658-01a5-4cf7-a7b1-1c9d8255c365\") " pod="kube-system/coredns-7c65d6cfc9-6l9kt" Jul 7 05:54:38.608820 kubelet[2790]: I0707 05:54:38.608675 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttpw\" (UniqueName: \"kubernetes.io/projected/cd6c95c2-86b8-43b3-90f9-cea8639bb989-kube-api-access-4ttpw\") pod \"calico-apiserver-5b7f8cc9dc-mfs7f\" (UID: \"cd6c95c2-86b8-43b3-90f9-cea8639bb989\") " pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-mfs7f" Jul 7 05:54:38.608820 kubelet[2790]: I0707 05:54:38.608702 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4hkc\" (UniqueName: \"kubernetes.io/projected/6bb49342-3954-416b-b804-0778891e9fae-kube-api-access-r4hkc\") pod \"calico-kube-controllers-684db96bcc-ct6sd\" (UID: \"6bb49342-3954-416b-b804-0778891e9fae\") " 
pod="calico-system/calico-kube-controllers-684db96bcc-ct6sd" Jul 7 05:54:38.608970 kubelet[2790]: I0707 05:54:38.608726 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7abeb6-f9e1-4877-bbe3-f7709971f8ca-config-volume\") pod \"coredns-7c65d6cfc9-2f6gw\" (UID: \"7a7abeb6-f9e1-4877-bbe3-f7709971f8ca\") " pod="kube-system/coredns-7c65d6cfc9-2f6gw" Jul 7 05:54:38.608970 kubelet[2790]: I0707 05:54:38.608750 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sknbj\" (UniqueName: \"kubernetes.io/projected/7a7abeb6-f9e1-4877-bbe3-f7709971f8ca-kube-api-access-sknbj\") pod \"coredns-7c65d6cfc9-2f6gw\" (UID: \"7a7abeb6-f9e1-4877-bbe3-f7709971f8ca\") " pod="kube-system/coredns-7c65d6cfc9-2f6gw" Jul 7 05:54:38.608970 kubelet[2790]: I0707 05:54:38.608823 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb49342-3954-416b-b804-0778891e9fae-tigera-ca-bundle\") pod \"calico-kube-controllers-684db96bcc-ct6sd\" (UID: \"6bb49342-3954-416b-b804-0778891e9fae\") " pod="calico-system/calico-kube-controllers-684db96bcc-ct6sd" Jul 7 05:54:38.608970 kubelet[2790]: I0707 05:54:38.608848 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7248fff7-123a-4181-abad-edb494c7cc63-config\") pod \"goldmane-58fd7646b9-6wzbz\" (UID: \"7248fff7-123a-4181-abad-edb494c7cc63\") " pod="calico-system/goldmane-58fd7646b9-6wzbz" Jul 7 05:54:38.608970 kubelet[2790]: I0707 05:54:38.608870 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pcbt\" (UniqueName: \"kubernetes.io/projected/a362285d-bb22-4ce9-b882-0bdc9d32c0cc-kube-api-access-8pcbt\") pod \"calico-apiserver-5b7f8cc9dc-drdvq\" (UID: \"a362285d-bb22-4ce9-b882-0bdc9d32c0cc\") " pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-drdvq" Jul 7 05:54:38.777202 containerd[1607]: time="2025-07-07T05:54:38.775784834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6l9kt,Uid:a19b0658-01a5-4cf7-a7b1-1c9d8255c365,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:38.784245 containerd[1607]: time="2025-07-07T05:54:38.784134724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2f6gw,Uid:7a7abeb6-f9e1-4877-bbe3-f7709971f8ca,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:38.786383 containerd[1607]: time="2025-07-07T05:54:38.786336421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684db96bcc-ct6sd,Uid:6bb49342-3954-416b-b804-0778891e9fae,Namespace:calico-system,Attempt:0,}" Jul 7 05:54:38.786859 containerd[1607]: time="2025-07-07T05:54:38.786813399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7f8cc9dc-drdvq,Uid:a362285d-bb22-4ce9-b882-0bdc9d32c0cc,Namespace:calico-apiserver,Attempt:0,}" Jul 7 05:54:38.793519 containerd[1607]: time="2025-07-07T05:54:38.793479288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7f8cc9dc-mfs7f,Uid:cd6c95c2-86b8-43b3-90f9-cea8639bb989,Namespace:calico-apiserver,Attempt:0,}" Jul 7 05:54:38.798904 containerd[1607]: time="2025-07-07T05:54:38.798838958Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-765dfbf5cc-6g5pp,Uid:3b1afbd3-2ba9-4eb3-b456-871a5337cbaa,Namespace:calico-system,Attempt:0,}" Jul 7 05:54:38.809303 containerd[1607]: time="2025-07-07T05:54:38.809231233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6wzbz,Uid:7248fff7-123a-4181-abad-edb494c7cc63,Namespace:calico-system,Attempt:0,}" Jul 7 05:54:38.960812 containerd[1607]: time="2025-07-07T05:54:38.960605252Z" level=error msg="Failed to destroy network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:38.967148 containerd[1607]: time="2025-07-07T05:54:38.966995034Z" level=error msg="encountered an error cleaning up failed sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:38.967148 containerd[1607]: time="2025-07-07T05:54:38.967062951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6l9kt,Uid:a19b0658-01a5-4cf7-a7b1-1c9d8255c365,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:38.968501 kubelet[2790]: E0707 05:54:38.967313 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:38.968501 kubelet[2790]: E0707 05:54:38.967378 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6l9kt" Jul 7 05:54:38.968501 kubelet[2790]: E0707 05:54:38.967399 2790 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6l9kt" Jul 7 05:54:38.968601 kubelet[2790]: E0707 05:54:38.967441 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6l9kt_kube-system(a19b0658-01a5-4cf7-a7b1-1c9d8255c365)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6l9kt_kube-system(a19b0658-01a5-4cf7-a7b1-1c9d8255c365)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6l9kt" podUID="a19b0658-01a5-4cf7-a7b1-1c9d8255c365" Jul 7 05:54:38.975756 containerd[1607]: time="2025-07-07T05:54:38.975661870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6jw4,Uid:3608e3b2-d1d8-433a-a22d-3188e5368e8c,Namespace:calico-system,Attempt:0,}" Jul 7 05:54:38.983818 containerd[1607]: time="2025-07-07T05:54:38.983762132Z" level=error msg="Failed to destroy network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:38.985191 containerd[1607]: time="2025-07-07T05:54:38.985030793Z" level=error msg="encountered an error cleaning up failed sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:38.985191 containerd[1607]: time="2025-07-07T05:54:38.985137908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7f8cc9dc-drdvq,Uid:a362285d-bb22-4ce9-b882-0bdc9d32c0cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:38.986945 kubelet[2790]: E0707 05:54:38.985375 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:38.986945 kubelet[2790]: E0707 05:54:38.985534 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-drdvq" Jul 7 05:54:38.986945 kubelet[2790]: E0707 05:54:38.985553 2790 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-drdvq" Jul 7 05:54:38.987062 kubelet[2790]: E0707 05:54:38.985611 2790 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b7f8cc9dc-drdvq_calico-apiserver(a362285d-bb22-4ce9-b882-0bdc9d32c0cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b7f8cc9dc-drdvq_calico-apiserver(a362285d-bb22-4ce9-b882-0bdc9d32c0cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-drdvq" podUID="a362285d-bb22-4ce9-b882-0bdc9d32c0cc" Jul 7 05:54:39.013644 containerd[1607]: time="2025-07-07T05:54:39.013556783Z" level=error msg="Failed to destroy network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.014445 containerd[1607]: time="2025-07-07T05:54:39.014400146Z" level=error msg="encountered an error cleaning up failed sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.014611 containerd[1607]: time="2025-07-07T05:54:39.014465423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2f6gw,Uid:7a7abeb6-f9e1-4877-bbe3-f7709971f8ca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.015224 kubelet[2790]: E0707 05:54:39.014753 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.015224 kubelet[2790]: E0707 05:54:39.014804 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2f6gw" Jul 7 05:54:39.015224 kubelet[2790]: E0707 05:54:39.014822 2790 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-2f6gw" Jul 7 05:54:39.015395 kubelet[2790]: E0707 05:54:39.014858 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2f6gw_kube-system(7a7abeb6-f9e1-4877-bbe3-f7709971f8ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2f6gw_kube-system(7a7abeb6-f9e1-4877-bbe3-f7709971f8ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2f6gw" podUID="7a7abeb6-f9e1-4877-bbe3-f7709971f8ca" Jul 7 05:54:39.075394 containerd[1607]: time="2025-07-07T05:54:39.075260585Z" level=error msg="Failed to destroy network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.075777 containerd[1607]: time="2025-07-07T05:54:39.075744044Z" level=error msg="encountered an error cleaning up failed sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.075926 containerd[1607]: time="2025-07-07T05:54:39.075874998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6jw4,Uid:3608e3b2-d1d8-433a-a22d-3188e5368e8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.077297 kubelet[2790]: E0707 05:54:39.076236 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.077297 kubelet[2790]: E0707 05:54:39.076294 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6jw4" Jul 7 05:54:39.077297 kubelet[2790]: E0707 05:54:39.076324 2790 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6jw4" Jul 7 05:54:39.077458 kubelet[2790]: E0707 05:54:39.076373 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s6jw4_calico-system(3608e3b2-d1d8-433a-a22d-3188e5368e8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s6jw4_calico-system(3608e3b2-d1d8-433a-a22d-3188e5368e8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6jw4" podUID="3608e3b2-d1d8-433a-a22d-3188e5368e8c" Jul 7 05:54:39.082045 containerd[1607]: time="2025-07-07T05:54:39.081931215Z" level=error msg="Failed to destroy network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.083538 containerd[1607]: time="2025-07-07T05:54:39.083393072Z" level=error msg="encountered an error cleaning up failed sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.083538 containerd[1607]: time="2025-07-07T05:54:39.083452669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-765dfbf5cc-6g5pp,Uid:3b1afbd3-2ba9-4eb3-b456-871a5337cbaa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.084154 kubelet[2790]: E0707 05:54:39.083803 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.084154 kubelet[2790]: E0707 05:54:39.083856 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-765dfbf5cc-6g5pp" Jul 7 05:54:39.084154 kubelet[2790]: E0707 05:54:39.083872 2790 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-765dfbf5cc-6g5pp" Jul 7 05:54:39.084297 kubelet[2790]: E0707 05:54:39.083916 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-765dfbf5cc-6g5pp_calico-system(3b1afbd3-2ba9-4eb3-b456-871a5337cbaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-765dfbf5cc-6g5pp_calico-system(3b1afbd3-2ba9-4eb3-b456-871a5337cbaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-765dfbf5cc-6g5pp" podUID="3b1afbd3-2ba9-4eb3-b456-871a5337cbaa" Jul 7 05:54:39.085122 containerd[1607]: time="2025-07-07T05:54:39.084517983Z" level=error msg="Failed to destroy network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.086779 containerd[1607]: time="2025-07-07T05:54:39.086344344Z" level=error msg="encountered an error cleaning up failed sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.086779 containerd[1607]: time="2025-07-07T05:54:39.086434780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6wzbz,Uid:7248fff7-123a-4181-abad-edb494c7cc63,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.086912 kubelet[2790]: E0707 05:54:39.086619 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.086912 kubelet[2790]: E0707 05:54:39.086670 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6wzbz" Jul 7 05:54:39.086912 kubelet[2790]: E0707 05:54:39.086688 2790 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6wzbz" Jul 7 05:54:39.086995 kubelet[2790]: E0707 05:54:39.086732 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-6wzbz_calico-system(7248fff7-123a-4181-abad-edb494c7cc63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-6wzbz_calico-system(7248fff7-123a-4181-abad-edb494c7cc63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-6wzbz" podUID="7248fff7-123a-4181-abad-edb494c7cc63" Jul 7 05:54:39.088393 containerd[1607]: time="2025-07-07T05:54:39.088130226Z" level=error msg="Failed to destroy network for sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.088571 containerd[1607]: time="2025-07-07T05:54:39.088532049Z" level=error msg="encountered an error cleaning up failed sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.088796 containerd[1607]: time="2025-07-07T05:54:39.088773358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684db96bcc-ct6sd,Uid:6bb49342-3954-416b-b804-0778891e9fae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.089880 kubelet[2790]: E0707 05:54:39.089800 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.090047 kubelet[2790]: E0707 05:54:39.089997 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684db96bcc-ct6sd" Jul 7 05:54:39.091191 kubelet[2790]: E0707 05:54:39.090036 2790 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684db96bcc-ct6sd" Jul 7 05:54:39.091337 kubelet[2790]: E0707 05:54:39.091172 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684db96bcc-ct6sd_calico-system(6bb49342-3954-416b-b804-0778891e9fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684db96bcc-ct6sd_calico-system(6bb49342-3954-416b-b804-0778891e9fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684db96bcc-ct6sd" podUID="6bb49342-3954-416b-b804-0778891e9fae" Jul 7 05:54:39.093924 containerd[1607]: time="2025-07-07T05:54:39.093755342Z" level=error msg="Failed to destroy network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.094317 containerd[1607]: time="2025-07-07T05:54:39.094282119Z" level=error msg="encountered an error cleaning up failed sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.094479 containerd[1607]: time="2025-07-07T05:54:39.094411073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7f8cc9dc-mfs7f,Uid:cd6c95c2-86b8-43b3-90f9-cea8639bb989,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.095252 kubelet[2790]: E0707 05:54:39.095115 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.095252 kubelet[2790]: E0707 05:54:39.095157 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-mfs7f" Jul 7 05:54:39.095252 kubelet[2790]: E0707 05:54:39.095185 2790 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-mfs7f" Jul 7 05:54:39.095408 kubelet[2790]: E0707 05:54:39.095230 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b7f8cc9dc-mfs7f_calico-apiserver(cd6c95c2-86b8-43b3-90f9-cea8639bb989)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b7f8cc9dc-mfs7f_calico-apiserver(cd6c95c2-86b8-43b3-90f9-cea8639bb989)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-mfs7f" podUID="cd6c95c2-86b8-43b3-90f9-cea8639bb989" Jul 7 05:54:39.149791 kubelet[2790]: I0707 05:54:39.149765 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:54:39.151366 containerd[1607]: time="2025-07-07T05:54:39.151249047Z" level=info msg="StopPodSandbox for \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\"" Jul 7 05:54:39.153100 containerd[1607]: time="2025-07-07T05:54:39.152172047Z" level=info msg="Ensure that sandbox 82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514 in task-service has been cleanup successfully" Jul 7 05:54:39.155419 kubelet[2790]: I0707 05:54:39.155372 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Jul 7 05:54:39.160195 containerd[1607]: time="2025-07-07T05:54:39.159719199Z" level=info msg="StopPodSandbox for \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\"" Jul 7 05:54:39.160195 containerd[1607]: time="2025-07-07T05:54:39.159903791Z" level=info msg="Ensure that sandbox 929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a in task-service has been cleanup successfully" Jul 7 05:54:39.162930 kubelet[2790]: I0707 05:54:39.162800 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:54:39.165155 containerd[1607]: time="2025-07-07T05:54:39.164889575Z" level=info msg="StopPodSandbox for \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\"" Jul 7 05:54:39.166352 containerd[1607]: time="2025-07-07T05:54:39.166214237Z" level=info msg="Ensure that sandbox b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1 in task-service has been cleanup successfully" Jul 7 05:54:39.166452 kubelet[2790]: I0707 05:54:39.166319 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:54:39.170711 containerd[1607]: time="2025-07-07T05:54:39.170664884Z" level=info msg="StopPodSandbox for \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\"" Jul 7 
05:54:39.170864 containerd[1607]: time="2025-07-07T05:54:39.170829917Z" level=info msg="Ensure that sandbox 40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590 in task-service has been cleanup successfully" Jul 7 05:54:39.174708 kubelet[2790]: I0707 05:54:39.174061 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:54:39.176369 containerd[1607]: time="2025-07-07T05:54:39.175995293Z" level=info msg="StopPodSandbox for \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\"" Jul 7 05:54:39.176369 containerd[1607]: time="2025-07-07T05:54:39.176241042Z" level=info msg="Ensure that sandbox 2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2 in task-service has been cleanup successfully" Jul 7 05:54:39.184581 kubelet[2790]: I0707 05:54:39.184556 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Jul 7 05:54:39.187698 containerd[1607]: time="2025-07-07T05:54:39.187668826Z" level=info msg="StopPodSandbox for \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\"" Jul 7 05:54:39.188309 containerd[1607]: time="2025-07-07T05:54:39.188186004Z" level=info msg="Ensure that sandbox 6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097 in task-service has been cleanup successfully" Jul 7 05:54:39.193905 kubelet[2790]: I0707 05:54:39.193397 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:54:39.194517 containerd[1607]: time="2025-07-07T05:54:39.194490610Z" level=info msg="StopPodSandbox for \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\"" Jul 7 05:54:39.196355 containerd[1607]: time="2025-07-07T05:54:39.196294092Z" level=info msg="Ensure that sandbox f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43 in task-service has been cleanup successfully" Jul 7 05:54:39.203153 kubelet[2790]: I0707 05:54:39.202271 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:54:39.203240 containerd[1607]: time="2025-07-07T05:54:39.202957083Z" level=info msg="StopPodSandbox for \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\"" Jul 7 05:54:39.203796 containerd[1607]: time="2025-07-07T05:54:39.203356265Z" level=info msg="Ensure that sandbox dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df in task-service has been cleanup successfully" Jul 7 05:54:39.225119 containerd[1607]: time="2025-07-07T05:54:39.222408918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 05:54:39.282049 containerd[1607]: time="2025-07-07T05:54:39.281402478Z" level=error msg="StopPodSandbox for \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\" failed" error="failed to destroy network for sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.283121 kubelet[2790]: E0707 05:54:39.283009 2790 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:54:39.284273 kubelet[2790]: E0707 05:54:39.283330 2790 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514"} Jul 7 05:54:39.284273 kubelet[2790]: E0707 05:54:39.283471 2790 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bb49342-3954-416b-b804-0778891e9fae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:54:39.284273 kubelet[2790]: E0707 05:54:39.284218 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6bb49342-3954-416b-b804-0778891e9fae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684db96bcc-ct6sd" podUID="6bb49342-3954-416b-b804-0778891e9fae" Jul 7 05:54:39.296669 containerd[1607]: time="2025-07-07T05:54:39.296628497Z" level=error msg="StopPodSandbox for \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\" failed" error="failed to destroy network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.297117 kubelet[2790]: E0707 05:54:39.296955 2790 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:54:39.297117 kubelet[2790]: E0707 05:54:39.297000 2790 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43"} Jul 7 05:54:39.297117 kubelet[2790]: E0707 05:54:39.297029 2790 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd6c95c2-86b8-43b3-90f9-cea8639bb989\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jul 7 05:54:39.297117 kubelet[2790]: E0707 05:54:39.297054 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd6c95c2-86b8-43b3-90f9-cea8639bb989\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-mfs7f" podUID="cd6c95c2-86b8-43b3-90f9-cea8639bb989" Jul 7 05:54:39.300824 containerd[1607]: time="2025-07-07T05:54:39.300784517Z" level=error msg="StopPodSandbox for \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\" failed" error="failed to destroy network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.301324 kubelet[2790]: E0707 05:54:39.301176 2790 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:54:39.301324 kubelet[2790]: E0707 05:54:39.301216 2790 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df"} Jul 7 05:54:39.301324 kubelet[2790]: E0707 05:54:39.301250 2790 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a19b0658-01a5-4cf7-a7b1-1c9d8255c365\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:54:39.301324 kubelet[2790]: E0707 05:54:39.301271 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a19b0658-01a5-4cf7-a7b1-1c9d8255c365\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6l9kt" podUID="a19b0658-01a5-4cf7-a7b1-1c9d8255c365" Jul 7 05:54:39.304188 containerd[1607]: time="2025-07-07T05:54:39.303875583Z" level=error msg="StopPodSandbox for \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\" failed" error="failed to destroy network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.304815 kubelet[2790]: E0707 05:54:39.304668 2790 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Jul 7 05:54:39.304815 kubelet[2790]: E0707 05:54:39.304730 2790 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"} Jul 7 05:54:39.304815 kubelet[2790]: E0707 05:54:39.304758 2790 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:54:39.304815 kubelet[2790]: E0707 05:54:39.304789 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-765dfbf5cc-6g5pp" podUID="3b1afbd3-2ba9-4eb3-b456-871a5337cbaa" Jul 7 05:54:39.316186 containerd[1607]: time="2025-07-07T05:54:39.314092339Z" level=error msg="StopPodSandbox for \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\" failed" error="failed to destroy network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.316336 kubelet[2790]: E0707 05:54:39.315154 2790 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:54:39.316336 kubelet[2790]: E0707 05:54:39.315333 2790 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590"} Jul 7 05:54:39.316336 kubelet[2790]: E0707 05:54:39.315412 2790 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a7abeb6-f9e1-4877-bbe3-f7709971f8ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:54:39.316336 kubelet[2790]: E0707 05:54:39.315459 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a7abeb6-f9e1-4877-bbe3-f7709971f8ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2f6gw" podUID="7a7abeb6-f9e1-4877-bbe3-f7709971f8ca" Jul 7 05:54:39.322333 containerd[1607]: time="2025-07-07T05:54:39.322279304Z" level=error msg="StopPodSandbox for \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\" failed" error="failed to destroy network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.322645 kubelet[2790]: E0707 05:54:39.322482 2790 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Jul 7 05:54:39.322645 kubelet[2790]: E0707 05:54:39.322517 2790 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"} Jul 7 05:54:39.322645 kubelet[2790]: E0707 05:54:39.322545 2790 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7248fff7-123a-4181-abad-edb494c7cc63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:54:39.322645 kubelet[2790]: E0707 05:54:39.322574 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7248fff7-123a-4181-abad-edb494c7cc63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-6wzbz" podUID="7248fff7-123a-4181-abad-edb494c7cc63" Jul 7 05:54:39.323558 containerd[1607]: time="2025-07-07T05:54:39.323509891Z" level=error msg="StopPodSandbox for \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\" failed" 
error="failed to destroy network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.323859 kubelet[2790]: E0707 05:54:39.323825 2790 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:54:39.323926 kubelet[2790]: E0707 05:54:39.323860 2790 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1"} Jul 7 05:54:39.323926 kubelet[2790]: E0707 05:54:39.323886 2790 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3608e3b2-d1d8-433a-a22d-3188e5368e8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:54:39.323926 kubelet[2790]: E0707 05:54:39.323907 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3608e3b2-d1d8-433a-a22d-3188e5368e8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6jw4" podUID="3608e3b2-d1d8-433a-a22d-3188e5368e8c" Jul 7 05:54:39.325182 containerd[1607]: time="2025-07-07T05:54:39.325020705Z" level=error msg="StopPodSandbox for \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\" failed" error="failed to destroy network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:54:39.325432 kubelet[2790]: E0707 05:54:39.325364 2790 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:54:39.325432 kubelet[2790]: E0707 05:54:39.325415 2790 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2"} Jul 7 05:54:39.325537 kubelet[2790]: E0707 05:54:39.325442 2790 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a362285d-bb22-4ce9-b882-0bdc9d32c0cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:54:39.325537 kubelet[2790]: E0707 05:54:39.325462 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a362285d-bb22-4ce9-b882-0bdc9d32c0cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-drdvq" podUID="a362285d-bb22-4ce9-b882-0bdc9d32c0cc" Jul 7 05:54:46.345413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123093207.mount: Deactivated successfully. Jul 7 05:54:46.380224 containerd[1607]: time="2025-07-07T05:54:46.380147052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:46.381372 containerd[1607]: time="2025-07-07T05:54:46.381300505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 05:54:46.382191 containerd[1607]: time="2025-07-07T05:54:46.382126645Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:46.385126 containerd[1607]: time="2025-07-07T05:54:46.384988819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:46.386082 containerd[1607]: time="2025-07-07T05:54:46.385656243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 7.163201926s" Jul 7 05:54:46.386082 containerd[1607]: time="2025-07-07T05:54:46.385691362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 05:54:46.404315 containerd[1607]: time="2025-07-07T05:54:46.404195370Z" level=info msg="CreateContainer within sandbox \"5555711cb5f7df81e9ccbde94691e3f445d8744dd36517b9d3dea7d0fbefa51f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 05:54:46.425252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4065963301.mount: Deactivated successfully. 
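Every sandbox failure above shares one root cause: the Calico CNI plugin resolves the local node name from /var/lib/calico/nodename, a file that only the calico/node container writes after it starts with /var/lib/calico/ bind-mounted from the host. Until that container runs (its image finishes pulling just above), every CNI ADD and DEL fails with the same "stat /var/lib/calico/nodename" error, and kubelet keeps retrying with "Error syncing pod, skipping". A minimal sketch of that lookup, assuming an illustrative detectNodename helper rather than Calico's actual source:

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is written by calico/node at startup; the CNI plugin only reads it.
const nodenameFile = "/var/lib/calico/nodename"

// detectNodename mirrors the failure mode in the log: if calico/node has not
// started (or has not mounted /var/lib/calico/), the read fails and the plugin
// surfaces the same hint seen in every RunPodSandbox error above.
func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI will register workloads under node:", name)
}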
Jul 7 05:54:46.428682 containerd[1607]: time="2025-07-07T05:54:46.428630880Z" level=info msg="CreateContainer within sandbox \"5555711cb5f7df81e9ccbde94691e3f445d8744dd36517b9d3dea7d0fbefa51f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"31fa8b5774b37876cf465f024cb1cb20d29fbbab498557dab0d4135bdd4b271e\""
Jul 7 05:54:46.429784 containerd[1607]: time="2025-07-07T05:54:46.429615897Z" level=info msg="StartContainer for \"31fa8b5774b37876cf465f024cb1cb20d29fbbab498557dab0d4135bdd4b271e\""
Jul 7 05:54:46.505438 containerd[1607]: time="2025-07-07T05:54:46.504557788Z" level=info msg="StartContainer for \"31fa8b5774b37876cf465f024cb1cb20d29fbbab498557dab0d4135bdd4b271e\" returns successfully"
Jul 7 05:54:46.655453 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 7 05:54:46.655562 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 7 05:54:46.802935 containerd[1607]: time="2025-07-07T05:54:46.802889666Z" level=info msg="StopPodSandbox for \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\""
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.905 [INFO][3966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.905 [INFO][3966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" iface="eth0" netns="/var/run/netns/cni-b4fad44e-deba-4626-3854-13216ed35420"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.907 [INFO][3966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" iface="eth0" netns="/var/run/netns/cni-b4fad44e-deba-4626-3854-13216ed35420"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.908 [INFO][3966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" iface="eth0" netns="/var/run/netns/cni-b4fad44e-deba-4626-3854-13216ed35420"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.908 [INFO][3966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.908 [INFO][3966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.961 [INFO][3974] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.961 [INFO][3974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.961 [INFO][3974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.975 [WARNING][3974] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.976 [INFO][3974] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.980 [INFO][3974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:54:46.991141 containerd[1607]: 2025-07-07 05:54:46.988 [INFO][3966] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:54:46.992004 containerd[1607]: time="2025-07-07T05:54:46.991930895Z" level=info msg="TearDown network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\" successfully"
Jul 7 05:54:46.992004 containerd[1607]: time="2025-07-07T05:54:46.991991253Z" level=info msg="StopPodSandbox for \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\" returns successfully"
Jul 7 05:54:47.167797 kubelet[2790]: I0707 05:54:47.166504 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksqdp\" (UniqueName: \"kubernetes.io/projected/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-kube-api-access-ksqdp\") pod \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\" (UID: \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\") "
Jul 7 05:54:47.167797 kubelet[2790]: I0707 05:54:47.166576 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-whisker-backend-key-pair\") pod \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\" (UID: \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\") "
Jul 7 05:54:47.167797 kubelet[2790]: I0707 05:54:47.166625 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-whisker-ca-bundle\") pod \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\" (UID: \"3b1afbd3-2ba9-4eb3-b456-871a5337cbaa\") "
Jul 7 05:54:47.167797 kubelet[2790]: I0707 05:54:47.167316 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3b1afbd3-2ba9-4eb3-b456-871a5337cbaa" (UID: "3b1afbd3-2ba9-4eb3-b456-871a5337cbaa"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 7 05:54:47.171053 kubelet[2790]: I0707 05:54:47.170997 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-kube-api-access-ksqdp" (OuterVolumeSpecName: "kube-api-access-ksqdp") pod "3b1afbd3-2ba9-4eb3-b456-871a5337cbaa" (UID: "3b1afbd3-2ba9-4eb3-b456-871a5337cbaa"). InnerVolumeSpecName "kube-api-access-ksqdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 05:54:47.171430 kubelet[2790]: I0707 05:54:47.171398 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3b1afbd3-2ba9-4eb3-b456-871a5337cbaa" (UID: "3b1afbd3-2ba9-4eb3-b456-871a5337cbaa"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 7 05:54:47.267438 kubelet[2790]: I0707 05:54:47.267306 2790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksqdp\" (UniqueName: \"kubernetes.io/projected/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-kube-api-access-ksqdp\") on node \"ci-4081-3-4-0-cfa01fffc0\" DevicePath \"\""
Jul 7 05:54:47.267438 kubelet[2790]: I0707 05:54:47.267351 2790 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-whisker-backend-key-pair\") on node \"ci-4081-3-4-0-cfa01fffc0\" DevicePath \"\""
Jul 7 05:54:47.267438 kubelet[2790]: I0707 05:54:47.267365 2790 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa-whisker-ca-bundle\") on node \"ci-4081-3-4-0-cfa01fffc0\" DevicePath \"\""
Jul 7 05:54:47.294968 kubelet[2790]: I0707 05:54:47.294828 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f57hl" podStartSLOduration=1.569821012 podStartE2EDuration="18.294807567s" podCreationTimestamp="2025-07-07 05:54:29 +0000 UTC" firstStartedPulling="2025-07-07 05:54:29.662279131 +0000 UTC m=+23.824302118" lastFinishedPulling="2025-07-07 05:54:46.387265606 +0000 UTC m=+40.549288673" observedRunningTime="2025-07-07 05:54:47.277735602 +0000 UTC m=+41.439758629" watchObservedRunningTime="2025-07-07 05:54:47.294807567 +0000 UTC m=+41.456830554"
Jul 7 05:54:47.354491 systemd[1]: run-netns-cni\x2db4fad44e\x2ddeba\x2d4626\x2d3854\x2d13216ed35420.mount: Deactivated successfully.
Jul 7 05:54:47.354641 systemd[1]: var-lib-kubelet-pods-3b1afbd3\x2d2ba9\x2d4eb3\x2db456\x2d871a5337cbaa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dksqdp.mount: Deactivated successfully.
Jul 7 05:54:47.354766 systemd[1]: var-lib-kubelet-pods-3b1afbd3\x2d2ba9\x2d4eb3\x2db456\x2d871a5337cbaa-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jul 7 05:54:47.470166 kubelet[2790]: I0707 05:54:47.469910 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz6h5\" (UniqueName: \"kubernetes.io/projected/bd007c35-2d58-4722-8c56-1e5aafc2a6cf-kube-api-access-vz6h5\") pod \"whisker-5b9fd46bcf-ncfpf\" (UID: \"bd007c35-2d58-4722-8c56-1e5aafc2a6cf\") " pod="calico-system/whisker-5b9fd46bcf-ncfpf"
Jul 7 05:54:47.470341 kubelet[2790]: I0707 05:54:47.470256 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd007c35-2d58-4722-8c56-1e5aafc2a6cf-whisker-backend-key-pair\") pod \"whisker-5b9fd46bcf-ncfpf\" (UID: \"bd007c35-2d58-4722-8c56-1e5aafc2a6cf\") " pod="calico-system/whisker-5b9fd46bcf-ncfpf"
Jul 7 05:54:47.470341 kubelet[2790]: I0707 05:54:47.470308 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd007c35-2d58-4722-8c56-1e5aafc2a6cf-whisker-ca-bundle\") pod \"whisker-5b9fd46bcf-ncfpf\" (UID: \"bd007c35-2d58-4722-8c56-1e5aafc2a6cf\") " pod="calico-system/whisker-5b9fd46bcf-ncfpf"
Jul 7 05:54:47.652838 containerd[1607]: time="2025-07-07T05:54:47.652750115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b9fd46bcf-ncfpf,Uid:bd007c35-2d58-4722-8c56-1e5aafc2a6cf,Namespace:calico-system,Attempt:0,}"
Jul 7 05:54:47.809113 systemd-networkd[1244]: calid43c2d02be5: Link UP
Jul 7 05:54:47.810233 systemd-networkd[1244]: calid43c2d02be5: Gained carrier
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.695 [INFO][4019] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.711 [INFO][4019] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0 whisker-5b9fd46bcf- calico-system bd007c35-2d58-4722-8c56-1e5aafc2a6cf 911 0 2025-07-07 05:54:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b9fd46bcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-0-cfa01fffc0 whisker-5b9fd46bcf-ncfpf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid43c2d02be5 [] [] }} ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Namespace="calico-system" Pod="whisker-5b9fd46bcf-ncfpf" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.711 [INFO][4019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Namespace="calico-system" Pod="whisker-5b9fd46bcf-ncfpf" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.744 [INFO][4030] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" HandleID="k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.745 [INFO][4030] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" HandleID="k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-0-cfa01fffc0", "pod":"whisker-5b9fd46bcf-ncfpf", "timestamp":"2025-07-07 05:54:47.744913837 +0000 UTC"}, Hostname:"ci-4081-3-4-0-cfa01fffc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.745 [INFO][4030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.745 [INFO][4030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.745 [INFO][4030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-cfa01fffc0'
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.759 [INFO][4030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.767 [INFO][4030] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.773 [INFO][4030] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.776 [INFO][4030] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.779 [INFO][4030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.779 [INFO][4030] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.781 [INFO][4030] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.786 [INFO][4030] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.795 [INFO][4030] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.65/26] block=192.168.9.64/26 handle="k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.795 [INFO][4030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.65/26] handle="k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.795 [INFO][4030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:54:47.830751 containerd[1607]: 2025-07-07 05:54:47.795 [INFO][4030] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.65/26] IPv6=[] ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" HandleID="k8s-pod-network.36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0"
Jul 7 05:54:47.831593 containerd[1607]: 2025-07-07 05:54:47.798 [INFO][4019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Namespace="calico-system" Pod="whisker-5b9fd46bcf-ncfpf" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0", GenerateName:"whisker-5b9fd46bcf-", Namespace:"calico-system", SelfLink:"", UID:"bd007c35-2d58-4722-8c56-1e5aafc2a6cf", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b9fd46bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"", Pod:"whisker-5b9fd46bcf-ncfpf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid43c2d02be5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 05:54:47.831593 containerd[1607]: 2025-07-07 05:54:47.798 [INFO][4019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.65/32] ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Namespace="calico-system" Pod="whisker-5b9fd46bcf-ncfpf" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0"
Jul 7 05:54:47.831593 containerd[1607]: 2025-07-07 05:54:47.798 [INFO][4019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid43c2d02be5 ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Namespace="calico-system" Pod="whisker-5b9fd46bcf-ncfpf" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0"
Jul 7 05:54:47.831593 containerd[1607]: 2025-07-07 05:54:47.810 [INFO][4019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Namespace="calico-system" Pod="whisker-5b9fd46bcf-ncfpf" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0"
Jul 7 05:54:47.831593 containerd[1607]: 2025-07-07 05:54:47.811 [INFO][4019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Namespace="calico-system" Pod="whisker-5b9fd46bcf-ncfpf" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0", GenerateName:"whisker-5b9fd46bcf-", Namespace:"calico-system", SelfLink:"", UID:"bd007c35-2d58-4722-8c56-1e5aafc2a6cf", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b9fd46bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f", Pod:"whisker-5b9fd46bcf-ncfpf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid43c2d02be5", MAC:"e6:a8:80:30:fd:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 05:54:47.831593 containerd[1607]: 2025-07-07 05:54:47.827 [INFO][4019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f" Namespace="calico-system" Pod="whisker-5b9fd46bcf-ncfpf" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--5b9fd46bcf--ncfpf-eth0"
Jul 7 05:54:47.852226 containerd[1607]: time="2025-07-07T05:54:47.851576336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:54:47.852226 containerd[1607]: time="2025-07-07T05:54:47.851671934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:54:47.852226 containerd[1607]: time="2025-07-07T05:54:47.851788572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:47.852226 containerd[1607]: time="2025-07-07T05:54:47.851983928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:47.915498 containerd[1607]: time="2025-07-07T05:54:47.915366288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b9fd46bcf-ncfpf,Uid:bd007c35-2d58-4722-8c56-1e5aafc2a6cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f\""
Jul 7 05:54:47.919810 containerd[1607]: time="2025-07-07T05:54:47.918426104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 7 05:54:47.973521 kubelet[2790]: I0707 05:54:47.973282 2790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1afbd3-2ba9-4eb3-b456-871a5337cbaa" path="/var/lib/kubelet/pods/3b1afbd3-2ba9-4eb3-b456-871a5337cbaa/volumes"
Jul 7 05:54:49.145164 kubelet[2790]: I0707 05:54:49.144544 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 05:54:49.151627 systemd-networkd[1244]: calid43c2d02be5: Gained IPv6LL
Jul 7 05:54:49.736265 containerd[1607]: time="2025-07-07T05:54:49.736198032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:54:49.740378 containerd[1607]: time="2025-07-07T05:54:49.738695992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614"
Jul 7 05:54:49.742121 containerd[1607]: time="2025-07-07T05:54:49.741816382Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:54:49.751952 containerd[1607]: time="2025-07-07T05:54:49.751759743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:54:49.755096 containerd[1607]: time="2025-07-07T05:54:49.753983787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.835516603s"
Jul 7 05:54:49.755096 containerd[1607]: time="2025-07-07T05:54:49.754036146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\""
Jul 7 05:54:49.759143 containerd[1607]: time="2025-07-07T05:54:49.759089385Z" level=info msg="CreateContainer within sandbox \"36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 7 05:54:49.796348 containerd[1607]: time="2025-07-07T05:54:49.796294589Z" level=info msg="CreateContainer within sandbox \"36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d6964f3f031c24ba1e2bbb3a434d66605e160928e930d2374546a0cbb58b9f98\""
Jul 7 05:54:49.798121 containerd[1607]: time="2025-07-07T05:54:49.798021522Z" level=info msg="StartContainer for \"d6964f3f031c24ba1e2bbb3a434d66605e160928e930d2374546a0cbb58b9f98\""
Jul 7 05:54:49.817398 kernel: bpftool[4259]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jul 7 05:54:49.884680 containerd[1607]: time="2025-07-07T05:54:49.884636335Z" level=info msg="StartContainer for \"d6964f3f031c24ba1e2bbb3a434d66605e160928e930d2374546a0cbb58b9f98\" returns successfully"
Jul 7 05:54:49.888091 containerd[1607]: time="2025-07-07T05:54:49.888027521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 7 05:54:49.972494 containerd[1607]: time="2025-07-07T05:54:49.972449929Z" level=info msg="StopPodSandbox for \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\""
Jul 7 05:54:49.973049 containerd[1607]: time="2025-07-07T05:54:49.972760124Z" level=info msg="StopPodSandbox for \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\""
Jul 7 05:54:50.117580 systemd-networkd[1244]: vxlan.calico: Link UP
Jul 7 05:54:50.117585 systemd-networkd[1244]: vxlan.calico: Gained carrier
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.122 [INFO][4313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.126 [INFO][4313] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" iface="eth0" netns="/var/run/netns/cni-da91929c-f906-1c6d-06df-81336f7aa558"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.128 [INFO][4313] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" iface="eth0" netns="/var/run/netns/cni-da91929c-f906-1c6d-06df-81336f7aa558"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.130 [INFO][4313] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" iface="eth0" netns="/var/run/netns/cni-da91929c-f906-1c6d-06df-81336f7aa558"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.130 [INFO][4313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.130 [INFO][4313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.200 [INFO][4350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.201 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.201 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.219 [WARNING][4350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.219 [INFO][4350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.221 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:54:50.229096 containerd[1607]: 2025-07-07 05:54:50.225 [INFO][4313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590"
Jul 7 05:54:50.232341 containerd[1607]: time="2025-07-07T05:54:50.229716533Z" level=info msg="TearDown network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\" successfully"
Jul 7 05:54:50.232341 containerd[1607]: time="2025-07-07T05:54:50.229838811Z" level=info msg="StopPodSandbox for \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\" returns successfully"
Jul 7 05:54:50.234393 systemd[1]: run-netns-cni\x2dda91929c\x2df906\x2d1c6d\x2d06df\x2d81336f7aa558.mount: Deactivated successfully.
Jul 7 05:54:50.246161 containerd[1607]: time="2025-07-07T05:54:50.245888391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2f6gw,Uid:7a7abeb6-f9e1-4877-bbe3-f7709971f8ca,Namespace:kube-system,Attempt:1,}"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.086 [INFO][4315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.086 [INFO][4315] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" iface="eth0" netns="/var/run/netns/cni-c54a03c0-0ab7-b69a-18b3-7027acedd5d3"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.088 [INFO][4315] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" iface="eth0" netns="/var/run/netns/cni-c54a03c0-0ab7-b69a-18b3-7027acedd5d3"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.090 [INFO][4315] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" iface="eth0" netns="/var/run/netns/cni-c54a03c0-0ab7-b69a-18b3-7027acedd5d3"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.090 [INFO][4315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.090 [INFO][4315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.224 [INFO][4339] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.225 [INFO][4339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.225 [INFO][4339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.241 [WARNING][4339] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.242 [INFO][4339] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.244 [INFO][4339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:54:50.265405 containerd[1607]: 2025-07-07 05:54:50.256 [INFO][4315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:54:50.269110 containerd[1607]: time="2025-07-07T05:54:50.267152899Z" level=info msg="TearDown network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\" successfully"
Jul 7 05:54:50.269110 containerd[1607]: time="2025-07-07T05:54:50.267190299Z" level=info msg="StopPodSandbox for \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\" returns successfully"
Jul 7 05:54:50.269110 containerd[1607]: time="2025-07-07T05:54:50.268099486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6wzbz,Uid:7248fff7-123a-4181-abad-edb494c7cc63,Namespace:calico-system,Attempt:1,}"
Jul 7 05:54:50.270998 systemd[1]: run-netns-cni\x2dc54a03c0\x2d0ab7\x2db69a\x2d18b3\x2d7027acedd5d3.mount: Deactivated successfully.
Jul 7 05:54:50.502438 systemd-networkd[1244]: calic0a62dad547: Link UP
Jul 7 05:54:50.506794 systemd-networkd[1244]: calic0a62dad547: Gained carrier
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.363 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0 goldmane-58fd7646b9- calico-system 7248fff7-123a-4181-abad-edb494c7cc63 937 0 2025-07-07 05:54:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-0-cfa01fffc0 goldmane-58fd7646b9-6wzbz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic0a62dad547 [] [] }} ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Namespace="calico-system" Pod="goldmane-58fd7646b9-6wzbz" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.363 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Namespace="calico-system" Pod="goldmane-58fd7646b9-6wzbz" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.423 [INFO][4402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" HandleID="k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.423 [INFO][4402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" HandleID="k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273da0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-0-cfa01fffc0", "pod":"goldmane-58fd7646b9-6wzbz", "timestamp":"2025-07-07 05:54:50.423392355 +0000 UTC"}, Hostname:"ci-4081-3-4-0-cfa01fffc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.423 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.423 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.423 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-cfa01fffc0'
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.441 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.449 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.461 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.466 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.470 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.470 [INFO][4402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.473 [INFO][4402] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.483 [INFO][4402] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.490 [INFO][4402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.66/26] block=192.168.9.64/26 handle="k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.491 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.66/26] handle="k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.492 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:54:50.537472 containerd[1607]: 2025-07-07 05:54:50.492 [INFO][4402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.66/26] IPv6=[] ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" HandleID="k8s-pod-network.98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.539116 containerd[1607]: 2025-07-07 05:54:50.497 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Namespace="calico-system" Pod="goldmane-58fd7646b9-6wzbz" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7248fff7-123a-4181-abad-edb494c7cc63", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"", Pod:"goldmane-58fd7646b9-6wzbz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0a62dad547", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 05:54:50.539116 containerd[1607]: 2025-07-07 05:54:50.497 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.66/32] ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Namespace="calico-system" Pod="goldmane-58fd7646b9-6wzbz" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.539116 containerd[1607]: 2025-07-07 05:54:50.497 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0a62dad547 ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Namespace="calico-system" Pod="goldmane-58fd7646b9-6wzbz" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.539116 containerd[1607]: 2025-07-07 05:54:50.510 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Namespace="calico-system" Pod="goldmane-58fd7646b9-6wzbz" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.539116 containerd[1607]: 2025-07-07 05:54:50.512 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Namespace="calico-system" Pod="goldmane-58fd7646b9-6wzbz" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7248fff7-123a-4181-abad-edb494c7cc63", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704", Pod:"goldmane-58fd7646b9-6wzbz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0a62dad547", MAC:"76:be:e5:90:89:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 05:54:50.539116 containerd[1607]: 2025-07-07 05:54:50.535 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704" Namespace="calico-system" Pod="goldmane-58fd7646b9-6wzbz" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:54:50.576985 containerd[1607]: time="2025-07-07T05:54:50.575056874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:54:50.577581 containerd[1607]: time="2025-07-07T05:54:50.576749571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:54:50.577581 containerd[1607]: time="2025-07-07T05:54:50.576773611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:50.580028 containerd[1607]: time="2025-07-07T05:54:50.579237297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:50.637024 systemd-networkd[1244]: cali8eb5483f1c6: Link UP
Jul 7 05:54:50.637283 systemd-networkd[1244]: cali8eb5483f1c6: Gained carrier
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.369 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0 coredns-7c65d6cfc9- kube-system 7a7abeb6-f9e1-4877-bbe3-f7709971f8ca 938 0 2025-07-07 05:54:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-0-cfa01fffc0 coredns-7c65d6cfc9-2f6gw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8eb5483f1c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2f6gw" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.370 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2f6gw" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.441 [INFO][4407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" HandleID="k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.441 [INFO][4407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" HandleID="k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-0-cfa01fffc0", "pod":"coredns-7c65d6cfc9-2f6gw", "timestamp":"2025-07-07 05:54:50.441015633 +0000 UTC"}, Hostname:"ci-4081-3-4-0-cfa01fffc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.441 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.492 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.492 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-cfa01fffc0'
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.540 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.551 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.569 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.573 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.579 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.579 [INFO][4407] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.584 [INFO][4407] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.599 [INFO][4407] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.615 [INFO][4407] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.67/26] block=192.168.9.64/26 handle="k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.615 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.67/26] handle="k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" host="ci-4081-3-4-0-cfa01fffc0"
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.615 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:54:50.662027 containerd[1607]: 2025-07-07 05:54:50.615 [INFO][4407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.67/26] IPv6=[] ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" HandleID="k8s-pod-network.cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.662626 containerd[1607]: 2025-07-07 05:54:50.625 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2f6gw" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7a7abeb6-f9e1-4877-bbe3-f7709971f8ca", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"", Pod:"coredns-7c65d6cfc9-2f6gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eb5483f1c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 05:54:50.662626 containerd[1607]: 2025-07-07 05:54:50.625 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.67/32] ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2f6gw" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.662626 containerd[1607]: 2025-07-07 05:54:50.625 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eb5483f1c6 ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2f6gw" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.662626 containerd[1607]: 2025-07-07 05:54:50.640 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2f6gw" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.662626 containerd[1607]: 2025-07-07 05:54:50.641 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2f6gw" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7a7abeb6-f9e1-4877-bbe3-f7709971f8ca", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3", Pod:"coredns-7c65d6cfc9-2f6gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eb5483f1c6", MAC:"3a:e5:75:27:21:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 05:54:50.662626 containerd[1607]: 2025-07-07 05:54:50.656 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2f6gw" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:54:50.675042 containerd[1607]: time="2025-07-07T05:54:50.674919384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6wzbz,Uid:7248fff7-123a-4181-abad-edb494c7cc63,Namespace:calico-system,Attempt:1,} returns sandbox id \"98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704\""
Jul 7 05:54:50.698487 containerd[1607]: time="2025-07-07T05:54:50.698310463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:54:50.698487 containerd[1607]: time="2025-07-07T05:54:50.698432941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:54:50.698821 containerd[1607]: time="2025-07-07T05:54:50.698718937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:50.699404 containerd[1607]: time="2025-07-07T05:54:50.699362768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:50.753281 containerd[1607]: time="2025-07-07T05:54:50.753213669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2f6gw,Uid:7a7abeb6-f9e1-4877-bbe3-f7709971f8ca,Namespace:kube-system,Attempt:1,} returns sandbox id \"cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3\""
Jul 7 05:54:50.760813 containerd[1607]: time="2025-07-07T05:54:50.760686607Z" level=info msg="CreateContainer within sandbox \"cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 05:54:50.777142 containerd[1607]: time="2025-07-07T05:54:50.777025343Z" level=info msg="CreateContainer within sandbox \"cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0b40a968b4583eea6b356b9a2291dee6ddd60bed8ddd68d3b0ae4e38c25adc5\""
Jul 7 05:54:50.779350 containerd[1607]: time="2025-07-07T05:54:50.779107114Z" level=info msg="StartContainer for \"d0b40a968b4583eea6b356b9a2291dee6ddd60bed8ddd68d3b0ae4e38c25adc5\""
Jul 7 05:54:50.856344 containerd[1607]: time="2025-07-07T05:54:50.856291855Z" level=info msg="StartContainer for \"d0b40a968b4583eea6b356b9a2291dee6ddd60bed8ddd68d3b0ae4e38c25adc5\" returns successfully"
Jul 7 05:54:51.308219 kubelet[2790]: I0707 05:54:51.307685 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2f6gw" podStartSLOduration=39.307645022 podStartE2EDuration="39.307645022s" podCreationTimestamp="2025-07-07 05:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:51.305437287 +0000 UTC m=+45.467460314" watchObservedRunningTime="2025-07-07 05:54:51.307645022 +0000 UTC m=+45.469668129"
Jul 7 05:54:51.903827 systemd-networkd[1244]: cali8eb5483f1c6: Gained IPv6LL
Jul 7 05:54:51.983039 containerd[1607]: time="2025-07-07T05:54:51.982612137Z" level=info msg="StopPodSandbox for \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\""
Jul 7 05:54:51.983039 containerd[1607]: time="2025-07-07T05:54:51.982856654Z" level=info msg="StopPodSandbox for \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\""
Jul 7 05:54:51.986427 containerd[1607]: time="2025-07-07T05:54:51.984929111Z" level=info msg="StopPodSandbox for \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\""
Jul 7 05:54:51.989648 containerd[1607]: time="2025-07-07T05:54:51.986606771Z" level=info msg="StopPodSandbox for \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\""
Jul 7 05:54:52.100802 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.086 [INFO][4633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.087 [INFO][4633] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" iface="eth0" netns="/var/run/netns/cni-b6ac6383-b1f4-bba8-f58b-b8b372867d0d"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.088 [INFO][4633] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" iface="eth0" netns="/var/run/netns/cni-b6ac6383-b1f4-bba8-f58b-b8b372867d0d"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.088 [INFO][4633] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" iface="eth0" netns="/var/run/netns/cni-b6ac6383-b1f4-bba8-f58b-b8b372867d0d"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.088 [INFO][4633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.088 [INFO][4633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.171 [INFO][4651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.173 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.174 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.193 [WARNING][4651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.193 [INFO][4651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0"
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.200 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:54:52.212668 containerd[1607]: 2025-07-07 05:54:52.202 [INFO][4633] cni-plugin/k8s.go 653: Teardown processing complete.
ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:54:52.213538 containerd[1607]: time="2025-07-07T05:54:52.213495458Z" level=info msg="TearDown network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\" successfully" Jul 7 05:54:52.217192 containerd[1607]: time="2025-07-07T05:54:52.217152944Z" level=info msg="StopPodSandbox for \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\" returns successfully" Jul 7 05:54:52.219757 systemd[1]: run-netns-cni\x2db6ac6383\x2db1f4\x2dbba8\x2df58b\x2db8b372867d0d.mount: Deactivated successfully. Jul 7 05:54:52.222347 containerd[1607]: time="2025-07-07T05:54:52.221446903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7f8cc9dc-drdvq,Uid:a362285d-bb22-4ce9-b882-0bdc9d32c0cc,Namespace:calico-apiserver,Attempt:1,}" Jul 7 05:54:52.224331 systemd-networkd[1244]: calic0a62dad547: Gained IPv6LL Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.147 [INFO][4627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.148 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" iface="eth0" netns="/var/run/netns/cni-4047b137-f2b1-ae94-3c60-bee5e4f5089b" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.148 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" iface="eth0" netns="/var/run/netns/cni-4047b137-f2b1-ae94-3c60-bee5e4f5089b" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.149 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" iface="eth0" netns="/var/run/netns/cni-4047b137-f2b1-ae94-3c60-bee5e4f5089b" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.149 [INFO][4627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.149 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.200 [INFO][4665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.203 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.203 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.223 [WARNING][4665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.223 [INFO][4665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.227 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:52.235715 containerd[1607]: 2025-07-07 05:54:52.229 [INFO][4627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:54:52.243089 containerd[1607]: time="2025-07-07T05:54:52.242638505Z" level=info msg="TearDown network for sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\" successfully" Jul 7 05:54:52.243089 containerd[1607]: time="2025-07-07T05:54:52.242676985Z" level=info msg="StopPodSandbox for \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\" returns successfully" Jul 7 05:54:52.247736 containerd[1607]: time="2025-07-07T05:54:52.247639858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684db96bcc-ct6sd,Uid:6bb49342-3954-416b-b804-0778891e9fae,Namespace:calico-system,Attempt:1,}" Jul 7 05:54:52.249139 systemd[1]: run-netns-cni\x2d4047b137\x2df2b1\x2dae94\x2d3c60\x2dbee5e4f5089b.mount: Deactivated successfully. Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.129 [INFO][4632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.129 [INFO][4632] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" iface="eth0" netns="/var/run/netns/cni-b47a4663-2e6f-521f-6f36-e8e25d2b1af7" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.129 [INFO][4632] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" iface="eth0" netns="/var/run/netns/cni-b47a4663-2e6f-521f-6f36-e8e25d2b1af7" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.130 [INFO][4632] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" iface="eth0" netns="/var/run/netns/cni-b47a4663-2e6f-521f-6f36-e8e25d2b1af7" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.130 [INFO][4632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.130 [INFO][4632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.204 [INFO][4657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.204 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.227 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.253 [WARNING][4657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.253 [INFO][4657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.255 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:52.262156 containerd[1607]: 2025-07-07 05:54:52.260 [INFO][4632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:54:52.263959 containerd[1607]: time="2025-07-07T05:54:52.262653078Z" level=info msg="TearDown network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\" successfully" Jul 7 05:54:52.263959 containerd[1607]: time="2025-07-07T05:54:52.262679798Z" level=info msg="StopPodSandbox for \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\" returns successfully" Jul 7 05:54:52.265107 containerd[1607]: time="2025-07-07T05:54:52.264351302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6jw4,Uid:3608e3b2-d1d8-433a-a22d-3188e5368e8c,Namespace:calico-system,Attempt:1,}" Jul 7 05:54:52.268153 systemd[1]: run-netns-cni\x2db47a4663\x2d2e6f\x2d521f\x2d6f36\x2de8e25d2b1af7.mount: Deactivated successfully. Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.141 [INFO][4618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.142 [INFO][4618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" iface="eth0" netns="/var/run/netns/cni-2cdff68a-1351-ba7c-f1e6-965edaf46d20" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.142 [INFO][4618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" iface="eth0" netns="/var/run/netns/cni-2cdff68a-1351-ba7c-f1e6-965edaf46d20" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.142 [INFO][4618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" iface="eth0" netns="/var/run/netns/cni-2cdff68a-1351-ba7c-f1e6-965edaf46d20" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.142 [INFO][4618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.142 [INFO][4618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.246 [INFO][4663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.247 [INFO][4663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.256 [INFO][4663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.277 [WARNING][4663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.278 [INFO][4663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.284 [INFO][4663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:52.300042 containerd[1607]: 2025-07-07 05:54:52.291 [INFO][4618] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:54:52.301409 containerd[1607]: time="2025-07-07T05:54:52.301201797Z" level=info msg="TearDown network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\" successfully" Jul 7 05:54:52.301409 containerd[1607]: time="2025-07-07T05:54:52.301230157Z" level=info msg="StopPodSandbox for \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\" returns successfully" Jul 7 05:54:52.302855 containerd[1607]: time="2025-07-07T05:54:52.302813262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7f8cc9dc-mfs7f,Uid:cd6c95c2-86b8-43b3-90f9-cea8639bb989,Namespace:calico-apiserver,Attempt:1,}" Jul 7 05:54:52.584120 systemd-networkd[1244]: cali3b722d70bcb: Link UP Jul 7 05:54:52.586517 systemd-networkd[1244]: cali3b722d70bcb: Gained carrier Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.418 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0 calico-apiserver-5b7f8cc9dc- calico-apiserver a362285d-bb22-4ce9-b882-0bdc9d32c0cc 964 0 2025-07-07 05:54:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b7f8cc9dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-0-cfa01fffc0 calico-apiserver-5b7f8cc9dc-drdvq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3b722d70bcb [] [] }} ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-drdvq" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.418 [INFO][4705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-drdvq" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.496 [INFO][4745] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" HandleID="k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.496 [INFO][4745] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" HandleID="k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-0-cfa01fffc0", "pod":"calico-apiserver-5b7f8cc9dc-drdvq", "timestamp":"2025-07-07 05:54:52.496012095 +0000 UTC"}, Hostname:"ci-4081-3-4-0-cfa01fffc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.498 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.499 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.499 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-cfa01fffc0' Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.517 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.524 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.532 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.536 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.543 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.543 [INFO][4745] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.547 [INFO][4745] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1 Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.555 [INFO][4745] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.567 [INFO][4745] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.68/26] block=192.168.9.64/26 handle="k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.567 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.68/26] handle="k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.567 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
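[Annotation] The run-netns-cni\x2d... mount units in the teardown messages above are systemd mount units for the per-sandbox network namespaces. systemd escapes a literal '-' in a unit name as \x2d (since '-' is reserved as the path separator), so each unit name round-trips to a path like /run/netns/cni-b6ac6383-.... A minimal sketch of that unescaping in Go; unescapeUnit is an illustrative helper, not a real systemd API, and full systemd escaping (mapping '/' to '-' and other bytes to \xXX) is deliberately ignored here:

    package main

    import (
        "fmt"
        "strings"
    )

    // unescapeUnit reverses the systemd unit-name escaping seen in the log,
    // where '-' inside a path component is written as \x2d. Simplified sketch:
    // real systemd escaping also handles '/' and arbitrary \xXX bytes.
    func unescapeUnit(s string) string {
        return strings.ReplaceAll(s, `\x2d`, "-")
    }

    func main() {
        unit := `run-netns-cni\x2db6ac6383\x2db1f4\x2dbba8\x2df58b\x2db8b372867d0d.mount`
        fmt.Println(unescapeUnit(unit))
        // run-netns-cni-b6ac6383-b1f4-bba8-f58b-b8b372867d0d.mount,
        // i.e. the mount for the netns path /run/netns/cni-b6ac6383-...
    }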
Jul 7 05:54:52.609108 containerd[1607]: 2025-07-07 05:54:52.567 [INFO][4745] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.68/26] IPv6=[] ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" HandleID="k8s-pod-network.b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:54:52.609688 containerd[1607]: 2025-07-07 05:54:52.573 [INFO][4705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-drdvq" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0", GenerateName:"calico-apiserver-5b7f8cc9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a362285d-bb22-4ce9-b882-0bdc9d32c0cc", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7f8cc9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"", Pod:"calico-apiserver-5b7f8cc9dc-drdvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b722d70bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:52.609688 containerd[1607]: 2025-07-07 05:54:52.575 [INFO][4705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.68/32] ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-drdvq" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:54:52.609688 containerd[1607]: 2025-07-07 05:54:52.576 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b722d70bcb ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-drdvq" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:54:52.609688 containerd[1607]: 2025-07-07 05:54:52.587 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-drdvq" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:54:52.609688 containerd[1607]: 2025-07-07 05:54:52.587 
[INFO][4705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-drdvq" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0", GenerateName:"calico-apiserver-5b7f8cc9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a362285d-bb22-4ce9-b882-0bdc9d32c0cc", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7f8cc9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1", Pod:"calico-apiserver-5b7f8cc9dc-drdvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b722d70bcb", MAC:"8e:54:2f:5f:49:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:52.609688 containerd[1607]: 2025-07-07 05:54:52.606 [INFO][4705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-drdvq" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:54:52.678396 containerd[1607]: time="2025-07-07T05:54:52.675947772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:52.678798 containerd[1607]: time="2025-07-07T05:54:52.677401798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:52.678798 containerd[1607]: time="2025-07-07T05:54:52.677452198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:52.678798 containerd[1607]: time="2025-07-07T05:54:52.678608227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:52.682884 systemd-networkd[1244]: calide74e78d246: Link UP Jul 7 05:54:52.683472 systemd-networkd[1244]: calide74e78d246: Gained carrier Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.448 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0 calico-apiserver-5b7f8cc9dc- calico-apiserver cd6c95c2-86b8-43b3-90f9-cea8639bb989 966 0 2025-07-07 05:54:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b7f8cc9dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-0-cfa01fffc0 calico-apiserver-5b7f8cc9dc-mfs7f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calide74e78d246 [] [] }} ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-mfs7f" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.449 [INFO][4716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-mfs7f" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.508 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" HandleID="k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.508 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" HandleID="k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-0-cfa01fffc0", "pod":"calico-apiserver-5b7f8cc9dc-mfs7f", "timestamp":"2025-07-07 05:54:52.508584818 +0000 UTC"}, Hostname:"ci-4081-3-4-0-cfa01fffc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.509 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.567 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
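[Annotation] Note how the concurrent CNI ADDs serialize on the "host-wide IPAM lock" named in the log: [4751] reported "About to acquire" at 52.509 but only "Acquired" at 52.567, the instant [4745] released it above. Holding one lock per host prevents parallel CNI invocations from racing on the same allocation block. A minimal sketch of the pattern using an exclusive flock; the lock path and function names are illustrative, not Calico's actual implementation:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    // withHostWideLock runs fn while holding an exclusive flock on lockPath.
    // Concurrent callers block inside unix.Flock until the holder releases it,
    // which is exactly the "About to acquire" -> "Acquired" gap in the log.
    func withHostWideLock(lockPath string, fn func() error) error {
        f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
            return err
        }
        defer unix.Flock(int(f.Fd()), unix.LOCK_UN)
        return fn()
    }

    func main() {
        _ = withHostWideLock("/tmp/ipam.lock", func() error {
            fmt.Println("assigning one address from block 192.168.9.64/26")
            return nil
        })
    }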
Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.567 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-cfa01fffc0' Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.619 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.627 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.634 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.637 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.641 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.641 [INFO][4751] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.646 [INFO][4751] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03 Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.657 [INFO][4751] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.666 [INFO][4751] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.69/26] block=192.168.9.64/26 handle="k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.666 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.69/26] handle="k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.666 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
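[Annotation] The allocation pattern is easy to verify by hand: this node holds an affinity for the block 192.168.9.64/26, which spans 192.168.9.64 through 192.168.9.127 (6 host bits, 64 addresses), and the pods on the node receive consecutive addresses from it: .67 for coredns earlier, .68 and .69 here, with .70 and .71 following below. A small check of that arithmetic:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The block this node holds an affinity for, per the log.
        _, block, _ := net.ParseCIDR("192.168.9.64/26")

        // A /26 leaves 32-26 = 6 host bits, so the block holds 2^6 = 64 addresses.
        ones, bits := block.Mask.Size()
        fmt.Printf("block %s: %d addresses\n", block, 1<<(bits-ones)) // 64

        // Every pod IP assigned in this log falls inside that block.
        for _, ip := range []string{
            "192.168.9.67", "192.168.9.68", "192.168.9.69",
            "192.168.9.70", "192.168.9.71",
        } {
            fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(net.ParseIP(ip)))
        }
    }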
Jul 7 05:54:52.725860 containerd[1607]: 2025-07-07 05:54:52.666 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.69/26] IPv6=[] ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" HandleID="k8s-pod-network.530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.726874 containerd[1607]: 2025-07-07 05:54:52.672 [INFO][4716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-mfs7f" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0", GenerateName:"calico-apiserver-5b7f8cc9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd6c95c2-86b8-43b3-90f9-cea8639bb989", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7f8cc9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"", Pod:"calico-apiserver-5b7f8cc9dc-mfs7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide74e78d246", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:52.726874 containerd[1607]: 2025-07-07 05:54:52.673 [INFO][4716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.69/32] ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-mfs7f" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.726874 containerd[1607]: 2025-07-07 05:54:52.673 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide74e78d246 ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-mfs7f" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.726874 containerd[1607]: 2025-07-07 05:54:52.684 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-mfs7f" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.726874 containerd[1607]: 2025-07-07 05:54:52.694 
[INFO][4716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-mfs7f" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0", GenerateName:"calico-apiserver-5b7f8cc9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd6c95c2-86b8-43b3-90f9-cea8639bb989", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7f8cc9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03", Pod:"calico-apiserver-5b7f8cc9dc-mfs7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide74e78d246", MAC:"b6:71:bb:52:0a:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:52.726874 containerd[1607]: 2025-07-07 05:54:52.707 [INFO][4716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03" Namespace="calico-apiserver" Pod="calico-apiserver-5b7f8cc9dc-mfs7f" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:54:52.797884 systemd-networkd[1244]: cali21a54a7a706: Link UP Jul 7 05:54:52.798915 systemd-networkd[1244]: cali21a54a7a706: Gained carrier Jul 7 05:54:52.805879 containerd[1607]: time="2025-07-07T05:54:52.801231800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:52.805879 containerd[1607]: time="2025-07-07T05:54:52.804412330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:52.805879 containerd[1607]: time="2025-07-07T05:54:52.804427050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:52.805879 containerd[1607]: time="2025-07-07T05:54:52.804529449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.397 [INFO][4686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0 csi-node-driver- calico-system 3608e3b2-d1d8-433a-a22d-3188e5368e8c 965 0 2025-07-07 05:54:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-0-cfa01fffc0 csi-node-driver-s6jw4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali21a54a7a706 [] [] }} ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Namespace="calico-system" Pod="csi-node-driver-s6jw4" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.397 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Namespace="calico-system" Pod="csi-node-driver-s6jw4" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.510 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" HandleID="k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.510 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" HandleID="k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031bb20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-0-cfa01fffc0", "pod":"csi-node-driver-s6jw4", "timestamp":"2025-07-07 05:54:52.50937033 +0000 UTC"}, Hostname:"ci-4081-3-4-0-cfa01fffc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.510 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.666 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
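[Annotation] Interleaved with the CNI output, systemd-networkd tracks each new cali* veth as it appears: "Link UP" and "Gained carrier" when the host side of the veth comes up (cali21a54a7a706 just above), then "Gained IPv6LL" once the interface's IPv6 link-local address is usable, as logged earlier for cali8eb5483f1c6 and vxlan.calico. "IPv6LL" is the standard fe80::/10 link-local address; the classification is exactly what the standard library reports (the sample address below is an illustrative EUI-64-style value, not taken from the log):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ip := net.ParseIP("fe80::38e5:75ff:fe27:2136") // illustrative link-local address
        fmt.Println(ip.IsLinkLocalUnicast())           // true: inside fe80::/10
    }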
Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.666 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-cfa01fffc0' Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.720 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.729 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.735 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.741 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.745 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.745 [INFO][4733] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.752 [INFO][4733] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.762 [INFO][4733] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.778 [INFO][4733] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.70/26] block=192.168.9.64/26 handle="k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.778 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.70/26] handle="k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.779 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
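[Annotation] The Workload and HandleID strings throughout these messages (for example "ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" just above) appear to use a simple name mangling: literal hyphens inside each field are doubled so that single hyphens can serve as separators between node, orchestrator, pod, and interface. A sketch inferred from the log output itself, not Calico's actual library code:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeField doubles literal hyphens so single hyphens can act as field
    // separators in the WorkloadEndpoint name. Inferred from the log format.
    func escapeField(s string) string {
        return strings.ReplaceAll(s, "-", "--")
    }

    func main() {
        node, orch, pod, iface := "ci-4081-3-4-0-cfa01fffc0", "k8s", "csi-node-driver-s6jw4", "eth0"
        name := strings.Join([]string{escapeField(node), orch, escapeField(pod), iface}, "-")
        fmt.Println(name)
        // ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0
        // which matches the Workload strings in the log exactly.
    }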
Jul 7 05:54:52.821689 containerd[1607]: 2025-07-07 05:54:52.779 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.70/26] IPv6=[] ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" HandleID="k8s-pod-network.d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.822252 containerd[1607]: 2025-07-07 05:54:52.791 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Namespace="calico-system" Pod="csi-node-driver-s6jw4" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3608e3b2-d1d8-433a-a22d-3188e5368e8c", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"", Pod:"csi-node-driver-s6jw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21a54a7a706", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:52.822252 containerd[1607]: 2025-07-07 05:54:52.791 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.70/32] ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Namespace="calico-system" Pod="csi-node-driver-s6jw4" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.822252 containerd[1607]: 2025-07-07 05:54:52.791 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21a54a7a706 ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Namespace="calico-system" Pod="csi-node-driver-s6jw4" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.822252 containerd[1607]: 2025-07-07 05:54:52.799 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Namespace="calico-system" Pod="csi-node-driver-s6jw4" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.822252 containerd[1607]: 2025-07-07 05:54:52.799 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Namespace="calico-system" Pod="csi-node-driver-s6jw4" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3608e3b2-d1d8-433a-a22d-3188e5368e8c", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f", Pod:"csi-node-driver-s6jw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21a54a7a706", MAC:"2e:83:04:70:37:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:52.822252 containerd[1607]: 2025-07-07 05:54:52.817 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f" Namespace="calico-system" Pod="csi-node-driver-s6jw4" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:54:52.935409 systemd-networkd[1244]: calid86b477a19c: Link UP Jul 7 05:54:52.937141 systemd-networkd[1244]: calid86b477a19c: Gained carrier Jul 7 05:54:52.945289 containerd[1607]: time="2025-07-07T05:54:52.941175051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:52.945289 containerd[1607]: time="2025-07-07T05:54:52.941668207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:52.946416 containerd[1607]: time="2025-07-07T05:54:52.941690486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:52.946416 containerd[1607]: time="2025-07-07T05:54:52.945703049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:52.947585 containerd[1607]: time="2025-07-07T05:54:52.947332434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7f8cc9dc-drdvq,Uid:a362285d-bb22-4ce9-b882-0bdc9d32c0cc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1\"" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.413 [INFO][4695] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0 calico-kube-controllers-684db96bcc- calico-system 6bb49342-3954-416b-b804-0778891e9fae 967 0 2025-07-07 05:54:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:684db96bcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-0-cfa01fffc0 calico-kube-controllers-684db96bcc-ct6sd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid86b477a19c [] [] }} ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Namespace="calico-system" Pod="calico-kube-controllers-684db96bcc-ct6sd" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.414 [INFO][4695] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Namespace="calico-system" Pod="calico-kube-controllers-684db96bcc-ct6sd" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.526 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" HandleID="k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.527 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" HandleID="k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-0-cfa01fffc0", "pod":"calico-kube-controllers-684db96bcc-ct6sd", "timestamp":"2025-07-07 05:54:52.526874366 +0000 UTC"}, Hostname:"ci-4081-3-4-0-cfa01fffc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.527 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.779 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
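[Annotation] One detail worth decoding from the coredns WorkloadEndpoint dumps near the top of this passage: the port numbers are printed as Go hex literals. Port:0x35 is 53 (DNS over UDP and TCP) and Port:0x23c1 is 9153, CoreDNS's default Prometheus metrics port; the numorstring.Protocol values with StrVal set simply record that the protocol was specified as a string. The conversion, spelled out:

    package main

    import "fmt"

    func main() {
        // Ports as they appear in the WorkloadEndpoint dump, as hex literals.
        ports := map[string]uint16{
            "dns":     0x35,   // 53
            "dns-tcp": 0x35,   // 53
            "metrics": 0x23c1, // 9153, CoreDNS's default metrics port
        }
        for name, p := range ports {
            fmt.Printf("%-7s -> %d\n", name, p)
        }
    }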
Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.780 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-cfa01fffc0' Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.823 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.842 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.857 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.864 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.871 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.872 [INFO][4739] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.876 [INFO][4739] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8 Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.891 [INFO][4739] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.902 [INFO][4739] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.71/26] block=192.168.9.64/26 handle="k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.902 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.71/26] handle="k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.902 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
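The [INFO][4739] sequence above is Calico's block-affinity IPAM in miniature: under the host-wide lock, the plugin confirms this node's affinity for the 192.168.9.64/26 block, loads the block, claims one free address under a fresh handle, and writes the block back before unlocking. A rough Go sketch of the claim step, using a toy in-memory block rather than Calico's real datastore types:

```go
package main

import (
	"fmt"
	"net"
)

// block is a toy stand-in for a Calico affinity block: a /26 whose
// addresses are handed out to workloads on the node that owns it.
type block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> allocation handle
}

// assign claims the first free address, mirroring the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *block) assign(handle string) (net.IP, error) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.allocated[ip.String()]; !taken {
			b.allocated[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// next returns the address after ip (enough for one IPv4 /26).
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		if out[i]++; out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.9.64/26")
	b := &block{cidr: cidr, allocated: map[string]string{}}
	for i := 64; i <= 70; i++ { // .64-.70 went to earlier endpoints in this log
		b.allocated[fmt.Sprintf("192.168.9.%d", i)] = "earlier-pod"
	}
	ip, _ := b.assign("demo-handle")
	fmt.Println(ip) // 192.168.9.71, the address claimed for the controllers pod above
}
```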
Jul 7 05:54:52.971614 containerd[1607]: 2025-07-07 05:54:52.902 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.71/26] IPv6=[] ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" HandleID="k8s-pod-network.ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.972177 containerd[1607]: 2025-07-07 05:54:52.914 [INFO][4695] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Namespace="calico-system" Pod="calico-kube-controllers-684db96bcc-ct6sd" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0", GenerateName:"calico-kube-controllers-684db96bcc-", Namespace:"calico-system", SelfLink:"", UID:"6bb49342-3954-416b-b804-0778891e9fae", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684db96bcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"", Pod:"calico-kube-controllers-684db96bcc-ct6sd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid86b477a19c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:52.972177 containerd[1607]: 2025-07-07 05:54:52.914 [INFO][4695] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.71/32] ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Namespace="calico-system" Pod="calico-kube-controllers-684db96bcc-ct6sd" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.972177 containerd[1607]: 2025-07-07 05:54:52.914 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid86b477a19c ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Namespace="calico-system" Pod="calico-kube-controllers-684db96bcc-ct6sd" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.972177 containerd[1607]: 2025-07-07 05:54:52.941 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Namespace="calico-system" Pod="calico-kube-controllers-684db96bcc-ct6sd" 
WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.972177 containerd[1607]: 2025-07-07 05:54:52.944 [INFO][4695] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Namespace="calico-system" Pod="calico-kube-controllers-684db96bcc-ct6sd" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0", GenerateName:"calico-kube-controllers-684db96bcc-", Namespace:"calico-system", SelfLink:"", UID:"6bb49342-3954-416b-b804-0778891e9fae", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684db96bcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8", Pod:"calico-kube-controllers-684db96bcc-ct6sd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid86b477a19c", MAC:"96:6a:ac:4a:00:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:52.972177 containerd[1607]: 2025-07-07 05:54:52.960 [INFO][4695] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8" Namespace="calico-system" Pod="calico-kube-controllers-684db96bcc-ct6sd" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:54:52.972177 containerd[1607]: time="2025-07-07T05:54:52.971499447Z" level=info msg="StopPodSandbox for \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\"" Jul 7 05:54:52.993786 containerd[1607]: time="2025-07-07T05:54:52.993638880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7f8cc9dc-mfs7f,Uid:cd6c95c2-86b8-43b3-90f9-cea8639bb989,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03\"" Jul 7 05:54:53.015433 containerd[1607]: time="2025-07-07T05:54:53.014629832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:53.015433 containerd[1607]: time="2025-07-07T05:54:53.014674712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:53.015433 containerd[1607]: time="2025-07-07T05:54:53.014697672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:53.015433 containerd[1607]: time="2025-07-07T05:54:53.014784151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:53.078035 containerd[1607]: time="2025-07-07T05:54:53.077788573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6jw4,Uid:3608e3b2-d1d8-433a-a22d-3188e5368e8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f\"" Jul 7 05:54:53.147409 containerd[1607]: time="2025-07-07T05:54:53.147313827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684db96bcc-ct6sd,Uid:6bb49342-3954-416b-b804-0778891e9fae,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8\"" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.108 [INFO][4916] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.108 [INFO][4916] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" iface="eth0" netns="/var/run/netns/cni-eb58a69f-5afa-29ba-ed0b-04d1b68addb8" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.110 [INFO][4916] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" iface="eth0" netns="/var/run/netns/cni-eb58a69f-5afa-29ba-ed0b-04d1b68addb8" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.111 [INFO][4916] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" iface="eth0" netns="/var/run/netns/cni-eb58a69f-5afa-29ba-ed0b-04d1b68addb8" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.111 [INFO][4916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.111 [INFO][4916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.158 [INFO][4975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.158 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.158 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.174 [WARNING][4975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.174 [INFO][4975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.180 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:53.191059 containerd[1607]: 2025-07-07 05:54:53.187 [INFO][4916] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:54:53.193653 containerd[1607]: time="2025-07-07T05:54:53.193359973Z" level=info msg="TearDown network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\" successfully" Jul 7 05:54:53.193653 containerd[1607]: time="2025-07-07T05:54:53.193399052Z" level=info msg="StopPodSandbox for \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\" returns successfully" Jul 7 05:54:53.194686 containerd[1607]: time="2025-07-07T05:54:53.194626563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6l9kt,Uid:a19b0658-01a5-4cf7-a7b1-1c9d8255c365,Namespace:kube-system,Attempt:1,}" Jul 7 05:54:53.227420 systemd[1]: run-netns-cni\x2d2cdff68a\x2d1351\x2dba7c\x2df1e6\x2d965edaf46d20.mount: Deactivated successfully. Jul 7 05:54:53.227931 systemd[1]: run-netns-cni\x2deb58a69f\x2d5afa\x2d29ba\x2ded0b\x2d04d1b68addb8.mount: Deactivated successfully. 
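The two systemd mount-unit names above look mangled because systemd derives unit names from paths: '/' becomes '-', and a literal '-' in the path is escaped as \x2d, so run-netns-cni\x2deb58a69f\x2d… is the unit for /run/netns/cni-eb58a69f-…, the very netns the teardown above just entered (/var/run is conventionally a symlink to /run). A small decoder covering just those two rules, not the full systemd-escape algorithm:

```go
package main

import (
	"fmt"
	"strings"
)

// unitToPath reverses the two mangling rules visible in the
// "run-netns-cni\x2d….mount" lines: "-" separates path components
// and \x2d is an escaped literal hyphen. Real systemd escaping has
// more rules; this decoder only needs these two.
func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	name = strings.ReplaceAll(name, `\x2d`, "\x00") // protect escaped hyphens
	name = strings.ReplaceAll(name, "-", "/")       // "-" was a path separator
	name = strings.ReplaceAll(name, "\x00", "-")
	return "/" + name
}

func main() {
	fmt.Println(unitToPath(`run-netns-cni\x2deb58a69f\x2d5afa\x2d29ba\x2ded0b\x2d04d1b68addb8.mount`))
	// Output: /run/netns/cni-eb58a69f-5afa-29ba-ed0b-04d1b68addb8
}
```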
Jul 7 05:54:53.380273 systemd-networkd[1244]: cali4db48ea0fbc: Link UP Jul 7 05:54:53.380519 systemd-networkd[1244]: cali4db48ea0fbc: Gained carrier Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.266 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0 coredns-7c65d6cfc9- kube-system a19b0658-01a5-4cf7-a7b1-1c9d8255c365 985 0 2025-07-07 05:54:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-0-cfa01fffc0 coredns-7c65d6cfc9-6l9kt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4db48ea0fbc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6l9kt" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.266 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6l9kt" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.303 [INFO][5001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" HandleID="k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.303 [INFO][5001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" HandleID="k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d32e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-0-cfa01fffc0", "pod":"coredns-7c65d6cfc9-6l9kt", "timestamp":"2025-07-07 05:54:53.30369245 +0000 UTC"}, Hostname:"ci-4081-3-4-0-cfa01fffc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.303 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.303 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
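A readability note for the endpoint dumps in this episode: the plugin.go 340 line above prints the coredns ports in decimal ({dns UDP 53 …} {metrics TCP 9153 …}), while the Go-syntax dumps further down render the same values as hex literals (Port:0x35, Port:0x23c1). They are the same ports:

```go
package main

import "fmt"

func main() {
	// The CoreDNS ports as the Go-syntax dumps print them, decoded.
	for _, p := range []int{0x35, 0x23c1} {
		fmt.Printf("%#x = %d\n", p, p) // 0x35 = 53 (dns, dns-tcp); 0x23c1 = 9153 (metrics)
	}
}
```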
Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.303 [INFO][5001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-cfa01fffc0' Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.320 [INFO][5001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.327 [INFO][5001] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.333 [INFO][5001] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.336 [INFO][5001] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.342 [INFO][5001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.342 [INFO][5001] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.347 [INFO][5001] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044 Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.354 [INFO][5001] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.368 [INFO][5001] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.72/26] block=192.168.9.64/26 handle="k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.368 [INFO][5001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.72/26] handle="k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" host="ci-4081-3-4-0-cfa01fffc0" Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.368 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
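Every ADD and DEL in this log brackets its IPAM work with "About to acquire / Acquired / Released host-wide IPAM lock". Each CNI invocation is a separate short-lived process, so the lock has to serialize across processes, not goroutines; the log does not reveal how Calico implements it, but a file lock is the natural mechanism. A sketch under that assumption (the lock path is invented for the example):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// withHostLock runs fn while holding an exclusive flock, printing the
// same three milestones the IPAM plugin logs. Illustrative only: the
// real plugin's lock location and mechanics are not shown in the log.
func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	fmt.Println("About to acquire host-wide IPAM lock.")
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		unix.Flock(int(f.Fd()), unix.LOCK_UN)
		fmt.Println("Released host-wide IPAM lock.")
	}()
	return fn()
}

func main() {
	_ = withHostLock("/tmp/demo-ipam.lock", func() error {
		return nil // block reads, writes, and address claims happen here, one caller at a time
	})
}
```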
Jul 7 05:54:53.407286 containerd[1607]: 2025-07-07 05:54:53.368 [INFO][5001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.72/26] IPv6=[] ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" HandleID="k8s-pod-network.66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.409886 containerd[1607]: 2025-07-07 05:54:53.373 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6l9kt" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a19b0658-01a5-4cf7-a7b1-1c9d8255c365", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"", Pod:"coredns-7c65d6cfc9-6l9kt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4db48ea0fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:53.409886 containerd[1607]: 2025-07-07 05:54:53.373 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.72/32] ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6l9kt" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.409886 containerd[1607]: 2025-07-07 05:54:53.373 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4db48ea0fbc ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6l9kt" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.409886 containerd[1607]: 2025-07-07 05:54:53.380 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-6l9kt" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.409886 containerd[1607]: 2025-07-07 05:54:53.381 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6l9kt" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a19b0658-01a5-4cf7-a7b1-1c9d8255c365", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044", Pod:"coredns-7c65d6cfc9-6l9kt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4db48ea0fbc", MAC:"5a:06:d4:af:91:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:53.409886 containerd[1607]: 2025-07-07 05:54:53.400 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6l9kt" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:54:53.441972 containerd[1607]: time="2025-07-07T05:54:53.441253490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:53.441972 containerd[1607]: time="2025-07-07T05:54:53.441324689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:53.441972 containerd[1607]: time="2025-07-07T05:54:53.441340809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:53.441972 containerd[1607]: time="2025-07-07T05:54:53.441501608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:53.477821 systemd[1]: run-containerd-runc-k8s.io-66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044-runc.1vmMRr.mount: Deactivated successfully. Jul 7 05:54:53.519667 containerd[1607]: time="2025-07-07T05:54:53.519622240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6l9kt,Uid:a19b0658-01a5-4cf7-a7b1-1c9d8255c365,Namespace:kube-system,Attempt:1,} returns sandbox id \"66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044\"" Jul 7 05:54:53.526429 containerd[1607]: time="2025-07-07T05:54:53.526100473Z" level=info msg="CreateContainer within sandbox \"66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:54:53.552787 containerd[1607]: time="2025-07-07T05:54:53.552665160Z" level=info msg="CreateContainer within sandbox \"66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41f67b3bb0eb1814fe780f7751dfa842f638b8cc5452eb8fddfdb43910ed6415\"" Jul 7 05:54:53.554592 containerd[1607]: time="2025-07-07T05:54:53.553862791Z" level=info msg="StartContainer for \"41f67b3bb0eb1814fe780f7751dfa842f638b8cc5452eb8fddfdb43910ed6415\"" Jul 7 05:54:53.638886 containerd[1607]: time="2025-07-07T05:54:53.638834173Z" level=info msg="StartContainer for \"41f67b3bb0eb1814fe780f7751dfa842f638b8cc5452eb8fddfdb43910ed6415\" returns successfully" Jul 7 05:54:53.659730 containerd[1607]: time="2025-07-07T05:54:53.659667262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:53.662513 containerd[1607]: time="2025-07-07T05:54:53.662460401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 05:54:53.663407 containerd[1607]: time="2025-07-07T05:54:53.663363235Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:53.667198 containerd[1607]: time="2025-07-07T05:54:53.667135287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:53.669284 containerd[1607]: time="2025-07-07T05:54:53.668017121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 3.779919441s" Jul 7 05:54:53.669284 containerd[1607]: time="2025-07-07T05:54:53.668052841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 05:54:53.671776 containerd[1607]: time="2025-07-07T05:54:53.671378777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 05:54:53.674290 containerd[1607]: time="2025-07-07T05:54:53.674248756Z" level=info msg="CreateContainer within sandbox 
\"36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 05:54:53.695120 containerd[1607]: time="2025-07-07T05:54:53.694894446Z" level=info msg="CreateContainer within sandbox \"36a58948e1016c03993c5f0f943df71bd116238ac7cbafb7f4e1343a0751996f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9490fd09c4469bb8111690e545e51f38656b4c92490234f7b0af2a287157f4ef\"" Jul 7 05:54:53.697809 containerd[1607]: time="2025-07-07T05:54:53.697754825Z" level=info msg="StartContainer for \"9490fd09c4469bb8111690e545e51f38656b4c92490234f7b0af2a287157f4ef\"" Jul 7 05:54:53.759132 containerd[1607]: time="2025-07-07T05:54:53.758753021Z" level=info msg="StartContainer for \"9490fd09c4469bb8111690e545e51f38656b4c92490234f7b0af2a287157f4ef\" returns successfully" Jul 7 05:54:54.207611 systemd-networkd[1244]: cali3b722d70bcb: Gained IPv6LL Jul 7 05:54:54.218993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884643700.mount: Deactivated successfully. Jul 7 05:54:54.362494 kubelet[2790]: I0707 05:54:54.362408 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6l9kt" podStartSLOduration=42.36238516 podStartE2EDuration="42.36238516s" podCreationTimestamp="2025-07-07 05:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:54.359220856 +0000 UTC m=+48.521243923" watchObservedRunningTime="2025-07-07 05:54:54.36238516 +0000 UTC m=+48.524408267" Jul 7 05:54:54.382119 kubelet[2790]: I0707 05:54:54.378809 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b9fd46bcf-ncfpf" podStartSLOduration=1.625384737 podStartE2EDuration="7.378781714s" podCreationTimestamp="2025-07-07 05:54:47 +0000 UTC" firstStartedPulling="2025-07-07 05:54:47.917519003 +0000 UTC m=+42.079542030" lastFinishedPulling="2025-07-07 05:54:53.67091598 +0000 UTC m=+47.832939007" observedRunningTime="2025-07-07 05:54:54.378022917 +0000 UTC m=+48.540045944" watchObservedRunningTime="2025-07-07 05:54:54.378781714 +0000 UTC m=+48.540804781" Jul 7 05:54:54.401841 systemd-networkd[1244]: calide74e78d246: Gained IPv6LL Jul 7 05:54:54.592340 systemd-networkd[1244]: cali21a54a7a706: Gained IPv6LL Jul 7 05:54:54.592886 systemd-networkd[1244]: calid86b477a19c: Gained IPv6LL Jul 7 05:54:54.719963 systemd-networkd[1244]: cali4db48ea0fbc: Gained IPv6LL Jul 7 05:54:56.517273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332559278.mount: Deactivated successfully. 
Jul 7 05:54:56.942876 containerd[1607]: time="2025-07-07T05:54:56.942821618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:56.944680 containerd[1607]: time="2025-07-07T05:54:56.943606857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 05:54:56.946109 containerd[1607]: time="2025-07-07T05:54:56.945973893Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:56.949384 containerd[1607]: time="2025-07-07T05:54:56.949293889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:56.950637 containerd[1607]: time="2025-07-07T05:54:56.950455247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.279045111s" Jul 7 05:54:56.950637 containerd[1607]: time="2025-07-07T05:54:56.950549327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 05:54:56.953538 containerd[1607]: time="2025-07-07T05:54:56.953167523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 05:54:56.955562 containerd[1607]: time="2025-07-07T05:54:56.955390040Z" level=info msg="CreateContainer within sandbox \"98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 05:54:56.986989 containerd[1607]: time="2025-07-07T05:54:56.986941476Z" level=info msg="CreateContainer within sandbox \"98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a949d074eb422be6bf65a1ca32df4efead6380bb4d9fc26aa2ee2f06034fdf25\"" Jul 7 05:54:56.987724 containerd[1607]: time="2025-07-07T05:54:56.987651395Z" level=info msg="StartContainer for \"a949d074eb422be6bf65a1ca32df4efead6380bb4d9fc26aa2ee2f06034fdf25\"" Jul 7 05:54:57.020868 systemd[1]: run-containerd-runc-k8s.io-a949d074eb422be6bf65a1ca32df4efead6380bb4d9fc26aa2ee2f06034fdf25-runc.UI4iw9.mount: Deactivated successfully. 
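Zoomed out, the goldmane episode around this point is the standard container lifecycle as containerd logs it: the image pull completes and reports its reference, CreateContainer within the already-running sandbox returns a container ID, and StartContainer is invoked on that ID (its "returns successfully" lands just below). A compressed sketch of the ordering; the interface is illustrative, not the real gRPC CRI surface:

```go
package main

import "fmt"

// criRuntime mirrors the verbs seen in the log; the real API is a gRPC
// service with request/response messages, not a Go interface like this.
type criRuntime interface {
	PullImage(ref string) (imageID string, err error)
	CreateContainer(sandboxID, imageID string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startInSandbox performs the pull -> create -> start ordering that the
// goldmane lines record: nothing is created until the pull has returned,
// and nothing is started until create has handed back an ID.
func startInSandbox(rt criRuntime, sandboxID, imageRef string) (string, error) {
	img, err := rt.PullImage(imageRef)
	if err != nil {
		return "", fmt.Errorf("PullImage: %w", err)
	}
	id, err := rt.CreateContainer(sandboxID, img)
	if err != nil {
		return "", fmt.Errorf("CreateContainer: %w", err)
	}
	return id, rt.StartContainer(id)
}

// fake replays the IDs from the log so main prints a familiar line.
type fake struct{}

func (fake) PullImage(string) (string, error) {
	return "sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f", nil
}
func (fake) CreateContainer(string, string) (string, error) {
	return "a949d074eb422be6bf65a1ca32df4efead6380bb4d9fc26aa2ee2f06034fdf25", nil
}
func (fake) StartContainer(id string) error { fmt.Println("StartContainer for", id); return nil }

func main() {
	_, _ = startInSandbox(fake{},
		"98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704", // goldmane sandbox
		"ghcr.io/flatcar/calico/goldmane:v3.30.2")
}
```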
Jul 7 05:54:57.075458 containerd[1607]: time="2025-07-07T05:54:57.074858567Z" level=info msg="StartContainer for \"a949d074eb422be6bf65a1ca32df4efead6380bb4d9fc26aa2ee2f06034fdf25\" returns successfully" Jul 7 05:54:57.382148 kubelet[2790]: I0707 05:54:57.380942 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-6wzbz" podStartSLOduration=22.107899577 podStartE2EDuration="28.380923898s" podCreationTimestamp="2025-07-07 05:54:29 +0000 UTC" firstStartedPulling="2025-07-07 05:54:50.679255564 +0000 UTC m=+44.841278551" lastFinishedPulling="2025-07-07 05:54:56.952279805 +0000 UTC m=+51.114302872" observedRunningTime="2025-07-07 05:54:57.380433258 +0000 UTC m=+51.542456285" watchObservedRunningTime="2025-07-07 05:54:57.380923898 +0000 UTC m=+51.542946965" Jul 7 05:55:00.193629 containerd[1607]: time="2025-07-07T05:55:00.193532367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:00.195372 containerd[1607]: time="2025-07-07T05:55:00.195312777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 05:55:00.196752 containerd[1607]: time="2025-07-07T05:55:00.196698704Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:00.199554 containerd[1607]: time="2025-07-07T05:55:00.199356279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:00.200677 containerd[1607]: time="2025-07-07T05:55:00.200631966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.247413683s" Jul 7 05:55:00.200837 containerd[1607]: time="2025-07-07T05:55:00.200813967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 05:55:00.202105 containerd[1607]: time="2025-07-07T05:55:00.202051654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 05:55:00.203957 containerd[1607]: time="2025-07-07T05:55:00.203923225Z" level=info msg="CreateContainer within sandbox \"b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 05:55:00.223053 containerd[1607]: time="2025-07-07T05:55:00.222824530Z" level=info msg="CreateContainer within sandbox \"b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef82826160d43c9facf614d1c52ba742845bb4b3ed82f17eb9b485fe2a32f2c3\"" Jul 7 05:55:00.227140 containerd[1607]: time="2025-07-07T05:55:00.225680466Z" level=info msg="StartContainer for \"ef82826160d43c9facf614d1c52ba742845bb4b3ed82f17eb9b485fe2a32f2c3\"" Jul 7 05:55:00.330921 containerd[1607]: time="2025-07-07T05:55:00.330867855Z" level=info msg="StartContainer for 
\"ef82826160d43c9facf614d1c52ba742845bb4b3ed82f17eb9b485fe2a32f2c3\" returns successfully" Jul 7 05:55:00.570246 containerd[1607]: time="2025-07-07T05:55:00.570091752Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:00.571735 containerd[1607]: time="2025-07-07T05:55:00.571008517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 05:55:00.574602 containerd[1607]: time="2025-07-07T05:55:00.574566177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 372.319482ms" Jul 7 05:55:00.574744 containerd[1607]: time="2025-07-07T05:55:00.574727418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 05:55:00.578138 containerd[1607]: time="2025-07-07T05:55:00.577278872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 05:55:00.604812 containerd[1607]: time="2025-07-07T05:55:00.604768306Z" level=info msg="CreateContainer within sandbox \"530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 05:55:00.623804 containerd[1607]: time="2025-07-07T05:55:00.623755612Z" level=info msg="CreateContainer within sandbox \"530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a11c7dac42a9590366f201f2cf8e0f0f8b630092550c8f0b2dc6cab0bb54c17b\"" Jul 7 05:55:00.627060 containerd[1607]: time="2025-07-07T05:55:00.625524742Z" level=info msg="StartContainer for \"a11c7dac42a9590366f201f2cf8e0f0f8b630092550c8f0b2dc6cab0bb54c17b\"" Jul 7 05:55:00.759835 containerd[1607]: time="2025-07-07T05:55:00.759778853Z" level=info msg="StartContainer for \"a11c7dac42a9590366f201f2cf8e0f0f8b630092550c8f0b2dc6cab0bb54c17b\" returns successfully" Jul 7 05:55:01.385538 kubelet[2790]: I0707 05:55:01.384871 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:55:01.404115 kubelet[2790]: I0707 05:55:01.402763 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-drdvq" podStartSLOduration=31.156506919 podStartE2EDuration="38.402744337s" podCreationTimestamp="2025-07-07 05:54:23 +0000 UTC" firstStartedPulling="2025-07-07 05:54:52.955671556 +0000 UTC m=+47.117694583" lastFinishedPulling="2025-07-07 05:55:00.201908974 +0000 UTC m=+54.363932001" observedRunningTime="2025-07-07 05:55:00.398879155 +0000 UTC m=+54.560902182" watchObservedRunningTime="2025-07-07 05:55:01.402744337 +0000 UTC m=+55.564767324" Jul 7 05:55:02.326905 containerd[1607]: time="2025-07-07T05:55:02.325640536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:02.328744 containerd[1607]: time="2025-07-07T05:55:02.328687522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 05:55:02.331120 containerd[1607]: 
time="2025-07-07T05:55:02.331058383Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:02.332947 containerd[1607]: time="2025-07-07T05:55:02.332909919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:02.333894 containerd[1607]: time="2025-07-07T05:55:02.333569645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.756260213s" Jul 7 05:55:02.334297 containerd[1607]: time="2025-07-07T05:55:02.334274851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 05:55:02.335940 containerd[1607]: time="2025-07-07T05:55:02.335916346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 05:55:02.341791 containerd[1607]: time="2025-07-07T05:55:02.341509355Z" level=info msg="CreateContainer within sandbox \"d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 05:55:02.363568 containerd[1607]: time="2025-07-07T05:55:02.363529068Z" level=info msg="CreateContainer within sandbox \"d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"64639774abd94becb7500789338571de1fd2dcd8d5885b10214b45aaebc0ef1a\"" Jul 7 05:55:02.380401 containerd[1607]: time="2025-07-07T05:55:02.380341495Z" level=info msg="StartContainer for \"64639774abd94becb7500789338571de1fd2dcd8d5885b10214b45aaebc0ef1a\"" Jul 7 05:55:02.400655 kubelet[2790]: I0707 05:55:02.400273 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:55:02.489391 systemd[1]: run-containerd-runc-k8s.io-64639774abd94becb7500789338571de1fd2dcd8d5885b10214b45aaebc0ef1a-runc.95nUru.mount: Deactivated successfully. 
Jul 7 05:55:02.548780 containerd[1607]: time="2025-07-07T05:55:02.548677732Z" level=info msg="StartContainer for \"64639774abd94becb7500789338571de1fd2dcd8d5885b10214b45aaebc0ef1a\" returns successfully" Jul 7 05:55:05.837370 containerd[1607]: time="2025-07-07T05:55:05.837275642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:05.838860 containerd[1607]: time="2025-07-07T05:55:05.838786222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 05:55:05.839799 containerd[1607]: time="2025-07-07T05:55:05.839740354Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:05.842554 containerd[1607]: time="2025-07-07T05:55:05.842499471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:05.844474 containerd[1607]: time="2025-07-07T05:55:05.844326575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.507915865s" Jul 7 05:55:05.844474 containerd[1607]: time="2025-07-07T05:55:05.844366895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 05:55:05.846408 containerd[1607]: time="2025-07-07T05:55:05.845753514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 05:55:05.862083 containerd[1607]: time="2025-07-07T05:55:05.862024488Z" level=info msg="CreateContainer within sandbox \"ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 05:55:05.879650 containerd[1607]: time="2025-07-07T05:55:05.879590920Z" level=info msg="CreateContainer within sandbox \"ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a7c0f14d21e0cabed9bb1c9ef5c53fd9351ff1703a2705b1b0ef3f422ad5bdd8\"" Jul 7 05:55:05.881602 containerd[1607]: time="2025-07-07T05:55:05.881552945Z" level=info msg="StartContainer for \"a7c0f14d21e0cabed9bb1c9ef5c53fd9351ff1703a2705b1b0ef3f422ad5bdd8\"" Jul 7 05:55:05.961922 containerd[1607]: time="2025-07-07T05:55:05.961822603Z" level=info msg="StartContainer for \"a7c0f14d21e0cabed9bb1c9ef5c53fd9351ff1703a2705b1b0ef3f422ad5bdd8\" returns successfully" Jul 7 05:55:06.014935 containerd[1607]: time="2025-07-07T05:55:06.014646758Z" level=info msg="StopPodSandbox for \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\"" Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.080 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a19b0658-01a5-4cf7-a7b1-1c9d8255c365", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044", Pod:"coredns-7c65d6cfc9-6l9kt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4db48ea0fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.080 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.080 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" iface="eth0" netns="" Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.081 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.081 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.143 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.144 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.144 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.154 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.154 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.156 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:06.160834 containerd[1607]: 2025-07-07 05:55:06.158 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:55:06.161848 containerd[1607]: time="2025-07-07T05:55:06.160857486Z" level=info msg="TearDown network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\" successfully" Jul 7 05:55:06.161848 containerd[1607]: time="2025-07-07T05:55:06.160908047Z" level=info msg="StopPodSandbox for \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\" returns successfully" Jul 7 05:55:06.161926 containerd[1607]: time="2025-07-07T05:55:06.161849661Z" level=info msg="RemovePodSandbox for \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\"" Jul 7 05:55:06.161926 containerd[1607]: time="2025-07-07T05:55:06.161887821Z" level=info msg="Forcibly stopping sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\"" Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.206 [WARNING][5474] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a19b0658-01a5-4cf7-a7b1-1c9d8255c365", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"66c68d98ca0f4bfb5c119ea937327844178ca9f71db242eccc244ddd17cb2044", Pod:"coredns-7c65d6cfc9-6l9kt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4db48ea0fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.206 [INFO][5474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.206 [INFO][5474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" iface="eth0" netns="" Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.206 [INFO][5474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.206 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.230 [INFO][5481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.230 [INFO][5481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.230 [INFO][5481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.242 [WARNING][5481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.242 [INFO][5481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" HandleID="k8s-pod-network.dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--6l9kt-eth0" Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.245 [INFO][5481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:06.249306 containerd[1607]: 2025-07-07 05:55:06.246 [INFO][5474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df" Jul 7 05:55:06.250374 containerd[1607]: time="2025-07-07T05:55:06.249911383Z" level=info msg="TearDown network for sandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\" successfully" Jul 7 05:55:06.256255 containerd[1607]: time="2025-07-07T05:55:06.256127913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:06.256255 containerd[1607]: time="2025-07-07T05:55:06.256238795Z" level=info msg="RemovePodSandbox \"dac53fde0970fd1ed655e506800edac1d070f45454a87fa92c1e68ac152701df\" returns successfully" Jul 7 05:55:06.257805 containerd[1607]: time="2025-07-07T05:55:06.257238289Z" level=info msg="StopPodSandbox for \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\"" Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.312 [WARNING][5495] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0", GenerateName:"calico-kube-controllers-684db96bcc-", Namespace:"calico-system", SelfLink:"", UID:"6bb49342-3954-416b-b804-0778891e9fae", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684db96bcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8", Pod:"calico-kube-controllers-684db96bcc-ct6sd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid86b477a19c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.313 [INFO][5495] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.313 [INFO][5495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" iface="eth0" netns="" Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.313 [INFO][5495] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.313 [INFO][5495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.360 [INFO][5503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.362 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.362 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.378 [WARNING][5503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.378 [INFO][5503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.380 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:06.383831 containerd[1607]: 2025-07-07 05:55:06.382 [INFO][5495] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:55:06.384391 containerd[1607]: time="2025-07-07T05:55:06.384350700Z" level=info msg="TearDown network for sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\" successfully" Jul 7 05:55:06.384690 containerd[1607]: time="2025-07-07T05:55:06.384667984Z" level=info msg="StopPodSandbox for \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\" returns successfully" Jul 7 05:55:06.385623 containerd[1607]: time="2025-07-07T05:55:06.385583637Z" level=info msg="RemovePodSandbox for \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\"" Jul 7 05:55:06.385921 containerd[1607]: time="2025-07-07T05:55:06.385885162Z" level=info msg="Forcibly stopping sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\"" Jul 7 05:55:06.453137 kubelet[2790]: I0707 05:55:06.452936 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-684db96bcc-ct6sd" podStartSLOduration=24.757542392 podStartE2EDuration="37.452917538s" podCreationTimestamp="2025-07-07 05:54:29 +0000 UTC" firstStartedPulling="2025-07-07 05:54:53.150225406 +0000 UTC m=+47.312248433" lastFinishedPulling="2025-07-07 05:55:05.845600552 +0000 UTC m=+60.007623579" observedRunningTime="2025-07-07 05:55:06.452497652 +0000 UTC m=+60.614520759" watchObservedRunningTime="2025-07-07 05:55:06.452917538 +0000 UTC m=+60.614940565" Jul 7 05:55:06.456318 kubelet[2790]: I0707 05:55:06.455101 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b7f8cc9dc-mfs7f" podStartSLOduration=35.874449493 podStartE2EDuration="43.455084249s" podCreationTimestamp="2025-07-07 05:54:23 +0000 UTC" firstStartedPulling="2025-07-07 05:54:52.995038307 +0000 UTC m=+47.157061294" lastFinishedPulling="2025-07-07 05:55:00.575673023 +0000 UTC m=+54.737696050" observedRunningTime="2025-07-07 05:55:01.403478942 +0000 UTC m=+55.565501969" watchObservedRunningTime="2025-07-07 05:55:06.455084249 +0000 UTC m=+60.617107276" Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.458 [WARNING][5520] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0", GenerateName:"calico-kube-controllers-684db96bcc-", Namespace:"calico-system", SelfLink:"", UID:"6bb49342-3954-416b-b804-0778891e9fae", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684db96bcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"ab0e7c49058eda2586c9d94e63fd1f9b1ab4c821b0b5e13e11259c3badc984e8", Pod:"calico-kube-controllers-684db96bcc-ct6sd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid86b477a19c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.458 [INFO][5520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.459 [INFO][5520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" iface="eth0" netns="" Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.459 [INFO][5520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.459 [INFO][5520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.487 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.487 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.487 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.504 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.504 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" HandleID="k8s-pod-network.82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--kube--controllers--684db96bcc--ct6sd-eth0" Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.506 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:06.512337 containerd[1607]: 2025-07-07 05:55:06.510 [INFO][5520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514" Jul 7 05:55:06.512337 containerd[1607]: time="2025-07-07T05:55:06.512140560Z" level=info msg="TearDown network for sandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\" successfully" Jul 7 05:55:06.526410 containerd[1607]: time="2025-07-07T05:55:06.526191364Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:06.526410 containerd[1607]: time="2025-07-07T05:55:06.526348206Z" level=info msg="RemovePodSandbox \"82c35a5ef5486a77b5e56400d201a90e9aee74eaf1498a1d1f062f19cbad9514\" returns successfully" Jul 7 05:55:06.527664 containerd[1607]: time="2025-07-07T05:55:06.527596545Z" level=info msg="StopPodSandbox for \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\"" Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.573 [WARNING][5561] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0", GenerateName:"calico-apiserver-5b7f8cc9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a362285d-bb22-4ce9-b882-0bdc9d32c0cc", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7f8cc9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1", Pod:"calico-apiserver-5b7f8cc9dc-drdvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b722d70bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.574 [INFO][5561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.574 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" iface="eth0" netns="" Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.574 [INFO][5561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.574 [INFO][5561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.599 [INFO][5569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.599 [INFO][5569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.599 [INFO][5569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.614 [WARNING][5569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.615 [INFO][5569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.618 [INFO][5569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:06.621729 containerd[1607]: 2025-07-07 05:55:06.619 [INFO][5561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:55:06.622906 containerd[1607]: time="2025-07-07T05:55:06.621833316Z" level=info msg="TearDown network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\" successfully" Jul 7 05:55:06.622906 containerd[1607]: time="2025-07-07T05:55:06.621862837Z" level=info msg="StopPodSandbox for \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\" returns successfully" Jul 7 05:55:06.623058 containerd[1607]: time="2025-07-07T05:55:06.622997653Z" level=info msg="RemovePodSandbox for \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\"" Jul 7 05:55:06.623153 containerd[1607]: time="2025-07-07T05:55:06.623064374Z" level=info msg="Forcibly stopping sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\"" Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.674 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0", GenerateName:"calico-apiserver-5b7f8cc9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a362285d-bb22-4ce9-b882-0bdc9d32c0cc", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7f8cc9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"b340b3fe3ac1eac90f5fe013b1659db4390f73c49ab5c744f74504d4c87108c1", Pod:"calico-apiserver-5b7f8cc9dc-drdvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b722d70bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.674 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.674 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" iface="eth0" netns="" Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.674 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.674 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.696 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.697 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.697 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.710 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.710 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" HandleID="k8s-pod-network.2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--drdvq-eth0" Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.713 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:06.717682 containerd[1607]: 2025-07-07 05:55:06.715 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2" Jul 7 05:55:06.717682 containerd[1607]: time="2025-07-07T05:55:06.717590110Z" level=info msg="TearDown network for sandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\" successfully" Jul 7 05:55:06.722052 containerd[1607]: time="2025-07-07T05:55:06.722008455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:06.722229 containerd[1607]: time="2025-07-07T05:55:06.722200937Z" level=info msg="RemovePodSandbox \"2cd7df1d3dacf358cc46870201e40226a9d75afa9e0d0afe94a9976c658d88a2\" returns successfully" Jul 7 05:55:06.722884 containerd[1607]: time="2025-07-07T05:55:06.722837467Z" level=info msg="StopPodSandbox for \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\"" Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.776 [WARNING][5607] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0", GenerateName:"calico-apiserver-5b7f8cc9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd6c95c2-86b8-43b3-90f9-cea8639bb989", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7f8cc9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03", Pod:"calico-apiserver-5b7f8cc9dc-mfs7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide74e78d246", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.776 [INFO][5607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.776 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" iface="eth0" netns="" Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.776 [INFO][5607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.776 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.804 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.804 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.804 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.815 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.815 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.818 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:06.823445 containerd[1607]: 2025-07-07 05:55:06.820 [INFO][5607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:55:06.824715 containerd[1607]: time="2025-07-07T05:55:06.823480372Z" level=info msg="TearDown network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\" successfully" Jul 7 05:55:06.824715 containerd[1607]: time="2025-07-07T05:55:06.823507692Z" level=info msg="StopPodSandbox for \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\" returns successfully" Jul 7 05:55:06.824715 containerd[1607]: time="2025-07-07T05:55:06.824048260Z" level=info msg="RemovePodSandbox for \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\"" Jul 7 05:55:06.824715 containerd[1607]: time="2025-07-07T05:55:06.824162901Z" level=info msg="Forcibly stopping sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\"" Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.872 [WARNING][5628] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0", GenerateName:"calico-apiserver-5b7f8cc9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd6c95c2-86b8-43b3-90f9-cea8639bb989", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7f8cc9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"530df6d9d4fa8291379d05b06346b764ed98eebc9a8853f1015e829ca16b8c03", Pod:"calico-apiserver-5b7f8cc9dc-mfs7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide74e78d246", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.872 [INFO][5628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.873 [INFO][5628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" iface="eth0" netns="" Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.873 [INFO][5628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.873 [INFO][5628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.895 [INFO][5635] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.895 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.895 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.910 [WARNING][5635] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.910 [INFO][5635] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" HandleID="k8s-pod-network.f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-calico--apiserver--5b7f8cc9dc--mfs7f-eth0" Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.912 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:06.916467 containerd[1607]: 2025-07-07 05:55:06.914 [INFO][5628] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43" Jul 7 05:55:06.917183 containerd[1607]: time="2025-07-07T05:55:06.916520606Z" level=info msg="TearDown network for sandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\" successfully" Jul 7 05:55:06.921721 containerd[1607]: time="2025-07-07T05:55:06.921668801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:06.921907 containerd[1607]: time="2025-07-07T05:55:06.921752682Z" level=info msg="RemovePodSandbox \"f79c852cf8d5a6dc42e9ae7aa2efc034789040ef8c2994a2b978d11f9e301e43\" returns successfully" Jul 7 05:55:06.922364 containerd[1607]: time="2025-07-07T05:55:06.922287930Z" level=info msg="StopPodSandbox for \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\"" Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:06.966 [WARNING][5650] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3608e3b2-d1d8-433a-a22d-3188e5368e8c", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f", Pod:"csi-node-driver-s6jw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21a54a7a706", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:06.967 [INFO][5650] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:06.967 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" iface="eth0" netns="" Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:06.967 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:06.967 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:06.992 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:06.992 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:06.992 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:07.006 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:07.006 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:07.008 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:07.012712 containerd[1607]: 2025-07-07 05:55:07.010 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:55:07.012712 containerd[1607]: time="2025-07-07T05:55:07.012517778Z" level=info msg="TearDown network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\" successfully" Jul 7 05:55:07.012712 containerd[1607]: time="2025-07-07T05:55:07.012546739Z" level=info msg="StopPodSandbox for \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\" returns successfully" Jul 7 05:55:07.013816 containerd[1607]: time="2025-07-07T05:55:07.013100988Z" level=info msg="RemovePodSandbox for \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\"" Jul 7 05:55:07.013816 containerd[1607]: time="2025-07-07T05:55:07.013138588Z" level=info msg="Forcibly stopping sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\"" Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.069 [WARNING][5671] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3608e3b2-d1d8-433a-a22d-3188e5368e8c", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f", Pod:"csi-node-driver-s6jw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21a54a7a706", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.069 [INFO][5671] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.070 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" iface="eth0" netns="" Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.070 [INFO][5671] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.070 [INFO][5671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.093 [INFO][5678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.093 [INFO][5678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.093 [INFO][5678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.107 [WARNING][5678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.107 [INFO][5678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" HandleID="k8s-pod-network.b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-csi--node--driver--s6jw4-eth0" Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.109 [INFO][5678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:07.113251 containerd[1607]: 2025-07-07 05:55:07.111 [INFO][5671] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1" Jul 7 05:55:07.113896 containerd[1607]: time="2025-07-07T05:55:07.113279180Z" level=info msg="TearDown network for sandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\" successfully" Jul 7 05:55:07.118219 containerd[1607]: time="2025-07-07T05:55:07.118158017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:07.118347 containerd[1607]: time="2025-07-07T05:55:07.118257659Z" level=info msg="RemovePodSandbox \"b9063cf62d330415dbf6a8a15016fceb914fd6d71e809ab0ebce9ba03029c1c1\" returns successfully" Jul 7 05:55:07.119261 containerd[1607]: time="2025-07-07T05:55:07.118916949Z" level=info msg="StopPodSandbox for \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\"" Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.168 [WARNING][5692] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7a7abeb6-f9e1-4877-bbe3-f7709971f8ca", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3", Pod:"coredns-7c65d6cfc9-2f6gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eb5483f1c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.169 [INFO][5692] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.169 [INFO][5692] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" iface="eth0" netns="" Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.169 [INFO][5692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.169 [INFO][5692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.189 [INFO][5700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0" Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.189 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.189 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.208 [WARNING][5700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0" Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.209 [INFO][5700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0" Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.211 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:07.215034 containerd[1607]: 2025-07-07 05:55:07.213 [INFO][5692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:55:07.216314 containerd[1607]: time="2025-07-07T05:55:07.215820809Z" level=info msg="TearDown network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\" successfully" Jul 7 05:55:07.216314 containerd[1607]: time="2025-07-07T05:55:07.215877610Z" level=info msg="StopPodSandbox for \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\" returns successfully" Jul 7 05:55:07.217547 containerd[1607]: time="2025-07-07T05:55:07.217488516Z" level=info msg="RemovePodSandbox for \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\"" Jul 7 05:55:07.217690 containerd[1607]: time="2025-07-07T05:55:07.217563957Z" level=info msg="Forcibly stopping sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\"" Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.262 [WARNING][5714] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7a7abeb6-f9e1-4877-bbe3-f7709971f8ca", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"cf5f054bac3c87f428062bd494313a25b3bdbdbcb08d1dbe8217ff5ac314acf3", Pod:"coredns-7c65d6cfc9-2f6gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eb5483f1c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.263 [INFO][5714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.263 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" iface="eth0" netns="" Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.263 [INFO][5714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.263 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.286 [INFO][5721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0" Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.287 [INFO][5721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.287 [INFO][5721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.298 [WARNING][5721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.298 [INFO][5721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" HandleID="k8s-pod-network.40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-coredns--7c65d6cfc9--2f6gw-eth0"
Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.300 [INFO][5721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:55:07.303988 containerd[1607]: 2025-07-07 05:55:07.302 [INFO][5714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590"
Jul 7 05:55:07.306094 containerd[1607]: time="2025-07-07T05:55:07.303901529Z" level=info msg="TearDown network for sandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\" successfully"
Jul 7 05:55:07.310536 containerd[1607]: time="2025-07-07T05:55:07.310477513Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 05:55:07.310944 containerd[1607]: time="2025-07-07T05:55:07.310561675Z" level=info msg="RemovePodSandbox \"40c3da616280f08b3c015dbc2ab61858439cbab20b614d0112e0b72710c65590\" returns successfully"
Jul 7 05:55:07.311189 containerd[1607]: time="2025-07-07T05:55:07.311097683Z" level=info msg="StopPodSandbox for \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\""
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.366 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7248fff7-123a-4181-abad-edb494c7cc63", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704", Pod:"goldmane-58fd7646b9-6wzbz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0a62dad547", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.367 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.367 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" iface="eth0" netns=""
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.367 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.367 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.393 [INFO][5742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.393 [INFO][5742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.393 [INFO][5742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.407 [WARNING][5742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.407 [INFO][5742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.411 [INFO][5742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:55:07.418406 containerd[1607]: 2025-07-07 05:55:07.415 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:55:07.418406 containerd[1607]: time="2025-07-07T05:55:07.418286067Z" level=info msg="TearDown network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\" successfully"
Jul 7 05:55:07.418406 containerd[1607]: time="2025-07-07T05:55:07.418338267Z" level=info msg="StopPodSandbox for \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\" returns successfully"
Jul 7 05:55:07.419638 containerd[1607]: time="2025-07-07T05:55:07.418845875Z" level=info msg="RemovePodSandbox for \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\""
Jul 7 05:55:07.419638 containerd[1607]: time="2025-07-07T05:55:07.418877276Z" level=info msg="Forcibly stopping sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\""
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.468 [WARNING][5756] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7248fff7-123a-4181-abad-edb494c7cc63", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-cfa01fffc0", ContainerID:"98b6f482989d9a212e9e04c354e8d6b4b7ae7679d893815ad1b097d949b9d704", Pod:"goldmane-58fd7646b9-6wzbz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0a62dad547", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.468 [INFO][5756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.468 [INFO][5756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" iface="eth0" netns=""
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.468 [INFO][5756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.469 [INFO][5756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.494 [INFO][5763] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.494 [INFO][5763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.494 [INFO][5763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.505 [WARNING][5763] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.505 [INFO][5763] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" HandleID="k8s-pod-network.6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-goldmane--58fd7646b9--6wzbz-eth0"
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.510 [INFO][5763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:55:07.517128 containerd[1607]: 2025-07-07 05:55:07.512 [INFO][5756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097"
Jul 7 05:55:07.517128 containerd[1607]: time="2025-07-07T05:55:07.516500227Z" level=info msg="TearDown network for sandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\" successfully"
Jul 7 05:55:07.522273 containerd[1607]: time="2025-07-07T05:55:07.522211998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 05:55:07.522582 containerd[1607]: time="2025-07-07T05:55:07.522544083Z" level=info msg="RemovePodSandbox \"6f2aef845c948b3a3da93463475913830fd5abe09ddd3b2abb0208b753c0b097\" returns successfully"
Jul 7 05:55:07.524016 containerd[1607]: time="2025-07-07T05:55:07.523841264Z" level=info msg="StopPodSandbox for \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\""
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.580 [WARNING][5781] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.581 [INFO][5781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.581 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" iface="eth0" netns=""
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.581 [INFO][5781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.581 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.625 [INFO][5788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.625 [INFO][5788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.625 [INFO][5788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.637 [WARNING][5788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.638 [INFO][5788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.640 [INFO][5788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:55:07.647405 containerd[1607]: 2025-07-07 05:55:07.642 [INFO][5781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:55:07.647405 containerd[1607]: time="2025-07-07T05:55:07.647391507Z" level=info msg="TearDown network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\" successfully"
Jul 7 05:55:07.648264 containerd[1607]: time="2025-07-07T05:55:07.647417548Z" level=info msg="StopPodSandbox for \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\" returns successfully"
Jul 7 05:55:07.648568 containerd[1607]: time="2025-07-07T05:55:07.648463204Z" level=info msg="RemovePodSandbox for \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\""
Jul 7 05:55:07.648568 containerd[1607]: time="2025-07-07T05:55:07.648503525Z" level=info msg="Forcibly stopping sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\""
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.708 [WARNING][5802] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" WorkloadEndpoint="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.708 [INFO][5802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.708 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" iface="eth0" netns=""
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.708 [INFO][5802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.708 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.738 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.738 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.738 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.748 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.749 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" HandleID="k8s-pod-network.929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a" Workload="ci--4081--3--4--0--cfa01fffc0-k8s-whisker--765dfbf5cc--6g5pp-eth0"
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.752 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 05:55:07.758063 containerd[1607]: 2025-07-07 05:55:07.755 [INFO][5802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a"
Jul 7 05:55:07.758063 containerd[1607]: time="2025-07-07T05:55:07.757843422Z" level=info msg="TearDown network for sandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\" successfully"
Jul 7 05:55:07.783860 containerd[1607]: time="2025-07-07T05:55:07.783801995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 05:55:07.784001 containerd[1607]: time="2025-07-07T05:55:07.783913317Z" level=info msg="RemovePodSandbox \"929cf9ce3a7931c08f54ef638f593ed3f0aa29b6cc836733c4d2c9322ba9ed8a\" returns successfully"
Jul 7 05:55:07.807999 containerd[1607]: time="2025-07-07T05:55:07.807921858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:07.809563 containerd[1607]: time="2025-07-07T05:55:07.809260520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 7 05:55:07.810685 containerd[1607]: time="2025-07-07T05:55:07.810560380Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:07.814824 containerd[1607]: time="2025-07-07T05:55:07.814752367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:07.816148 containerd[1607]: time="2025-07-07T05:55:07.815928866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.970136431s"
Jul 7 05:55:07.816148 containerd[1607]: time="2025-07-07T05:55:07.815987666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 7 05:55:07.821421 containerd[1607]: time="2025-07-07T05:55:07.821352352Z" level=info msg="CreateContainer within sandbox \"d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 7 05:55:07.839888 containerd[1607]: time="2025-07-07T05:55:07.839822125Z" level=info msg="CreateContainer within sandbox \"d9f5dafa060b0c2265f6754a8c3c22f204c6497b7401178f354cbc8f2a9e917f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"097f15f0aed10268b89ee0190bc8318295fcbfea22c5a9ed491835a96c76942b\""
Jul 7 05:55:07.842605 containerd[1607]: time="2025-07-07T05:55:07.842548489Z" level=info msg="StartContainer for \"097f15f0aed10268b89ee0190bc8318295fcbfea22c5a9ed491835a96c76942b\""
Jul 7 05:55:07.923941 containerd[1607]: time="2025-07-07T05:55:07.923837060Z" level=info msg="StartContainer for \"097f15f0aed10268b89ee0190bc8318295fcbfea22c5a9ed491835a96c76942b\" returns successfully"
Jul 7 05:55:08.123476 kubelet[2790]: I0707 05:55:08.123408 2790 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 05:55:08.127808 kubelet[2790]: I0707 05:55:08.127533 2790 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 05:55:08.912364 kubelet[2790]: I0707 05:55:08.912293 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s6jw4" podStartSLOduration=25.178411041 podStartE2EDuration="39.912274306s" podCreationTimestamp="2025-07-07 05:54:29 +0000 UTC" firstStartedPulling="2025-07-07 05:54:53.084721003 +0000 UTC m=+47.246744030" lastFinishedPulling="2025-07-07 05:55:07.818584268 +0000 UTC m=+61.980607295" observedRunningTime="2025-07-07 05:55:08.496497201 +0000 UTC m=+62.658520228" watchObservedRunningTime="2025-07-07 05:55:08.912274306 +0000 UTC m=+63.074297333"
Jul 7 05:55:12.746216 kubelet[2790]: I0707 05:55:12.745549 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 05:55:17.533815 kubelet[2790]: I0707 05:55:17.533655 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 05:55:38.818034 systemd[1]: run-containerd-runc-k8s.io-a7c0f14d21e0cabed9bb1c9ef5c53fd9351ff1703a2705b1b0ef3f422ad5bdd8-runc.zh5DZH.mount: Deactivated successfully.
Jul 7 05:55:53.306471 systemd[1]: Started sshd@7-159.69.113.68:22-185.247.137.132:55585.service - OpenSSH per-connection server daemon (185.247.137.132:55585).
Jul 7 05:55:55.299713 sshd[6024]: Connection closed by 185.247.137.132 port 55585
Jul 7 05:55:55.303343 systemd[1]: sshd@7-159.69.113.68:22-185.247.137.132:55585.service: Deactivated successfully.
Jul 7 05:55:55.340229 systemd[1]: Started sshd@8-159.69.113.68:22-185.247.137.132:43093.service - OpenSSH per-connection server daemon (185.247.137.132:43093).
Jul 7 05:55:55.457684 sshd[6028]: Connection closed by 185.247.137.132 port 43093 [preauth]
Jul 7 05:55:55.460820 systemd[1]: sshd@8-159.69.113.68:22-185.247.137.132:43093.service: Deactivated successfully.
Jul 7 05:56:38.837311 systemd[1]: run-containerd-runc-k8s.io-a949d074eb422be6bf65a1ca32df4efead6380bb4d9fc26aa2ee2f06034fdf25-runc.V8BJ7H.mount: Deactivated successfully.
Jul 7 05:57:08.816605 systemd[1]: run-containerd-runc-k8s.io-a7c0f14d21e0cabed9bb1c9ef5c53fd9351ff1703a2705b1b0ef3f422ad5bdd8-runc.7kJ0vT.mount: Deactivated successfully.
Jul 7 05:57:38.841227 systemd[1]: run-containerd-runc-k8s.io-a949d074eb422be6bf65a1ca32df4efead6380bb4d9fc26aa2ee2f06034fdf25-runc.IXCX7V.mount: Deactivated successfully.
Jul 7 05:58:45.588663 systemd[1]: run-containerd-runc-k8s.io-31fa8b5774b37876cf465f024cb1cb20d29fbbab498557dab0d4135bdd4b271e-runc.M8pyom.mount: Deactivated successfully.
Jul 7 05:59:03.467476 systemd[1]: Started sshd@9-159.69.113.68:22-147.75.109.163:42018.service - OpenSSH per-connection server daemon (147.75.109.163:42018).
Jul 7 05:59:04.476603 sshd[6620]: Accepted publickey for core from 147.75.109.163 port 42018 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:04.477569 sshd[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:04.487669 systemd-logind[1570]: New session 8 of user core.
Jul 7 05:59:04.493442 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 05:59:05.319313 sshd[6620]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:05.327400 systemd[1]: sshd@9-159.69.113.68:22-147.75.109.163:42018.service: Deactivated successfully.
Jul 7 05:59:05.330133 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit.
Jul 7 05:59:05.333585 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 05:59:05.335502 systemd-logind[1570]: Removed session 8.
Jul 7 05:59:08.824017 systemd[1]: run-containerd-runc-k8s.io-a7c0f14d21e0cabed9bb1c9ef5c53fd9351ff1703a2705b1b0ef3f422ad5bdd8-runc.Nyfy3H.mount: Deactivated successfully.
Jul 7 05:59:08.858471 systemd[1]: run-containerd-runc-k8s.io-a949d074eb422be6bf65a1ca32df4efead6380bb4d9fc26aa2ee2f06034fdf25-runc.l7l7kM.mount: Deactivated successfully.
Jul 7 05:59:10.482368 systemd[1]: Started sshd@10-159.69.113.68:22-147.75.109.163:37334.service - OpenSSH per-connection server daemon (147.75.109.163:37334).
Jul 7 05:59:11.469704 sshd[6685]: Accepted publickey for core from 147.75.109.163 port 37334 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:11.471603 sshd[6685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:11.477747 systemd-logind[1570]: New session 9 of user core.
Jul 7 05:59:11.485525 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 05:59:12.240979 sshd[6685]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:12.245879 systemd[1]: sshd@10-159.69.113.68:22-147.75.109.163:37334.service: Deactivated successfully.
Jul 7 05:59:12.250652 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit.
Jul 7 05:59:12.251392 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 05:59:12.253656 systemd-logind[1570]: Removed session 9.
Jul 7 05:59:17.416389 systemd[1]: Started sshd@11-159.69.113.68:22-147.75.109.163:35842.service - OpenSSH per-connection server daemon (147.75.109.163:35842).
Jul 7 05:59:18.419231 sshd[6724]: Accepted publickey for core from 147.75.109.163 port 35842 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:18.421742 sshd[6724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:18.428278 systemd-logind[1570]: New session 10 of user core.
Jul 7 05:59:18.433179 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 05:59:19.204875 sshd[6724]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:19.209360 systemd[1]: sshd@11-159.69.113.68:22-147.75.109.163:35842.service: Deactivated successfully.
Jul 7 05:59:19.214755 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 05:59:19.217151 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit.
Jul 7 05:59:19.218176 systemd-logind[1570]: Removed session 10.
Jul 7 05:59:19.370551 systemd[1]: Started sshd@12-159.69.113.68:22-147.75.109.163:35852.service - OpenSSH per-connection server daemon (147.75.109.163:35852).
Jul 7 05:59:20.348364 sshd[6744]: Accepted publickey for core from 147.75.109.163 port 35852 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:20.350374 sshd[6744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:20.361964 systemd-logind[1570]: New session 11 of user core.
Jul 7 05:59:20.366416 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 05:59:21.137163 sshd[6744]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:21.140767 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit.
Jul 7 05:59:21.141455 systemd[1]: sshd@12-159.69.113.68:22-147.75.109.163:35852.service: Deactivated successfully.
Jul 7 05:59:21.147519 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 05:59:21.148637 systemd-logind[1570]: Removed session 11.
Jul 7 05:59:21.304430 systemd[1]: Started sshd@13-159.69.113.68:22-147.75.109.163:35864.service - OpenSSH per-connection server daemon (147.75.109.163:35864).
Jul 7 05:59:22.294716 sshd[6757]: Accepted publickey for core from 147.75.109.163 port 35864 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:22.297102 sshd[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:22.302653 systemd-logind[1570]: New session 12 of user core.
Jul 7 05:59:22.308535 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 05:59:23.071429 sshd[6757]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:23.078282 systemd[1]: sshd@13-159.69.113.68:22-147.75.109.163:35864.service: Deactivated successfully.
Jul 7 05:59:23.082085 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 05:59:23.084230 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit.
Jul 7 05:59:23.085362 systemd-logind[1570]: Removed session 12.
Jul 7 05:59:28.243285 systemd[1]: Started sshd@14-159.69.113.68:22-147.75.109.163:43328.service - OpenSSH per-connection server daemon (147.75.109.163:43328).
Jul 7 05:59:29.235589 sshd[6792]: Accepted publickey for core from 147.75.109.163 port 43328 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:29.238333 sshd[6792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:29.246728 systemd-logind[1570]: New session 13 of user core.
Jul 7 05:59:29.253777 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 05:59:30.014862 sshd[6792]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:30.020707 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit.
Jul 7 05:59:30.021584 systemd[1]: sshd@14-159.69.113.68:22-147.75.109.163:43328.service: Deactivated successfully.
Jul 7 05:59:30.027079 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 05:59:30.029039 systemd-logind[1570]: Removed session 13.
Jul 7 05:59:30.184534 systemd[1]: Started sshd@15-159.69.113.68:22-147.75.109.163:43336.service - OpenSSH per-connection server daemon (147.75.109.163:43336).
Jul 7 05:59:31.174525 sshd[6805]: Accepted publickey for core from 147.75.109.163 port 43336 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:31.176870 sshd[6805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:31.183143 systemd-logind[1570]: New session 14 of user core.
Jul 7 05:59:31.188770 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 05:59:32.094587 sshd[6805]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:32.100968 systemd[1]: sshd@15-159.69.113.68:22-147.75.109.163:43336.service: Deactivated successfully.
Jul 7 05:59:32.105292 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 05:59:32.105361 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit.
Jul 7 05:59:32.108588 systemd-logind[1570]: Removed session 14.
Jul 7 05:59:32.258849 systemd[1]: Started sshd@16-159.69.113.68:22-147.75.109.163:43340.service - OpenSSH per-connection server daemon (147.75.109.163:43340).
Jul 7 05:59:33.245719 sshd[6818]: Accepted publickey for core from 147.75.109.163 port 43340 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:33.247806 sshd[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:33.254911 systemd-logind[1570]: New session 15 of user core.
Jul 7 05:59:33.261528 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 05:59:36.213762 sshd[6818]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:36.223243 systemd[1]: sshd@16-159.69.113.68:22-147.75.109.163:43340.service: Deactivated successfully.
Jul 7 05:59:36.229427 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 05:59:36.230848 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit.
Jul 7 05:59:36.232634 systemd-logind[1570]: Removed session 15.
Jul 7 05:59:36.389503 systemd[1]: Started sshd@17-159.69.113.68:22-147.75.109.163:58336.service - OpenSSH per-connection server daemon (147.75.109.163:58336).
Jul 7 05:59:37.403084 sshd[6860]: Accepted publickey for core from 147.75.109.163 port 58336 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:37.405092 sshd[6860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:37.413136 systemd-logind[1570]: New session 16 of user core.
Jul 7 05:59:37.418124 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 05:59:38.375329 sshd[6860]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:38.381374 systemd[1]: sshd@17-159.69.113.68:22-147.75.109.163:58336.service: Deactivated successfully.
Jul 7 05:59:38.388776 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 05:59:38.391282 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit.
Jul 7 05:59:38.392670 systemd-logind[1570]: Removed session 16.
Jul 7 05:59:38.535397 systemd[1]: Started sshd@18-159.69.113.68:22-147.75.109.163:58348.service - OpenSSH per-connection server daemon (147.75.109.163:58348).
Jul 7 05:59:39.522172 sshd[6872]: Accepted publickey for core from 147.75.109.163 port 58348 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:39.524805 sshd[6872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:39.533884 systemd-logind[1570]: New session 17 of user core.
Jul 7 05:59:39.538364 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 05:59:40.367342 sshd[6872]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:40.376695 systemd[1]: sshd@18-159.69.113.68:22-147.75.109.163:58348.service: Deactivated successfully.
Jul 7 05:59:40.381702 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 05:59:40.383171 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit.
Jul 7 05:59:40.385087 systemd-logind[1570]: Removed session 17.
Jul 7 05:59:45.534508 systemd[1]: Started sshd@19-159.69.113.68:22-147.75.109.163:58352.service - OpenSSH per-connection server daemon (147.75.109.163:58352).
Jul 7 05:59:46.518168 sshd[6930]: Accepted publickey for core from 147.75.109.163 port 58352 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:46.520470 sshd[6930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:46.525737 systemd-logind[1570]: New session 18 of user core.
Jul 7 05:59:46.532453 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 05:59:47.288263 sshd[6930]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:47.292782 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit.
Jul 7 05:59:47.292976 systemd[1]: sshd@19-159.69.113.68:22-147.75.109.163:58352.service: Deactivated successfully.
Jul 7 05:59:47.298628 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 05:59:47.302805 systemd-logind[1570]: Removed session 18.
Jul 7 05:59:52.455405 systemd[1]: Started sshd@20-159.69.113.68:22-147.75.109.163:47060.service - OpenSSH per-connection server daemon (147.75.109.163:47060).
Jul 7 05:59:53.444960 sshd[6965]: Accepted publickey for core from 147.75.109.163 port 47060 ssh2: RSA SHA256:kLE+u5/r4/ydHwbzB201ybJdYCioVP+NA3MAI6UVV6g
Jul 7 05:59:53.448782 sshd[6965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:59:53.455269 systemd-logind[1570]: New session 19 of user core.
Jul 7 05:59:53.459828 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 05:59:54.206871 sshd[6965]: pam_unix(sshd:session): session closed for user core
Jul 7 05:59:54.213169 systemd[1]: sshd@20-159.69.113.68:22-147.75.109.163:47060.service: Deactivated successfully.
Jul 7 05:59:54.218306 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 05:59:54.219527 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit.
Jul 7 05:59:54.221465 systemd-logind[1570]: Removed session 19.
Jul 7 06:00:25.202155 containerd[1607]: time="2025-07-07T06:00:25.200637592Z" level=info msg="shim disconnected" id=a8e38d24267ad32068e5c3185d7f27f0594388af59b234f3f944752710d16114 namespace=k8s.io
Jul 7 06:00:25.202155 containerd[1607]: time="2025-07-07T06:00:25.200722436Z" level=warning msg="cleaning up after shim disconnected" id=a8e38d24267ad32068e5c3185d7f27f0594388af59b234f3f944752710d16114 namespace=k8s.io
Jul 7 06:00:25.202155 containerd[1607]: time="2025-07-07T06:00:25.200734397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:00:25.206109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8e38d24267ad32068e5c3185d7f27f0594388af59b234f3f944752710d16114-rootfs.mount: Deactivated successfully.
Jul 7 06:00:25.419304 kubelet[2790]: E0707 06:00:25.417427 2790 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54808->10.0.0.2:2379: read: connection timed out"
Jul 7 06:00:25.434453 kubelet[2790]: I0707 06:00:25.434193 2790 scope.go:117] "RemoveContainer" containerID="a8e38d24267ad32068e5c3185d7f27f0594388af59b234f3f944752710d16114"
Jul 7 06:00:25.450343 containerd[1607]: time="2025-07-07T06:00:25.449568658Z" level=info msg="shim disconnected" id=c5bdc75753922f22c8b44ae09326552d2f183581458ff6a1917a58f97d8e46f3 namespace=k8s.io
Jul 7 06:00:25.450343 containerd[1607]: time="2025-07-07T06:00:25.449729905Z" level=warning msg="cleaning up after shim disconnected" id=c5bdc75753922f22c8b44ae09326552d2f183581458ff6a1917a58f97d8e46f3 namespace=k8s.io
Jul 7 06:00:25.450343 containerd[1607]: time="2025-07-07T06:00:25.449739506Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:00:25.450710 containerd[1607]: time="2025-07-07T06:00:25.450521060Z" level=info msg="CreateContainer within sandbox \"d94ec81ccd4434cf51526336b878cfbf02e510035e4a605dcca7287839ba1dc8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 7 06:00:25.453492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5bdc75753922f22c8b44ae09326552d2f183581458ff6a1917a58f97d8e46f3-rootfs.mount: Deactivated successfully.
Jul 7 06:00:25.476748 containerd[1607]: time="2025-07-07T06:00:25.476682393Z" level=info msg="CreateContainer within sandbox \"d94ec81ccd4434cf51526336b878cfbf02e510035e4a605dcca7287839ba1dc8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ce105267c97f4d6982d43aef0456572fbb3a6cbf80ef833e93a9040a76e477cd\""
Jul 7 06:00:25.478240 containerd[1607]: time="2025-07-07T06:00:25.478182218Z" level=info msg="StartContainer for \"ce105267c97f4d6982d43aef0456572fbb3a6cbf80ef833e93a9040a76e477cd\""
Jul 7 06:00:25.544503 containerd[1607]: time="2025-07-07T06:00:25.544414088Z" level=info msg="StartContainer for \"ce105267c97f4d6982d43aef0456572fbb3a6cbf80ef833e93a9040a76e477cd\" returns successfully"
Jul 7 06:00:26.037633 containerd[1607]: time="2025-07-07T06:00:26.037552376Z" level=info msg="shim disconnected" id=b593dfcaa1d4d94d1816808d3582f85b4f475c2b2fd758157104513a24434a2a namespace=k8s.io
Jul 7 06:00:26.038421 containerd[1607]: time="2025-07-07T06:00:26.038038797Z" level=warning msg="cleaning up after shim disconnected" id=b593dfcaa1d4d94d1816808d3582f85b4f475c2b2fd758157104513a24434a2a namespace=k8s.io
Jul 7 06:00:26.038421 containerd[1607]: time="2025-07-07T06:00:26.038165283Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:00:26.205388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b593dfcaa1d4d94d1816808d3582f85b4f475c2b2fd758157104513a24434a2a-rootfs.mount: Deactivated successfully.
Jul 7 06:00:26.439324 kubelet[2790]: I0707 06:00:26.439112 2790 scope.go:117] "RemoveContainer" containerID="c5bdc75753922f22c8b44ae09326552d2f183581458ff6a1917a58f97d8e46f3"
Jul 7 06:00:26.444004 containerd[1607]: time="2025-07-07T06:00:26.443601665Z" level=info msg="CreateContainer within sandbox \"7c48f27bd136a1dbb8f0feef2adad87b6af8ffd1b0673b25bd71ee9bcb71f360\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 06:00:26.445260 kubelet[2790]: I0707 06:00:26.444800 2790 scope.go:117] "RemoveContainer" containerID="b593dfcaa1d4d94d1816808d3582f85b4f475c2b2fd758157104513a24434a2a"
Jul 7 06:00:26.447270 containerd[1607]: time="2025-07-07T06:00:26.447160660Z" level=info msg="CreateContainer within sandbox \"2fb2deafd53670f8b1d531dad1f9043993744a60dd93509970d6a5c91d826e5c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 7 06:00:26.469439 containerd[1607]: time="2025-07-07T06:00:26.467780274Z" level=info msg="CreateContainer within sandbox \"7c48f27bd136a1dbb8f0feef2adad87b6af8ffd1b0673b25bd71ee9bcb71f360\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"865cf493eb3c30e2a87344da34579ac7e84b1d661a56f7b0bbd439ce5ae8ce20\""
Jul 7 06:00:26.469756 containerd[1607]: time="2025-07-07T06:00:26.469722798Z" level=info msg="StartContainer for \"865cf493eb3c30e2a87344da34579ac7e84b1d661a56f7b0bbd439ce5ae8ce20\""
Jul 7 06:00:26.472366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924681778.mount: Deactivated successfully.
Jul 7 06:00:26.483494 containerd[1607]: time="2025-07-07T06:00:26.482638278Z" level=info msg="CreateContainer within sandbox \"2fb2deafd53670f8b1d531dad1f9043993744a60dd93509970d6a5c91d826e5c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"dcbb4ed26c6776e4d09afc50c13b20c21c8e44b742af3d7875714f682ce36ca2\""
Jul 7 06:00:26.487706 containerd[1607]: time="2025-07-07T06:00:26.487481808Z" level=info msg="StartContainer for \"dcbb4ed26c6776e4d09afc50c13b20c21c8e44b742af3d7875714f682ce36ca2\""
Jul 7 06:00:26.549302 containerd[1607]: time="2025-07-07T06:00:26.549257087Z" level=info msg="StartContainer for \"865cf493eb3c30e2a87344da34579ac7e84b1d661a56f7b0bbd439ce5ae8ce20\" returns successfully"
Jul 7 06:00:26.572746 containerd[1607]: time="2025-07-07T06:00:26.572629341Z" level=info msg="StartContainer for \"dcbb4ed26c6776e4d09afc50c13b20c21c8e44b742af3d7875714f682ce36ca2\" returns successfully"