Jul 12 00:05:57.898868 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:05:57.898893 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:05:57.898903 kernel: KASLR enabled
Jul 12 00:05:57.898909 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jul 12 00:05:57.898915 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jul 12 00:05:57.898920 kernel: random: crng init done
Jul 12 00:05:57.898927 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:05:57.898933 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jul 12 00:05:57.898940 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:05:57.898947 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.898953 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.898959 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.898965 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.898972 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.898979 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.898987 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.898993 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.898999 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:05:57.899006 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 12 00:05:57.899012 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jul 12 00:05:57.899018 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:05:57.899024 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jul 12 00:05:57.899031 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Jul 12 00:05:57.899037 kernel: Zone ranges:
Jul 12 00:05:57.899043 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 12 00:05:57.899051 kernel: DMA32 empty
Jul 12 00:05:57.899057 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jul 12 00:05:57.899064 kernel: Movable zone start for each node
Jul 12 00:05:57.899070 kernel: Early memory node ranges
Jul 12 00:05:57.899076 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jul 12 00:05:57.899083 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jul 12 00:05:57.899089 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jul 12 00:05:57.899095 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jul 12 00:05:57.899102 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jul 12 00:05:57.899108 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jul 12 00:05:57.899114 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jul 12 00:05:57.899121 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jul 12 00:05:57.899140 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jul 12 00:05:57.899149 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:05:57.899157 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:05:57.899168 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:05:57.899174 kernel: psci: Trusted OS migration not required
Jul 12 00:05:57.899181 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:05:57.899190 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:05:57.899197 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:05:57.899203 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:05:57.899250 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:05:57.899257 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:05:57.899264 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:05:57.899271 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:05:57.899277 kernel: CPU features: detected: Spectre-v4
Jul 12 00:05:57.899284 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:05:57.899325 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:05:57.899338 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:05:57.899345 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:05:57.899352 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:05:57.899358 kernel: alternatives: applying boot alternatives
Jul 12 00:05:57.899366 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:05:57.899374 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:05:57.899381 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:05:57.899388 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:05:57.899394 kernel: Fallback order for Node 0: 0
Jul 12 00:05:57.899401 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jul 12 00:05:57.899408 kernel: Policy zone: Normal
Jul 12 00:05:57.899416 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:05:57.899423 kernel: software IO TLB: area num 2.
Jul 12 00:05:57.899429 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jul 12 00:05:57.899437 kernel: Memory: 3882804K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213196K reserved, 0K cma-reserved)
Jul 12 00:05:57.899443 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:05:57.899450 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:05:57.899458 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:05:57.899465 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:05:57.899471 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:05:57.899478 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:05:57.899485 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:05:57.899494 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:05:57.899500 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:05:57.899507 kernel: GICv3: 256 SPIs implemented
Jul 12 00:05:57.899514 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:05:57.899520 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:05:57.899527 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:05:57.899534 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:05:57.899541 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:05:57.899548 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:05:57.899555 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:05:57.899561 kernel: GICv3: using LPI property table @0x00000001000e0000
Jul 12 00:05:57.899568 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jul 12 00:05:57.899577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:05:57.899584 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:05:57.899591 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:05:57.899597 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:05:57.899604 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:05:57.899611 kernel: Console: colour dummy device 80x25
Jul 12 00:05:57.899618 kernel: ACPI: Core revision 20230628
Jul 12 00:05:57.899626 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:05:57.899632 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:05:57.899640 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:05:57.899648 kernel: landlock: Up and running.
Jul 12 00:05:57.899655 kernel: SELinux: Initializing.
Jul 12 00:05:57.899662 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:05:57.899669 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:05:57.899677 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:05:57.899684 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:05:57.899691 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:05:57.899698 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:05:57.899705 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:05:57.899713 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:05:57.899720 kernel: Remapping and enabling EFI services.
Jul 12 00:05:57.899727 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:05:57.899734 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:05:57.899741 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:05:57.899749 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jul 12 00:05:57.899755 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:05:57.899762 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:05:57.899789 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:05:57.899797 kernel: SMP: Total of 2 processors activated.
Jul 12 00:05:57.899807 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:05:57.899814 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:05:57.899827 kernel: CPU features: detected: Common not Private translations
Jul 12 00:05:57.899836 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:05:57.899843 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:05:57.899851 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:05:57.899858 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:05:57.899865 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:05:57.899873 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:05:57.899881 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:05:57.899888 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:05:57.899896 kernel: alternatives: applying system-wide alternatives
Jul 12 00:05:57.899903 kernel: devtmpfs: initialized
Jul 12 00:05:57.899911 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:05:57.899918 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:05:57.899925 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:05:57.899934 kernel: SMBIOS 3.0.0 present.
Jul 12 00:05:57.899942 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jul 12 00:05:57.899949 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:05:57.899957 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:05:57.899964 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:05:57.899971 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:05:57.899979 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:05:57.899986 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Jul 12 00:05:57.899993 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:05:57.900002 kernel: cpuidle: using governor menu
Jul 12 00:05:57.900010 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:05:57.900017 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:05:57.900024 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:05:57.900032 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:05:57.900039 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:05:57.900047 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:05:57.900054 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:05:57.900061 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:05:57.900070 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:05:57.900078 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:05:57.900085 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:05:57.900093 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:05:57.900100 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:05:57.900107 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:05:57.900114 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:05:57.900122 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:05:57.900129 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:05:57.900138 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:05:57.900145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:05:57.900152 kernel: ACPI: Interpreter enabled
Jul 12 00:05:57.900160 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:05:57.900167 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:05:57.900175 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:05:57.900182 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:05:57.900189 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:05:57.901987 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:05:57.902089 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:05:57.902155 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:05:57.902263 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:05:57.902387 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:05:57.902400 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:05:57.902408 kernel: PCI host bridge to bus 0000:00
Jul 12 00:05:57.902487 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:05:57.902553 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:05:57.902610 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:05:57.902666 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:05:57.902745 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:05:57.902830 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jul 12 00:05:57.902895 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jul 12 00:05:57.902963 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 12 00:05:57.903035 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.903100 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jul 12 00:05:57.903173 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.903259 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jul 12 00:05:57.903359 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.903434 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jul 12 00:05:57.903508 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.903574 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jul 12 00:05:57.903644 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.903710 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jul 12 00:05:57.903780 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.903847 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jul 12 00:05:57.903918 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.903983 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jul 12 00:05:57.904063 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.904128 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jul 12 00:05:57.904200 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 12 00:05:57.907410 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jul 12 00:05:57.907518 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jul 12 00:05:57.907585 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jul 12 00:05:57.907663 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 12 00:05:57.907732 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jul 12 00:05:57.907798 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:05:57.907866 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 12 00:05:57.907948 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 12 00:05:57.908016 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jul 12 00:05:57.908093 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 12 00:05:57.908160 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jul 12 00:05:57.908470 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jul 12 00:05:57.908568 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 12 00:05:57.908636 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jul 12 00:05:57.908718 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 12 00:05:57.908788 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jul 12 00:05:57.908873 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 12 00:05:57.908941 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jul 12 00:05:57.909008 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 12 00:05:57.909084 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 12 00:05:57.909155 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jul 12 00:05:57.910368 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jul 12 00:05:57.910484 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 12 00:05:57.910558 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 12 00:05:57.910625 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jul 12 00:05:57.910691 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jul 12 00:05:57.911354 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 12 00:05:57.911441 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 12 00:05:57.911507 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jul 12 00:05:57.911577 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 12 00:05:57.911643 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jul 12 00:05:57.911707 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jul 12 00:05:57.911776 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 12 00:05:57.911843 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jul 12 00:05:57.911917 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jul 12 00:05:57.911985 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 12 00:05:57.912051 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jul 12 00:05:57.912116 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jul 12 00:05:57.912186 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 12 00:05:57.913768 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jul 12 00:05:57.913852 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jul 12 00:05:57.913928 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 12 00:05:57.913993 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jul 12 00:05:57.914057 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jul 12 00:05:57.914124 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 12 00:05:57.914191 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jul 12 00:05:57.914275 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jul 12 00:05:57.915307 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 12 00:05:57.915422 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jul 12 00:05:57.915497 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jul 12 00:05:57.915565 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jul 12 00:05:57.915631 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:05:57.915702 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jul 12 00:05:57.915766 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:05:57.915833 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jul 12 00:05:57.915900 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:05:57.915967 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jul 12 00:05:57.916032 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:05:57.916101 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jul 12 00:05:57.916176 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:05:57.916272 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jul 12 00:05:57.916389 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:05:57.916468 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jul 12 00:05:57.916533 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:05:57.916601 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jul 12 00:05:57.916665 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:05:57.916729 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jul 12 00:05:57.916793 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:05:57.916864 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jul 12 00:05:57.916932 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jul 12 00:05:57.916997 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jul 12 00:05:57.917062 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 12 00:05:57.917128 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jul 12 00:05:57.917192 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 12 00:05:57.919365 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jul 12 00:05:57.919471 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 12 00:05:57.919568 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jul 12 00:05:57.919647 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 12 00:05:57.919716 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jul 12 00:05:57.919781 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 12 00:05:57.919850 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jul 12 00:05:57.919916 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 12 00:05:57.919986 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jul 12 00:05:57.920050 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 12 00:05:57.920119 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jul 12 00:05:57.920188 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 12 00:05:57.920305 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jul 12 00:05:57.920382 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jul 12 00:05:57.920451 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jul 12 00:05:57.920524 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jul 12 00:05:57.920591 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:05:57.920660 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jul 12 00:05:57.920727 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 12 00:05:57.920796 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 12 00:05:57.920860 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jul 12 00:05:57.920925 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:05:57.920997 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jul 12 00:05:57.921065 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 12 00:05:57.921134 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 12 00:05:57.922361 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jul 12 00:05:57.922468 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:05:57.922548 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 12 00:05:57.922665 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jul 12 00:05:57.922758 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 12 00:05:57.922839 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 12 00:05:57.922928 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jul 12 00:05:57.923006 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:05:57.923096 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 12 00:05:57.923178 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 12 00:05:57.927375 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 12 00:05:57.927470 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jul 12 00:05:57.927538 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:05:57.927617 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jul 12 00:05:57.927695 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 12 00:05:57.927762 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 12 00:05:57.927827 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jul 12 00:05:57.927893 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:05:57.927966 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jul 12 00:05:57.928056 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jul 12 00:05:57.928129 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 12 00:05:57.928194 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 12 00:05:57.928344 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jul 12 00:05:57.928424 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:05:57.928498 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jul 12 00:05:57.928578 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jul 12 00:05:57.928647 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jul 12 00:05:57.928714 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 12 00:05:57.928778 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 12 00:05:57.928841 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jul 12 00:05:57.928909 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:05:57.928979 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 12 00:05:57.929044 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 12 00:05:57.929108 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jul 12 00:05:57.929171 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:05:57.929313 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 12 00:05:57.929390 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jul 12 00:05:57.929471 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jul 12 00:05:57.929544 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:05:57.929613 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:05:57.929672 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:05:57.929731 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:05:57.931002 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 12 00:05:57.931070 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jul 12 00:05:57.931129 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:05:57.931204 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jul 12 00:05:57.931562 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jul 12 00:05:57.931623 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:05:57.931692 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jul 12 00:05:57.931751 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jul 12 00:05:57.931808 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:05:57.931880 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jul 12 00:05:57.931939 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jul 12 00:05:57.931999 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:05:57.932078 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jul 12 00:05:57.932139 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jul 12 00:05:57.932197 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:05:57.932285 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jul 12 00:05:57.932369 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jul 12 00:05:57.932429 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:05:57.932505 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jul 12 00:05:57.932569 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jul 12 00:05:57.932632 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:05:57.932698 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jul 12 00:05:57.932766 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jul 12 00:05:57.932825 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:05:57.932894 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jul 12 00:05:57.932954 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jul 12 00:05:57.933013 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:05:57.933026 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:05:57.933034 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:05:57.933042 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:05:57.933050 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:05:57.933058 kernel: iommu: Default domain type: Translated
Jul 12 00:05:57.933066 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:05:57.933074 kernel: efivars: Registered efivars operations
Jul 12 00:05:57.933082 kernel: vgaarb: loaded
Jul 12 00:05:57.933090 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:05:57.933099 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:05:57.933107 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:05:57.933115 kernel: pnp: PnP ACPI init
Jul 12 00:05:57.933189 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:05:57.933200 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:05:57.935311 kernel: NET: Registered PF_INET protocol family
Jul 12 00:05:57.935332 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:05:57.935342 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:05:57.935358 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:05:57.935367 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:05:57.935375 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:05:57.935382 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:05:57.935390 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:05:57.935398 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:05:57.935406 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:05:57.935551 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jul 12 00:05:57.935565 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:05:57.935576 kernel: kvm [1]: HYP mode not available
Jul 12 00:05:57.935584 kernel: Initialise system trusted keyrings
Jul 12 00:05:57.935592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:05:57.935600 kernel: Key type asymmetric registered
Jul 12 00:05:57.935608 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:05:57.935616 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:05:57.935623 kernel: io scheduler mq-deadline registered
Jul 12 00:05:57.935631 kernel: io scheduler kyber registered
Jul 12 00:05:57.935639 kernel: io scheduler bfq registered
Jul 12 00:05:57.935649 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 12 00:05:57.935721 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jul 12 00:05:57.935787 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jul 12 00:05:57.935854 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.935922 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jul 12 00:05:57.935987 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jul 12 00:05:57.936053 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.936122 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jul 12 00:05:57.936189 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jul 12 00:05:57.936418 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.936496 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jul 12 00:05:57.936561 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jul 12 00:05:57.936630 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.936698 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jul 12 00:05:57.936763 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jul 12 00:05:57.936826 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.936894 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jul 12 00:05:57.936957 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jul 12 00:05:57.937023 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.937091 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jul 12 00:05:57.937155 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jul 12 00:05:57.937235 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.937354 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jul 12 00:05:57.937430 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jul 12 00:05:57.937503 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.937514 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jul 12 00:05:57.937583 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jul 12 00:05:57.937650 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jul 12 00:05:57.937715 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:05:57.937726 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:05:57.937734 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:05:57.937745 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:05:57.937819 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jul 12 00:05:57.937892 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jul 12 00:05:57.937903 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:05:57.937911 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 12 00:05:57.937977 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jul 12 00:05:57.937988 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jul 12 00:05:57.937996 kernel: thunder_xcv, ver 1.0
Jul 12 00:05:57.938003 kernel: thunder_bgx, ver 1.0
Jul 12 00:05:57.938013 kernel: nicpf, ver 1.0
Jul 12 00:05:57.938021 kernel: nicvf, ver 1.0
Jul 12 00:05:57.938100 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:05:57.938162 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:05:57 UTC (1752278757)
Jul 12 00:05:57.938173 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:05:57.938180 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 12 00:05:57.938188 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:05:57.938196 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:05:57.938246 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:05:57.938256 kernel: Segment Routing with IPv6
Jul 12 00:05:57.938263 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:05:57.938271 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:05:57.938279 kernel: Key type dns_resolver registered
Jul 12 00:05:57.938287 kernel: registered taskstats version 1
Jul 12 00:05:57.938306 kernel: Loading compiled-in X.509 certificates
Jul 12 00:05:57.938314 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:05:57.938322 kernel: Key type .fscrypt registered
Jul 12 00:05:57.938333 kernel: Key type fscrypt-provisioning registered
Jul 12 00:05:57.938341 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:05:57.938348 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:05:57.938356 kernel: ima: No architecture policies found
Jul 12 00:05:57.938364 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:05:57.938372 kernel: clk: Disabling unused clocks
Jul 12 00:05:57.938379 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:05:57.938387 kernel: Run /init as init process
Jul 12 00:05:57.938395 kernel: with arguments:
Jul 12 00:05:57.938404 kernel: /init
Jul 12 00:05:57.938412 kernel: with environment:
Jul 12 00:05:57.938419 kernel: HOME=/
Jul 12 00:05:57.938426 kernel: TERM=linux
Jul 12 00:05:57.938434 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:05:57.938444 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:05:57.938454 systemd[1]: Detected virtualization kvm.
Jul 12 00:05:57.938462 systemd[1]: Detected architecture arm64.
Jul 12 00:05:57.938472 systemd[1]: Running in initrd.
Jul 12 00:05:57.938480 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:05:57.938488 systemd[1]: Hostname set to <localhost>.
Jul 12 00:05:57.938496 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:05:57.938504 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:05:57.938512 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:05:57.938521 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:05:57.938529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:05:57.938539 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:05:57.938550 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:05:57.938558 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:05:57.938568 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:05:57.938576 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:05:57.938584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:05:57.938593 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:05:57.938602 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:05:57.938611 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:05:57.938619 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:05:57.938627 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:05:57.938635 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:05:57.938643 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:05:57.938652 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:05:57.938660 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:05:57.938669 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:05:57.938678 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:05:57.938686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:05:57.938695 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:05:57.938703 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:05:57.938712 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:05:57.938720 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:05:57.938728 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:05:57.938736 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:05:57.938746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:05:57.938755 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:57.938791 systemd-journald[237]: Collecting audit messages is disabled.
Jul 12 00:05:57.938813 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:05:57.938824 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:05:57.938832 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:05:57.938841 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:05:57.938850 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:05:57.938860 kernel: Bridge firewalling registered
Jul 12 00:05:57.938868 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:05:57.938876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:57.938885 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:05:57.938893 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:05:57.938901 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:05:57.938911 systemd-journald[237]: Journal started
Jul 12 00:05:57.938933 systemd-journald[237]: Runtime Journal (/run/log/journal/3960ffddbd704c2da8756520b0109dcd) is 8.0M, max 76.6M, 68.6M free.
Jul 12 00:05:57.896156 systemd-modules-load[238]: Inserted module 'overlay'
Jul 12 00:05:57.941720 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:05:57.920275 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 12 00:05:57.945465 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:05:57.950497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:05:57.959571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:05:57.967175 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:57.976482 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:05:57.979800 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:05:57.983383 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:05:57.990063 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:05:57.996013 dracut-cmdline[268]: dracut-dracut-053
Jul 12 00:05:58.000965 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:05:58.025678 systemd-resolved[276]: Positive Trust Anchors:
Jul 12 00:05:58.026448 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:05:58.026485 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:05:58.035667 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 12 00:05:58.037951 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:05:58.039215 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:05:58.086279 kernel: SCSI subsystem initialized
Jul 12 00:05:58.091256 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:05:58.099258 kernel: iscsi: registered transport (tcp)
Jul 12 00:05:58.113447 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:05:58.113566 kernel: QLogic iSCSI HBA Driver
Jul 12 00:05:58.167246 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:05:58.174451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:05:58.196794 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:05:58.196907 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:05:58.196950 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:05:58.248285 kernel: raid6: neonx8 gen() 15606 MB/s
Jul 12 00:05:58.264263 kernel: raid6: neonx4 gen() 15518 MB/s
Jul 12 00:05:58.281264 kernel: raid6: neonx2 gen() 13101 MB/s
Jul 12 00:05:58.298273 kernel: raid6: neonx1 gen() 10327 MB/s
Jul 12 00:05:58.315282 kernel: raid6: int64x8 gen() 6770 MB/s
Jul 12 00:05:58.332270 kernel: raid6: int64x4 gen() 7283 MB/s
Jul 12 00:05:58.349271 kernel: raid6: int64x2 gen() 6054 MB/s
Jul 12 00:05:58.366301 kernel: raid6: int64x1 gen() 4971 MB/s
Jul 12 00:05:58.366397 kernel: raid6: using algorithm neonx8 gen() 15606 MB/s
Jul 12 00:05:58.383556 kernel: raid6: .... xor() 11852 MB/s, rmw enabled
Jul 12 00:05:58.383651 kernel: raid6: using neon recovery algorithm
Jul 12 00:05:58.388275 kernel: xor: measuring software checksum speed
Jul 12 00:05:58.388355 kernel: 8regs : 19688 MB/sec
Jul 12 00:05:58.388375 kernel: 32regs : 17059 MB/sec
Jul 12 00:05:58.389251 kernel: arm64_neon : 26963 MB/sec
Jul 12 00:05:58.389281 kernel: xor: using function: arm64_neon (26963 MB/sec)
Jul 12 00:05:58.441268 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:05:58.458272 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:05:58.464507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:05:58.488490 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jul 12 00:05:58.492302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:05:58.504545 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:05:58.522949 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Jul 12 00:05:58.560157 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:05:58.567864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:05:58.620863 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:05:58.630547 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:05:58.648526 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:05:58.649822 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:05:58.651762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:05:58.652895 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:05:58.662789 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:05:58.689323 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:05:58.749923 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:05:58.750050 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:58.752751 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:05:58.753445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:05:58.753630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:58.755374 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:58.765773 kernel: scsi host0: Virtio SCSI HBA
Jul 12 00:05:58.766989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:58.768892 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 12 00:05:58.770229 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 12 00:05:58.775503 kernel: ACPI: bus type USB registered
Jul 12 00:05:58.775560 kernel: usbcore: registered new interface driver usbfs
Jul 12 00:05:58.779161 kernel: usbcore: registered new interface driver hub
Jul 12 00:05:58.781250 kernel: usbcore: registered new device driver usb
Jul 12 00:05:58.788131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:58.794452 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:05:58.813235 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jul 12 00:05:58.813507 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jul 12 00:05:58.813607 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jul 12 00:05:58.815240 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jul 12 00:05:58.816679 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jul 12 00:05:58.816882 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jul 12 00:05:58.818933 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jul 12 00:05:58.819151 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 00:05:58.819671 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jul 12 00:05:58.820382 kernel: hub 1-0:1.0: USB hub found
Jul 12 00:05:58.822163 kernel: hub 1-0:1.0: 4 ports detected
Jul 12 00:05:58.823956 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jul 12 00:05:58.824067 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jul 12 00:05:58.825898 kernel: hub 2-0:1.0: USB hub found
Jul 12 00:05:58.826079 kernel: hub 2-0:1.0: 4 ports detected
Jul 12 00:05:58.828389 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:58.834621 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jul 12 00:05:58.834781 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jul 12 00:05:58.834867 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jul 12 00:05:58.834948 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jul 12 00:05:58.835027 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 12 00:05:58.839329 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:05:58.839373 kernel: GPT:17805311 != 80003071
Jul 12 00:05:58.839384 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:05:58.840493 kernel: GPT:17805311 != 80003071
Jul 12 00:05:58.840554 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:05:58.840580 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:58.841602 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jul 12 00:05:58.885240 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (526)
Jul 12 00:05:58.890232 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (511)
Jul 12 00:05:58.894271 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 12 00:05:58.901169 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 12 00:05:58.906630 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 12 00:05:58.908075 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 12 00:05:58.914604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 12 00:05:58.924494 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:05:58.934263 disk-uuid[574]: Primary Header is updated.
Jul 12 00:05:58.934263 disk-uuid[574]: Secondary Entries is updated.
Jul 12 00:05:58.934263 disk-uuid[574]: Secondary Header is updated.
Jul 12 00:05:58.940235 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:58.945240 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:58.950240 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:59.060353 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jul 12 00:05:59.197633 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jul 12 00:05:59.197725 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jul 12 00:05:59.198031 kernel: usbcore: registered new interface driver usbhid
Jul 12 00:05:59.198068 kernel: usbhid: USB HID core driver
Jul 12 00:05:59.304248 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jul 12 00:05:59.435252 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jul 12 00:05:59.489359 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jul 12 00:05:59.955297 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:59.955358 disk-uuid[575]: The operation has completed successfully.
Jul 12 00:06:00.011338 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:06:00.011474 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:06:00.017512 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:06:00.022740 sh[592]: Success
Jul 12 00:06:00.040559 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:06:00.100154 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:06:00.108358 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:06:00.113840 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:06:00.130562 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:06:00.130653 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:06:00.130667 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:06:00.130683 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:06:00.130693 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:06:00.138246 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 12 00:06:00.141476 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:06:00.143101 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:06:00.150525 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:06:00.155111 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:06:00.165252 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:00.165322 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:06:00.165334 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:06:00.168323 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 12 00:06:00.168381 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:06:00.178751 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:06:00.179348 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:00.185774 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:06:00.195421 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:06:00.299012 ignition[669]: Ignition 2.19.0
Jul 12 00:06:00.299025 ignition[669]: Stage: fetch-offline
Jul 12 00:06:00.299071 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:00.299080 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:06:00.299273 ignition[669]: parsed url from cmdline: ""
Jul 12 00:06:00.299279 ignition[669]: no config URL provided
Jul 12 00:06:00.299285 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:06:00.302658 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:06:00.299293 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:06:00.299299 ignition[669]: failed to fetch config: resource requires networking
Jul 12 00:06:00.299497 ignition[669]: Ignition finished successfully
Jul 12 00:06:00.305568 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:06:00.312522 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:06:00.332749 systemd-networkd[782]: lo: Link UP
Jul 12 00:06:00.332762 systemd-networkd[782]: lo: Gained carrier
Jul 12 00:06:00.334602 systemd-networkd[782]: Enumeration completed
Jul 12 00:06:00.335040 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:06:00.335986 systemd[1]: Reached target network.target - Network.
Jul 12 00:06:00.336740 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:06:00.336744 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:06:00.337673 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:06:00.337676 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:06:00.338332 systemd-networkd[782]: eth0: Link UP
Jul 12 00:06:00.338336 systemd-networkd[782]: eth0: Gained carrier
Jul 12 00:06:00.338346 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:06:00.346050 systemd-networkd[782]: eth1: Link UP
Jul 12 00:06:00.346060 systemd-networkd[782]: eth1: Gained carrier
Jul 12 00:06:00.346069 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:06:00.347641 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 12 00:06:00.361934 ignition[784]: Ignition 2.19.0
Jul 12 00:06:00.361947 ignition[784]: Stage: fetch
Jul 12 00:06:00.362126 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:00.362136 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:06:00.362299 ignition[784]: parsed url from cmdline: ""
Jul 12 00:06:00.362304 ignition[784]: no config URL provided
Jul 12 00:06:00.362310 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:06:00.362320 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:06:00.362342 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jul 12 00:06:00.362995 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 12 00:06:00.381330 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:06:00.413335 systemd-networkd[782]: eth0: DHCPv4 address 91.99.219.165/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 12 00:06:00.564069 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jul 12 00:06:00.569123 ignition[784]: GET result: OK
Jul 12 00:06:00.569280 ignition[784]: parsing config with SHA512: c85611f6e7f14266a0de88622795fc55a412eb99eaa3948d6bff375fea2a6a531801da56de8add79a53dd75047dd6038e6ff78e9a9b777958a0bf64bd78b457d
Jul 12 00:06:00.574858 unknown[784]: fetched base config from "system"
Jul 12 00:06:00.574869 unknown[784]: fetched base config from "system"
Jul 12 00:06:00.575312 ignition[784]: fetch: fetch complete
Jul 12 00:06:00.574874 unknown[784]: fetched user config from "hetzner"
Jul 12 00:06:00.575318 ignition[784]: fetch: fetch passed
Jul 12 00:06:00.577131 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:06:00.575375 ignition[784]: Ignition finished successfully
Jul 12 00:06:00.584485 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:06:00.599165 ignition[791]: Ignition 2.19.0
Jul 12 00:06:00.599177 ignition[791]: Stage: kargs
Jul 12 00:06:00.599409 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:00.599420 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:06:00.600389 ignition[791]: kargs: kargs passed
Jul 12 00:06:00.600464 ignition[791]: Ignition finished successfully
Jul 12 00:06:00.602545 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:06:00.608444 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:06:00.622846 ignition[797]: Ignition 2.19.0
Jul 12 00:06:00.622863 ignition[797]: Stage: disks
Jul 12 00:06:00.623105 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:00.623116 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:06:00.624955 ignition[797]: disks: disks passed
Jul 12 00:06:00.625022 ignition[797]: Ignition finished successfully
Jul 12 00:06:00.627200 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:06:00.627946 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:06:00.628853 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:06:00.630001 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:06:00.631154 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:06:00.632102 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:06:00.635426 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:06:00.662290 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 12 00:06:00.667570 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:06:00.671505 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:06:00.727472 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:06:00.728089 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:06:00.729338 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:06:00.737374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:06:00.741067 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:06:00.747924 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 12 00:06:00.751402 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:06:00.751467 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:06:00.757198 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (813)
Jul 12 00:06:00.754982 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:06:00.758618 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:06:00.762170 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:00.762200 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:06:00.762242 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:06:00.771480 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 12 00:06:00.771549 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:06:00.776935 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:06:00.821341 coreos-metadata[815]: Jul 12 00:06:00.821 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jul 12 00:06:00.822950 coreos-metadata[815]: Jul 12 00:06:00.822 INFO Fetch successful
Jul 12 00:06:00.823622 coreos-metadata[815]: Jul 12 00:06:00.823 INFO wrote hostname ci-4081-3-4-n-f6981960e0 to /sysroot/etc/hostname
Jul 12 00:06:00.827376 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:06:00.828288 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:06:00.834800 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:06:00.840544 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:06:00.844940 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:06:00.947513 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:06:00.953577 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:06:00.955503 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:06:00.968269 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:00.986901 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:06:00.994062 ignition[932]: INFO : Ignition 2.19.0
Jul 12 00:06:00.995594 ignition[932]: INFO : Stage: mount
Jul 12 00:06:00.996105 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:00.996105 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:06:00.998313 ignition[932]: INFO : mount: mount passed
Jul 12 00:06:00.998313 ignition[932]: INFO : Ignition finished successfully
Jul 12 00:06:00.999175 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:06:01.005498 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:06:01.129149 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:06:01.137565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:06:01.148493 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Jul 12 00:06:01.148574 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:01.148600 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:06:01.149309 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:06:01.153296 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 12 00:06:01.153356 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:06:01.156915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:06:01.186395 ignition[961]: INFO : Ignition 2.19.0
Jul 12 00:06:01.186395 ignition[961]: INFO : Stage: files
Jul 12 00:06:01.187736 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:01.187736 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:06:01.187736 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:06:01.190588 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:06:01.190588 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:06:01.193150 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:06:01.194170 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:06:01.194170 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:06:01.193622 unknown[961]: wrote ssh authorized keys file for user: core
Jul 12 00:06:01.196843 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:06:01.196843 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 12 00:06:01.832666 systemd-networkd[782]: eth1: Gained IPv6LL
Jul 12 00:06:01.896462 systemd-networkd[782]: eth0: Gained IPv6LL
Jul 12 00:06:02.924354 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:06:04.764502 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:06:04.767167 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 12 00:06:05.513733 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:06:06.595389 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:06:06.597407 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:06:06.597407 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:06:06.600396 ignition[961]: INFO : files: files passed
Jul 12 00:06:06.600396 ignition[961]: INFO : Ignition finished successfully
Jul 12 00:06:06.600853 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:06:06.607448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:06:06.611300 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:06:06.615490 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:06:06.615612 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:06:06.637984 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:06:06.637984 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:06:06.641086 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:06:06.644288 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:06:06.645539 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:06:06.651467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:06:06.689444 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:06:06.690301 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:06:06.691877 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:06:06.693238 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:06:06.693836 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:06:06.697419 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:06:06.714504 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:06:06.719396 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:06:06.734387 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:06:06.735857 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:06:06.736642 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:06:06.737675 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:06:06.737798 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:06:06.739176 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:06:06.739801 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:06:06.741221 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:06:06.742312 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:06:06.743304 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:06:06.744400 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:06:06.745427 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:06:06.746593 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:06:06.747576 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:06:06.748687 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:06:06.749575 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:06:06.749695 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:06:06.750996 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:06:06.751697 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:06:06.752770 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:06:06.756286 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:06:06.756976 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:06:06.757093 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:06:06.758722 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:06:06.758844 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:06:06.760040 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:06:06.760135 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:06:06.762787 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 12 00:06:06.763007 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:06:06.773501 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:06:06.778474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:06:06.778957 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:06:06.779081 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:06:06.783974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:06:06.784082 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:06:06.794234 ignition[1013]: INFO : Ignition 2.19.0
Jul 12 00:06:06.794234 ignition[1013]: INFO : Stage: umount
Jul 12 00:06:06.793968 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:06:06.794063 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:06:06.799993 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:06.799993 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:06:06.799993 ignition[1013]: INFO : umount: umount passed
Jul 12 00:06:06.799993 ignition[1013]: INFO : Ignition finished successfully
Jul 12 00:06:06.800069 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:06:06.800178 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:06:06.801144 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:06:06.801202 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:06:06.803156 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:06:06.803527 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:06:06.804083 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:06:06.804124 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 12 00:06:06.808140 systemd[1]: Stopped target network.target - Network.
Jul 12 00:06:06.811393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:06:06.811522 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:06:06.814537 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:06:06.816635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:06:06.819493 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:06:06.820127 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:06:06.822533 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:06:06.823299 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:06:06.823347 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:06:06.824155 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:06:06.824202 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:06:06.825275 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:06:06.825329 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:06:06.826089 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:06:06.826126 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:06:06.829443 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:06:06.832229 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:06:06.835484 systemd-networkd[782]: eth0: DHCPv6 lease lost
Jul 12 00:06:06.836001 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:06:06.836684 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:06:06.836797 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:06:06.839341 systemd-networkd[782]: eth1: DHCPv6 lease lost
Jul 12 00:06:06.840475 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:06:06.840590 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:06:06.843023 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:06:06.843147 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:06:06.844996 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:06:06.845285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:06:06.846132 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:06:06.846198 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:06:06.861325 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:06:06.862613 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:06:06.862728 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:06:06.865542 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:06:06.865636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:06:06.866919 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:06:06.866963 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:06:06.868323 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:06:06.868371 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:06:06.869370 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:06:06.879359 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:06:06.879495 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:06:06.882952 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:06:06.883116 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:06:06.884878 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:06:06.884918 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:06:06.885825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:06:06.885860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:06:06.886471 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:06:06.886516 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:06:06.888038 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:06:06.888084 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:06:06.889750 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:06:06.889808 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:06:06.896442 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:06:06.896999 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:06:06.897056 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:06:06.898289 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 12 00:06:06.898335 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:06:06.899506 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:06:06.899547 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:06:06.901182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:06:06.901265 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:06:06.909369 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:06:06.910687 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:06:06.912458 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:06:06.921460 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:06:06.931094 systemd[1]: Switching root.
Jul 12 00:06:06.974417 systemd-journald[237]: Journal stopped
Jul 12 00:06:07.827190 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:06:07.827273 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:06:07.827287 kernel: SELinux: policy capability open_perms=1
Jul 12 00:06:07.827297 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:06:07.827307 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:06:07.827320 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:06:07.827334 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:06:07.827343 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:06:07.827352 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:06:07.827361 kernel: audit: type=1403 audit(1752278767.125:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:06:07.827372 systemd[1]: Successfully loaded SELinux policy in 37.108ms.
Jul 12 00:06:07.827396 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.737ms.
Jul 12 00:06:07.827408 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:06:07.827418 systemd[1]: Detected virtualization kvm.
Jul 12 00:06:07.827431 systemd[1]: Detected architecture arm64.
Jul 12 00:06:07.827441 systemd[1]: Detected first boot.
Jul 12 00:06:07.827451 systemd[1]: Hostname set to <ci-4081-3-4-n-f6981960e0>.
Jul 12 00:06:07.827461 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:06:07.827471 zram_generator::config[1056]: No configuration found.
Jul 12 00:06:07.827482 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:06:07.827492 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:06:07.827506 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:06:07.827518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:06:07.827529 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:06:07.827539 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:06:07.827550 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:06:07.827561 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:06:07.827571 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:06:07.827581 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:06:07.827592 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:06:07.827604 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:06:07.827614 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:06:07.827624 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:06:07.827634 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:06:07.827645 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:06:07.827655 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:06:07.827665 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:06:07.827675 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 12 00:06:07.827686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:06:07.827696 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 00:06:07.827708 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 00:06:07.827718 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:06:07.827728 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:06:07.827738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:06:07.827752 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:06:07.827764 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:06:07.827776 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:06:07.827787 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:06:07.827798 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:06:07.827808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:06:07.827819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:06:07.827829 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:06:07.827840 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:06:07.827850 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:06:07.827860 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:06:07.827871 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:06:07.827882 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:06:07.827892 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:06:07.827920 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:06:07.827934 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:06:07.827944 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:06:07.827955 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:06:07.827966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:06:07.827986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:06:07.827999 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:06:07.828009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:06:07.828020 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:06:07.828031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:06:07.828042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:06:07.828055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:06:07.828066 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:06:07.828076 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 00:06:07.828087 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 00:06:07.828098 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 00:06:07.828108 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 00:06:07.828118 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:06:07.828128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:06:07.828139 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:06:07.828152 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:06:07.828162 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:06:07.828204 systemd-journald[1119]: Collecting audit messages is disabled.
Jul 12 00:06:07.832010 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 00:06:07.832030 systemd[1]: Stopped verity-setup.service.
Jul 12 00:06:07.832042 kernel: ACPI: bus type drm_connector registered
Jul 12 00:06:07.832058 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:06:07.832073 systemd-journald[1119]: Journal started
Jul 12 00:06:07.832095 systemd-journald[1119]: Runtime Journal (/run/log/journal/3960ffddbd704c2da8756520b0109dcd) is 8.0M, max 76.6M, 68.6M free.
Jul 12 00:06:07.612295 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:06:07.640349 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 12 00:06:07.640991 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 00:06:07.835753 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:06:07.839569 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:06:07.840513 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:06:07.841472 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:06:07.843104 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:06:07.844576 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:06:07.845731 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:06:07.847599 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:06:07.847793 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:06:07.850620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:06:07.850760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:06:07.854609 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:06:07.855380 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:06:07.856322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:06:07.856712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:06:07.866906 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:06:07.869927 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:06:07.886366 kernel: fuse: init (API version 7.39)
Jul 12 00:06:07.886427 kernel: loop: module loaded
Jul 12 00:06:07.887810 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:06:07.888013 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:06:07.892707 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:06:07.895626 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:06:07.895820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:06:07.896863 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:06:07.899026 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:06:07.906300 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:06:07.909342 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:06:07.909950 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:06:07.909989 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:06:07.916425 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 12 00:06:07.924429 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 00:06:07.927019 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:06:07.927820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:06:07.934408 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:06:07.938562 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:06:07.940892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:06:07.942610 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:06:07.943619 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:06:07.947407 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:06:07.951576 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:06:07.962411 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:06:07.965914 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:06:07.968983 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:06:07.973306 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 00:06:07.989018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:06:08.000586 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 12 00:06:08.005747 kernel: loop0: detected capacity change from 0 to 114432
Jul 12 00:06:08.017110 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:06:08.019151 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:06:08.023467 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:06:08.024126 systemd-journald[1119]: Time spent on flushing to /var/log/journal/3960ffddbd704c2da8756520b0109dcd is 72.343ms for 1132 entries.
Jul 12 00:06:08.024126 systemd-journald[1119]: System Journal (/var/log/journal/3960ffddbd704c2da8756520b0109dcd) is 8.0M, max 584.8M, 576.8M free.
Jul 12 00:06:08.109358 systemd-journald[1119]: Received client request to flush runtime journal.
Jul 12 00:06:08.109415 kernel: loop1: detected capacity change from 0 to 207008
Jul 12 00:06:08.025908 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 12 00:06:08.041913 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 12 00:06:08.051237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:06:08.066918 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jul 12 00:06:08.066929 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jul 12 00:06:08.077081 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:06:08.093348 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:06:08.111720 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:06:08.113804 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:06:08.116340 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 12 00:06:08.128951 kernel: loop2: detected capacity change from 0 to 8
Jul 12 00:06:08.150243 kernel: loop3: detected capacity change from 0 to 114328
Jul 12 00:06:08.150384 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:06:08.164492 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:06:08.199229 kernel: loop4: detected capacity change from 0 to 114432
Jul 12 00:06:08.211124 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jul 12 00:06:08.211711 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jul 12 00:06:08.221467 kernel: loop5: detected capacity change from 0 to 207008
Jul 12 00:06:08.222434 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:06:08.246245 kernel: loop6: detected capacity change from 0 to 8
Jul 12 00:06:08.249236 kernel: loop7: detected capacity change from 0 to 114328
Jul 12 00:06:08.261302 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jul 12 00:06:08.261786 (sd-merge)[1198]: Merged extensions into '/usr'.
Jul 12 00:06:08.267694 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 00:06:08.267716 systemd[1]: Reloading...
Jul 12 00:06:08.368027 zram_generator::config[1225]: No configuration found.
Jul 12 00:06:08.500436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:06:08.536385 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:06:08.550653 systemd[1]: Reloading finished in 282 ms.
Jul 12 00:06:08.586743 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:06:08.590014 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 00:06:08.599522 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:06:08.610393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:06:08.620124 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
Jul 12 00:06:08.620143 systemd[1]: Reloading...
Jul 12 00:06:08.637612 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:06:08.642528 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 00:06:08.646638 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:06:08.646870 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jul 12 00:06:08.646921 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jul 12 00:06:08.653953 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:06:08.653970 systemd-tmpfiles[1263]: Skipping /boot
Jul 12 00:06:08.666995 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:06:08.667012 systemd-tmpfiles[1263]: Skipping /boot
Jul 12 00:06:08.729230 zram_generator::config[1293]: No configuration found.
Jul 12 00:06:08.832423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:06:08.878674 systemd[1]: Reloading finished in 258 ms.
Jul 12 00:06:08.901016 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 00:06:08.908794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:06:08.926571 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 12 00:06:08.932486 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 00:06:08.936559 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 00:06:08.941405 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:06:08.950520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:06:08.953882 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 00:06:08.959980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:06:08.968518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:06:08.972496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:06:08.978532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:06:08.979363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:06:08.981524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:06:08.981685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:06:08.985526 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 00:06:08.991471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:06:08.998506 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:06:09.000408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:06:09.007719 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:06:09.013635 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 00:06:09.015739 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 00:06:09.017800 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:06:09.017969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:06:09.028977 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 12 00:06:09.034418 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 00:06:09.042877 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 00:06:09.043935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:06:09.044073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:06:09.046046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:06:09.046108 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:06:09.051819 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:06:09.054524 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:06:09.056474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:06:09.056619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:06:09.057515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:06:09.075297 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:06:09.077481 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Jul 12 00:06:09.078203 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:06:09.102695 augenrules[1371]: No rules Jul 12 00:06:09.105303 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:06:09.116349 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:06:09.125401 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:06:09.167452 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:06:09.168584 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:06:09.199450 systemd-resolved[1333]: Positive Trust Anchors: Jul 12 00:06:09.199474 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:06:09.199508 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:06:09.207563 systemd-resolved[1333]: Using system hostname 'ci-4081-3-4-n-f6981960e0'. Jul 12 00:06:09.209043 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:06:09.210151 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:06:09.211936 systemd-networkd[1379]: lo: Link UP Jul 12 00:06:09.212625 systemd-networkd[1379]: lo: Gained carrier Jul 12 00:06:09.213467 systemd-networkd[1379]: Enumeration completed Jul 12 00:06:09.214097 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:06:09.215035 systemd[1]: Reached target network.target - Network. Jul 12 00:06:09.230517 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 12 00:06:09.233370 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:06:09.304732 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:06:09.304919 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:06:09.306339 systemd-networkd[1379]: eth0: Link UP Jul 12 00:06:09.306509 systemd-networkd[1379]: eth0: Gained carrier Jul 12 00:06:09.306597 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:06:09.330693 systemd-networkd[1379]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:06:09.330702 systemd-networkd[1379]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:06:09.331309 systemd-networkd[1379]: eth1: Link UP Jul 12 00:06:09.331314 systemd-networkd[1379]: eth1: Gained carrier Jul 12 00:06:09.331333 systemd-networkd[1379]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:06:09.332233 kernel: mousedev: PS/2 mouse device common for all mice Jul 12 00:06:09.363240 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1393) Jul 12 00:06:09.373395 systemd-networkd[1379]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:06:09.381077 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jul 12 00:06:09.381280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:06:09.393787 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:06:09.393971 systemd-networkd[1379]: eth0: DHCPv4 address 91.99.219.165/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 12 00:06:09.396083 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Jul 12 00:06:09.396851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:06:09.401154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:06:09.402350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:06:09.402398 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:06:09.404852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:06:09.405053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:06:09.421110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:06:09.422304 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:06:09.425637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:06:09.425950 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 12 00:06:09.427261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:06:09.429710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:06:09.455803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 12 00:06:09.460238 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jul 12 00:06:09.460308 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 12 00:06:09.460322 kernel: [drm] features: -context_init Jul 12 00:06:09.461228 kernel: [drm] number of scanouts: 1 Jul 12 00:06:09.461294 kernel: [drm] number of cap sets: 0 Jul 12 00:06:09.463227 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jul 12 00:06:09.468465 kernel: Console: switching to colour frame buffer device 160x50 Jul 12 00:06:09.467896 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:06:09.471252 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 12 00:06:09.486780 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:06:09.494457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:06:09.494656 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:06:09.503355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:06:09.504276 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:06:09.567043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:06:09.613910 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:06:09.624548 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:06:09.638588 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:06:09.672013 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:06:09.674096 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:06:09.674892 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:06:09.675721 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:06:09.676719 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:06:09.677818 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:06:09.678559 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:06:09.679281 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:06:09.679947 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:06:09.680003 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:06:09.680510 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:06:09.681798 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:06:09.683812 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 12 00:06:09.699448 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:06:09.701895 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:06:09.704448 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:06:09.705755 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:06:09.706370 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:06:09.706966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:06:09.707004 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:06:09.710359 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:06:09.712564 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:06:09.715390 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 12 00:06:09.719490 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:06:09.726084 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:06:09.731452 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:06:09.732086 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:06:09.736410 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:06:09.741365 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:06:09.743547 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jul 12 00:06:09.746586 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:06:09.752684 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:06:09.763696 jq[1449]: false Jul 12 00:06:09.767429 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:06:09.769847 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:06:09.770419 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:06:09.772995 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:06:09.780366 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:06:09.781826 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:06:09.787631 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:06:09.787851 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:06:09.807823 dbus-daemon[1448]: [system] SELinux support is enabled Jul 12 00:06:09.808007 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:06:09.812204 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:06:09.812260 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 12 00:06:09.814328 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:06:09.814362 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:06:09.817664 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:06:09.817842 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:06:09.840605 update_engine[1460]: I20250712 00:06:09.840386 1460 main.cc:92] Flatcar Update Engine starting Jul 12 00:06:09.841940 jq[1461]: true Jul 12 00:06:09.848553 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:06:09.850876 update_engine[1460]: I20250712 00:06:09.850654 1460 update_check_scheduler.cc:74] Next update check in 2m29s Jul 12 00:06:09.852911 coreos-metadata[1447]: Jul 12 00:06:09.852 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jul 12 00:06:09.855907 extend-filesystems[1450]: Found loop4 Jul 12 00:06:09.855907 extend-filesystems[1450]: Found loop5 Jul 12 00:06:09.855907 extend-filesystems[1450]: Found loop6 Jul 12 00:06:09.855907 extend-filesystems[1450]: Found loop7 Jul 12 00:06:09.855907 extend-filesystems[1450]: Found sda Jul 12 00:06:09.855724 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:06:09.875844 tar[1468]: linux-arm64/LICENSE Jul 12 00:06:09.875844 tar[1468]: linux-arm64/helm Jul 12 00:06:09.880580 extend-filesystems[1450]: Found sda1 Jul 12 00:06:09.880580 extend-filesystems[1450]: Found sda2 Jul 12 00:06:09.880580 extend-filesystems[1450]: Found sda3 Jul 12 00:06:09.880580 extend-filesystems[1450]: Found usr Jul 12 00:06:09.880580 extend-filesystems[1450]: Found sda4 Jul 12 00:06:09.880580 extend-filesystems[1450]: Found sda6 Jul 12 00:06:09.880580 extend-filesystems[1450]: Found sda7 Jul 12 00:06:09.880580 extend-filesystems[1450]: Found sda9 Jul 12 00:06:09.880580 extend-filesystems[1450]: Checking size of /dev/sda9 Jul 12 00:06:09.921887 coreos-metadata[1447]: Jul 12 00:06:09.856 INFO Fetch successful Jul 12 00:06:09.921887 coreos-metadata[1447]: Jul 12 00:06:09.858 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jul 12 00:06:09.921887 coreos-metadata[1447]: Jul 12 00:06:09.858 INFO Fetch successful Jul 12 00:06:09.857708 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:06:09.933417 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jul 12 00:06:09.887499 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:06:09.933731 extend-filesystems[1450]: Resized partition /dev/sda9 Jul 12 00:06:09.946397 jq[1484]: true Jul 12 00:06:09.887673 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:06:09.946717 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:06:09.971826 systemd-logind[1459]: New seat seat0. Jul 12 00:06:09.984552 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:06:09.984576 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jul 12 00:06:09.985095 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 12 00:06:10.004334 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1390) Jul 12 00:06:10.015633 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 12 00:06:10.017439 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:06:10.095261 bash[1525]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:06:10.101046 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:06:10.107906 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jul 12 00:06:10.112551 systemd[1]: Starting sshkeys.service... Jul 12 00:06:10.140618 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 12 00:06:10.144852 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 12 00:06:10.144852 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 5 Jul 12 00:06:10.144852 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jul 12 00:06:10.152565 extend-filesystems[1450]: Resized filesystem in /dev/sda9 Jul 12 00:06:10.152565 extend-filesystems[1450]: Found sr0 Jul 12 00:06:10.150410 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 12 00:06:10.153636 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:06:10.153811 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:06:10.159389 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:06:10.204315 coreos-metadata[1531]: Jul 12 00:06:10.204 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jul 12 00:06:10.208491 coreos-metadata[1531]: Jul 12 00:06:10.205 INFO Fetch successful Jul 12 00:06:10.211073 unknown[1531]: wrote ssh authorized keys file for user: core Jul 12 00:06:10.240117 containerd[1476]: time="2025-07-12T00:06:10.240014560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:06:10.241437 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:06:10.243819 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 12 00:06:10.252135 systemd[1]: Finished sshkeys.service. Jul 12 00:06:10.274423 containerd[1476]: time="2025-07-12T00:06:10.274365320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:10.276778 containerd[1476]: time="2025-07-12T00:06:10.276730680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:10.277234 containerd[1476]: time="2025-07-12T00:06:10.276960560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:06:10.277234 containerd[1476]: time="2025-07-12T00:06:10.276989520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:06:10.277234 containerd[1476]: time="2025-07-12T00:06:10.277185960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 12 00:06:10.277591 containerd[1476]: time="2025-07-12T00:06:10.277414880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:10.277591 containerd[1476]: time="2025-07-12T00:06:10.277512320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:10.277591 containerd[1476]: time="2025-07-12T00:06:10.277527240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:10.277843 containerd[1476]: time="2025-07-12T00:06:10.277821480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:10.278331 containerd[1476]: time="2025-07-12T00:06:10.277888000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:10.278331 containerd[1476]: time="2025-07-12T00:06:10.277907520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:10.278331 containerd[1476]: time="2025-07-12T00:06:10.277917760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:10.278331 containerd[1476]: time="2025-07-12T00:06:10.278007160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:10.278331 containerd[1476]: time="2025-07-12T00:06:10.278291000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:10.278608 containerd[1476]: time="2025-07-12T00:06:10.278587760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:10.278673 containerd[1476]: time="2025-07-12T00:06:10.278659680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:06:10.278805 containerd[1476]: time="2025-07-12T00:06:10.278788800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:06:10.278912 containerd[1476]: time="2025-07-12T00:06:10.278896240Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:06:10.287230 containerd[1476]: time="2025-07-12T00:06:10.286189280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:06:10.287230 containerd[1476]: time="2025-07-12T00:06:10.286689960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:06:10.287230 containerd[1476]: time="2025-07-12T00:06:10.286717560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:06:10.287230 containerd[1476]: time="2025-07-12T00:06:10.286733480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jul 12 00:06:10.287230 containerd[1476]: time="2025-07-12T00:06:10.286747600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:06:10.287230 containerd[1476]: time="2025-07-12T00:06:10.286896960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:06:10.287230 containerd[1476]: time="2025-07-12T00:06:10.287117160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:06:10.288112 containerd[1476]: time="2025-07-12T00:06:10.288089400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:06:10.288237 containerd[1476]: time="2025-07-12T00:06:10.288215280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:06:10.288306 containerd[1476]: time="2025-07-12T00:06:10.288292760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:06:10.288369 containerd[1476]: time="2025-07-12T00:06:10.288346440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:06:10.288421 containerd[1476]: time="2025-07-12T00:06:10.288409240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:06:10.288497 containerd[1476]: time="2025-07-12T00:06:10.288484840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:06:10.288584 containerd[1476]: time="2025-07-12T00:06:10.288570600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:06:10.288668 containerd[1476]: time="2025-07-12T00:06:10.288652960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:06:10.288920 containerd[1476]: time="2025-07-12T00:06:10.288860320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:06:10.288920 containerd[1476]: time="2025-07-12T00:06:10.288883120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:06:10.288920 containerd[1476]: time="2025-07-12T00:06:10.288897240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289242480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289269800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289283120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289296800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289323080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289336880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289356960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289370080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289390800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289412960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289429480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.289462 containerd[1476]: time="2025-07-12T00:06:10.289442120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.290200 containerd[1476]: time="2025-07-12T00:06:10.289763600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.290200 containerd[1476]: time="2025-07-12T00:06:10.289794880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:06:10.290200 containerd[1476]: time="2025-07-12T00:06:10.289821200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.290200 containerd[1476]: time="2025-07-12T00:06:10.290061720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.290200 containerd[1476]: time="2025-07-12T00:06:10.290077000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:06:10.290835 containerd[1476]: time="2025-07-12T00:06:10.290487360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:06:10.290835 containerd[1476]: time="2025-07-12T00:06:10.290516400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:06:10.291843 containerd[1476]: time="2025-07-12T00:06:10.290527720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:06:10.291843 containerd[1476]: time="2025-07-12T00:06:10.290991960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:06:10.291843 containerd[1476]: time="2025-07-12T00:06:10.291006520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.291843 containerd[1476]: time="2025-07-12T00:06:10.291022520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:06:10.291843 containerd[1476]: time="2025-07-12T00:06:10.291032800Z" level=info msg="NRI interface is disabled by configuration." 
Jul 12 00:06:10.291843 containerd[1476]: time="2025-07-12T00:06:10.291043000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:06:10.292249 containerd[1476]: time="2025-07-12T00:06:10.292131920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:06:10.292506 containerd[1476]: time="2025-07-12T00:06:10.292487160Z" level=info msg="Connect containerd service" Jul 12 00:06:10.292665 containerd[1476]: time="2025-07-12T00:06:10.292649760Z" level=info msg="using legacy CRI server" Jul 12 00:06:10.293066 containerd[1476]: time="2025-07-12T00:06:10.292912520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:06:10.293066 containerd[1476]: time="2025-07-12T00:06:10.293035520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:06:10.296081 containerd[1476]: time="2025-07-12T00:06:10.295651440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:06:10.296081 containerd[1476]: time="2025-07-12T00:06:10.295846240Z" level=info msg="Start subscribing containerd event" Jul 12 00:06:10.296081 containerd[1476]: time="2025-07-12T00:06:10.295891280Z" level=info msg="Start recovering state" Jul 12 00:06:10.296081 containerd[1476]: time="2025-07-12T00:06:10.296052080Z" level=info msg="Start event monitor" Jul 12 00:06:10.296620 containerd[1476]: time="2025-07-12T00:06:10.296067000Z" level=info msg="Start snapshots syncer" Jul 12 00:06:10.296690 containerd[1476]: time="2025-07-12T00:06:10.296677200Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:06:10.297223 containerd[1476]: time="2025-07-12T00:06:10.296792320Z" level=info msg="Start streaming server" Jul 12 00:06:10.297756 containerd[1476]: time="2025-07-12T00:06:10.297464560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:06:10.297756 containerd[1476]: time="2025-07-12T00:06:10.297521240Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:06:10.297756 containerd[1476]: time="2025-07-12T00:06:10.297567560Z" level=info msg="containerd successfully booted in 0.058727s" Jul 12 00:06:10.347808 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:06:10.431565 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:06:10.458332 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:06:10.466108 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:06:10.472994 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:06:10.473215 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:06:10.483186 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:06:10.493609 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:06:10.502596 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:06:10.506530 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 00:06:10.508554 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:06:10.602954 tar[1468]: linux-arm64/README.md Jul 12 00:06:10.615339 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:06:10.728412 systemd-networkd[1379]: eth1: Gained IPv6LL Jul 12 00:06:10.729962 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Jul 12 00:06:10.733064 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:06:10.735276 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:06:10.747657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:10.752050 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:06:10.780328 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:06:10.856961 systemd-networkd[1379]: eth0: Gained IPv6LL Jul 12 00:06:10.857866 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Jul 12 00:06:11.558114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:11.561436 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 12 00:06:11.564403 systemd[1]: Startup finished in 787ms (kernel) + 9.428s (initrd) + 4.476s (userspace) = 14.692s. Jul 12 00:06:11.570528 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:12.065150 kubelet[1578]: E0712 00:06:12.065040 1578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:12.067822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:12.068054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:22.319109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:06:22.325565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:22.448913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:22.458669 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:22.513824 kubelet[1598]: E0712 00:06:22.513763 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:22.519605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:22.519855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:32.770581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:06:32.778568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:32.928410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:32.929568 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:32.981790 kubelet[1613]: E0712 00:06:32.981660 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:32.984527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:32.984737 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:41.160783 systemd-timesyncd[1352]: Contacted time server 85.215.93.134:123 (2.flatcar.pool.ntp.org). Jul 12 00:06:41.160990 systemd-timesyncd[1352]: Initial clock synchronization to Sat 2025-07-12 00:06:41.098888 UTC. Jul 12 00:06:43.039819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:06:43.046523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:43.158438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:06:43.163832 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:43.212354 kubelet[1629]: E0712 00:06:43.212308 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:43.216099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:43.216373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:45.149072 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:06:45.160353 systemd[1]: Started sshd@0-91.99.219.165:22-139.178.68.195:55226.service - OpenSSH per-connection server daemon (139.178.68.195:55226). Jul 12 00:06:46.151849 sshd[1638]: Accepted publickey for core from 139.178.68.195 port 55226 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:06:46.154776 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:46.166838 systemd-logind[1459]: New session 1 of user core. Jul 12 00:06:46.168869 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:06:46.174830 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:06:46.193251 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:06:46.199627 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:06:46.208318 (systemd)[1642]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:06:46.315100 systemd[1642]: Queued start job for default target default.target. Jul 12 00:06:46.324959 systemd[1642]: Created slice app.slice - User Application Slice. Jul 12 00:06:46.325005 systemd[1642]: Reached target paths.target - Paths. Jul 12 00:06:46.325029 systemd[1642]: Reached target timers.target - Timers. Jul 12 00:06:46.327054 systemd[1642]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:06:46.346628 systemd[1642]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:06:46.346734 systemd[1642]: Reached target sockets.target - Sockets. Jul 12 00:06:46.346748 systemd[1642]: Reached target basic.target - Basic System. Jul 12 00:06:46.346832 systemd[1642]: Reached target default.target - Main User Target. Jul 12 00:06:46.346865 systemd[1642]: Startup finished in 131ms. Jul 12 00:06:46.347295 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:06:46.355514 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:06:47.047477 systemd[1]: Started sshd@1-91.99.219.165:22-139.178.68.195:55228.service - OpenSSH per-connection server daemon (139.178.68.195:55228). Jul 12 00:06:48.021325 sshd[1653]: Accepted publickey for core from 139.178.68.195 port 55228 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:06:48.023348 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:48.029703 systemd-logind[1459]: New session 2 of user core. Jul 12 00:06:48.035588 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 12 00:06:48.697117 sshd[1653]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:48.703386 systemd[1]: sshd@1-91.99.219.165:22-139.178.68.195:55228.service: Deactivated successfully. Jul 12 00:06:48.707624 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:06:48.709846 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:06:48.710974 systemd-logind[1459]: Removed session 2. Jul 12 00:06:48.878669 systemd[1]: Started sshd@2-91.99.219.165:22-139.178.68.195:34774.service - OpenSSH per-connection server daemon (139.178.68.195:34774). Jul 12 00:06:49.865590 sshd[1660]: Accepted publickey for core from 139.178.68.195 port 34774 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:06:49.867923 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:49.874163 systemd-logind[1459]: New session 3 of user core. Jul 12 00:06:49.879521 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:06:50.550794 sshd[1660]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:50.555906 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:06:50.556592 systemd[1]: sshd@2-91.99.219.165:22-139.178.68.195:34774.service: Deactivated successfully. Jul 12 00:06:50.559979 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:06:50.561458 systemd-logind[1459]: Removed session 3. Jul 12 00:06:50.727396 systemd[1]: Started sshd@3-91.99.219.165:22-139.178.68.195:34790.service - OpenSSH per-connection server daemon (139.178.68.195:34790). Jul 12 00:06:51.698949 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 34790 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:06:51.700629 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:51.707274 systemd-logind[1459]: New session 4 of user core. Jul 12 00:06:51.718550 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:06:52.377443 sshd[1667]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:52.382929 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:06:52.383938 systemd[1]: sshd@3-91.99.219.165:22-139.178.68.195:34790.service: Deactivated successfully. Jul 12 00:06:52.385969 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:06:52.387192 systemd-logind[1459]: Removed session 4. Jul 12 00:06:52.560728 systemd[1]: Started sshd@4-91.99.219.165:22-139.178.68.195:34796.service - OpenSSH per-connection server daemon (139.178.68.195:34796). Jul 12 00:06:53.289471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 12 00:06:53.297609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:53.418908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:06:53.430789 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:53.473720 kubelet[1684]: E0712 00:06:53.473674 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:53.476592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:53.476740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:53.536847 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 34796 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:06:53.539347 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:53.545900 systemd-logind[1459]: New session 5 of user core. Jul 12 00:06:53.552427 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:06:54.064675 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:06:54.064964 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:54.083795 sudo[1691]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:54.244075 sshd[1674]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:54.249804 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:06:54.250445 systemd[1]: sshd@4-91.99.219.165:22-139.178.68.195:34796.service: Deactivated successfully. Jul 12 00:06:54.252345 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:06:54.254530 systemd-logind[1459]: Removed session 5. Jul 12 00:06:54.423685 systemd[1]: Started sshd@5-91.99.219.165:22-139.178.68.195:34802.service - OpenSSH per-connection server daemon (139.178.68.195:34802). Jul 12 00:06:54.873291 update_engine[1460]: I20250712 00:06:54.873085 1460 update_attempter.cc:509] Updating boot flags... Jul 12 00:06:54.915260 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1707) Jul 12 00:06:54.987239 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1703) Jul 12 00:06:55.401436 sshd[1696]: Accepted publickey for core from 139.178.68.195 port 34802 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:06:55.404019 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:55.408498 systemd-logind[1459]: New session 6 of user core. Jul 12 00:06:55.416552 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 12 00:06:55.921831 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:06:55.922279 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:55.927353 sudo[1718]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:55.934320 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:06:55.934616 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:55.949748 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:06:55.952783 auditctl[1721]: No rules Jul 12 00:06:55.953094 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:06:55.953280 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:06:55.956167 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:06:55.995119 augenrules[1739]: No rules Jul 12 00:06:55.998288 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:06:56.000003 sudo[1717]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:56.159387 sshd[1696]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:56.164573 systemd[1]: sshd@5-91.99.219.165:22-139.178.68.195:34802.service: Deactivated successfully. Jul 12 00:06:56.166528 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:06:56.168042 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:06:56.169370 systemd-logind[1459]: Removed session 6. Jul 12 00:06:56.344977 systemd[1]: Started sshd@6-91.99.219.165:22-139.178.68.195:34808.service - OpenSSH per-connection server daemon (139.178.68.195:34808). Jul 12 00:06:57.318543 sshd[1747]: Accepted publickey for core from 139.178.68.195 port 34808 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:06:57.320769 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:57.325671 systemd-logind[1459]: New session 7 of user core. Jul 12 00:06:57.339551 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:06:57.839412 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:06:57.839706 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:58.131637 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:06:58.133043 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:06:58.386371 dockerd[1765]: time="2025-07-12T00:06:58.385847726Z" level=info msg="Starting up" Jul 12 00:06:58.460975 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3268893109-merged.mount: Deactivated successfully. Jul 12 00:06:58.478930 dockerd[1765]: time="2025-07-12T00:06:58.478596786Z" level=info msg="Loading containers: start." Jul 12 00:06:58.585255 kernel: Initializing XFRM netlink socket Jul 12 00:06:58.679519 systemd-networkd[1379]: docker0: Link UP Jul 12 00:06:58.697793 dockerd[1765]: time="2025-07-12T00:06:58.697621670Z" level=info msg="Loading containers: done." 
Jul 12 00:06:58.714258 dockerd[1765]: time="2025-07-12T00:06:58.713821365Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:06:58.714258 dockerd[1765]: time="2025-07-12T00:06:58.713964675Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:06:58.714258 dockerd[1765]: time="2025-07-12T00:06:58.714141953Z" level=info msg="Daemon has completed initialization" Jul 12 00:06:58.751692 dockerd[1765]: time="2025-07-12T00:06:58.751479649Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:06:58.752727 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:06:59.454264 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1150431756-merged.mount: Deactivated successfully. Jul 12 00:06:59.828073 containerd[1476]: time="2025-07-12T00:06:59.828035317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 12 00:07:00.531431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958236757.mount: Deactivated successfully. Jul 12 00:07:02.252641 containerd[1476]: time="2025-07-12T00:07:02.252575516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:02.254411 containerd[1476]: time="2025-07-12T00:07:02.254144319Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328286" Jul 12 00:07:02.255538 containerd[1476]: time="2025-07-12T00:07:02.255473970Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:02.259753 containerd[1476]: time="2025-07-12T00:07:02.259685564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:02.262124 containerd[1476]: time="2025-07-12T00:07:02.261781566Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.433704639s" Jul 12 00:07:02.262124 containerd[1476]: time="2025-07-12T00:07:02.261822464Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 12 00:07:02.262805 containerd[1476]: time="2025-07-12T00:07:02.262694879Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 12 00:07:03.539917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 12 00:07:03.548555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:03.666421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:07:03.671457 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:03.717220 kubelet[1964]: E0712 00:07:03.717134 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:03.719859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:03.720057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:04.290889 containerd[1476]: time="2025-07-12T00:07:04.289343060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:04.290889 containerd[1476]: time="2025-07-12T00:07:04.290625177Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529248" Jul 12 00:07:04.291418 containerd[1476]: time="2025-07-12T00:07:04.291325651Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:04.296388 containerd[1476]: time="2025-07-12T00:07:04.296311935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:04.298260 containerd[1476]: time="2025-07-12T00:07:04.298188129Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 2.035230349s" Jul 12 00:07:04.298428 containerd[1476]: time="2025-07-12T00:07:04.298404161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 12 00:07:04.299259 containerd[1476]: time="2025-07-12T00:07:04.299200356Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 12 00:07:05.948819 containerd[1476]: time="2025-07-12T00:07:05.948722141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:05.950330 containerd[1476]: time="2025-07-12T00:07:05.950240199Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484161" Jul 12 00:07:05.951513 containerd[1476]: time="2025-07-12T00:07:05.951448727Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:05.954973 containerd[1476]: time="2025-07-12T00:07:05.954907931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 12 00:07:05.956738 containerd[1476]: time="2025-07-12T00:07:05.956446342Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.657181652s" Jul 12 00:07:05.956738 containerd[1476]: time="2025-07-12T00:07:05.956495124Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 12 00:07:05.957150 containerd[1476]: time="2025-07-12T00:07:05.957112504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 12 00:07:06.915938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835071089.mount: Deactivated successfully. Jul 12 00:07:07.302860 containerd[1476]: time="2025-07-12T00:07:07.302792405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:07.304479 containerd[1476]: time="2025-07-12T00:07:07.304412322Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378432" Jul 12 00:07:07.305245 containerd[1476]: time="2025-07-12T00:07:07.305145402Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:07.307369 containerd[1476]: time="2025-07-12T00:07:07.307293814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:07.308031 containerd[1476]: time="2025-07-12T00:07:07.307863499Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.350709369s" Jul 12 00:07:07.308031 containerd[1476]: time="2025-07-12T00:07:07.307906927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 12 00:07:07.308802 containerd[1476]: time="2025-07-12T00:07:07.308621571Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:07:07.852634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3651695642.mount: Deactivated successfully. 
Jul 12 00:07:08.578133 containerd[1476]: time="2025-07-12T00:07:08.578074561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:08.579532 containerd[1476]: time="2025-07-12T00:07:08.579491302Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jul 12 00:07:08.580247 containerd[1476]: time="2025-07-12T00:07:08.580017816Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:08.583476 containerd[1476]: time="2025-07-12T00:07:08.583418803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:08.585200 containerd[1476]: time="2025-07-12T00:07:08.584860098Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.276207534s" Jul 12 00:07:08.585200 containerd[1476]: time="2025-07-12T00:07:08.584899688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:07:08.585540 containerd[1476]: time="2025-07-12T00:07:08.585520180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:07:09.140012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555605229.mount: Deactivated successfully. 
Jul 12 00:07:09.148001 containerd[1476]: time="2025-07-12T00:07:09.147909104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:09.149231 containerd[1476]: time="2025-07-12T00:07:09.149144485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jul 12 00:07:09.150601 containerd[1476]: time="2025-07-12T00:07:09.150542872Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:09.154035 containerd[1476]: time="2025-07-12T00:07:09.153976193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:09.155250 containerd[1476]: time="2025-07-12T00:07:09.154555992Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 568.940315ms" Jul 12 00:07:09.155250 containerd[1476]: time="2025-07-12T00:07:09.154590225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:07:09.155701 containerd[1476]: time="2025-07-12T00:07:09.155677157Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 12 00:07:09.740903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232246133.mount: Deactivated successfully. Jul 12 00:07:12.832275 containerd[1476]: time="2025-07-12T00:07:12.832089771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:12.834236 containerd[1476]: time="2025-07-12T00:07:12.833869122Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537" Jul 12 00:07:12.835608 containerd[1476]: time="2025-07-12T00:07:12.835569963Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:12.839649 containerd[1476]: time="2025-07-12T00:07:12.839583080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:12.841356 containerd[1476]: time="2025-07-12T00:07:12.840898896Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.685185305s" Jul 12 00:07:12.841356 containerd[1476]: time="2025-07-12T00:07:12.840946289Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 12 00:07:13.789900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
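Both kubelet failures so far (PIDs 1964 and 2124) are the same pre-bootstrap condition, not a regression: the unit starts before anything has written /var/lib/kubelet/config.yaml, exits 1, and systemd keeps scheduling restarts until the file exists. For orientation, an abridged sketch of the kind of KubeletConfiguration that eventually lands there (on a kubeadm-style node the real file is generated, not written by hand):

    # /var/lib/kubelet/config.yaml (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # matches the CgroupDriver in the nodeConfig dump later in this log
    staticPodPath: /etc/kubernetes/manifests  # where the control-plane static pods come from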
Jul 12 00:07:13.800939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:13.919400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:13.925070 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:13.965301 kubelet[2124]: E0712 00:07:13.965243 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:13.968959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:13.969169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:18.220484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:18.227632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:18.260765 systemd[1]: Reloading requested from client PID 2139 ('systemctl') (unit session-7.scope)... Jul 12 00:07:18.261068 systemd[1]: Reloading... Jul 12 00:07:18.388237 zram_generator::config[2179]: No configuration found. Jul 12 00:07:18.505316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:18.576952 systemd[1]: Reloading finished in 315 ms. Jul 12 00:07:18.636416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:18.638499 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:18.642225 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:18.643982 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:07:18.644276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:18.646176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:18.781502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:18.786233 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:18.826055 kubelet[2231]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:18.826055 kubelet[2231]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:18.826055 kubelet[2231]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
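The docker.socket complaint during the reload is systemd rewriting a legacy /var/run path on the fly; it recurs on every daemon-reload until the unit is overridden. On Flatcar, where /usr is read-only, the durable fix is a drop-in rather than editing the shipped unit (a sketch; the drop-in file name is arbitrary):

    sudo mkdir -p /etc/systemd/system/docker.socket.d
    printf '[Socket]\nListenStream=\nListenStream=/run/docker.sock\n' \
        | sudo tee /etc/systemd/system/docker.socket.d/10-runpath.conf
    sudo systemctl daemon-reload
    # the empty ListenStream= clears the inherited value before assigning the new path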
Jul 12 00:07:18.826462 kubelet[2231]: I0712 00:07:18.826111 2231 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:19.532424 kubelet[2231]: I0712 00:07:19.532327 2231 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:07:19.532424 kubelet[2231]: I0712 00:07:19.532378 2231 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:19.532851 kubelet[2231]: I0712 00:07:19.532799 2231 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:07:19.569858 kubelet[2231]: E0712 00:07:19.569807 2231 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.219.165:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:19.572790 kubelet[2231]: I0712 00:07:19.572291 2231 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:19.578243 kubelet[2231]: E0712 00:07:19.577934 2231 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:19.578243 kubelet[2231]: I0712 00:07:19.577982 2231 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:19.581601 kubelet[2231]: I0712 00:07:19.581457 2231 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:07:19.583580 kubelet[2231]: I0712 00:07:19.582655 2231 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:19.583580 kubelet[2231]: I0712 00:07:19.582708 2231 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-n-f6981960e0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:07:19.583580 kubelet[2231]: I0712 00:07:19.583057 2231 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:19.583580 kubelet[2231]: I0712 00:07:19.583070 2231 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:07:19.583804 kubelet[2231]: I0712 00:07:19.583349 2231 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:19.587140 kubelet[2231]: I0712 00:07:19.587098 2231 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:07:19.587140 kubelet[2231]: I0712 00:07:19.587139 2231 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:19.588468 kubelet[2231]: I0712 00:07:19.587163 2231 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:07:19.588468 kubelet[2231]: I0712 00:07:19.587174 2231 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:19.594059 kubelet[2231]: W0712 00:07:19.593914 2231 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.219.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.219.165:6443: connect: connection refused Jul 12 00:07:19.594059 kubelet[2231]: E0712 00:07:19.594022 2231 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.219.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:19.594697 kubelet[2231]: I0712 
00:07:19.594305 2231 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:19.595572 kubelet[2231]: I0712 00:07:19.595550 2231 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:07:19.595782 kubelet[2231]: W0712 00:07:19.595770 2231 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:07:19.597019 kubelet[2231]: W0712 00:07:19.596960 2231 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.219.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-f6981960e0&limit=500&resourceVersion=0": dial tcp 91.99.219.165:6443: connect: connection refused Jul 12 00:07:19.597118 kubelet[2231]: E0712 00:07:19.597043 2231 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.219.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-f6981960e0&limit=500&resourceVersion=0\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:19.598089 kubelet[2231]: I0712 00:07:19.597693 2231 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:07:19.598089 kubelet[2231]: I0712 00:07:19.597751 2231 server.go:1287] "Started kubelet" Jul 12 00:07:19.598089 kubelet[2231]: I0712 00:07:19.597890 2231 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:07:19.599038 kubelet[2231]: I0712 00:07:19.598764 2231 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:07:19.601561 kubelet[2231]: I0712 00:07:19.601503 2231 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:19.603821 kubelet[2231]: I0712 00:07:19.603731 2231 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:07:19.604302 kubelet[2231]: I0712 00:07:19.604277 2231 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:19.607366 kubelet[2231]: E0712 00:07:19.607111 2231 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.219.165:6443/api/v1/namespaces/default/events\": dial tcp 91.99.219.165:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-n-f6981960e0.1851585215a7ecfd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-n-f6981960e0,UID:ci-4081-3-4-n-f6981960e0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-n-f6981960e0,},FirstTimestamp:2025-07-12 00:07:19.597722877 +0000 UTC m=+0.807855180,LastTimestamp:2025-07-12 00:07:19.597722877 +0000 UTC m=+0.807855180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-n-f6981960e0,}" Jul 12 00:07:19.608230 kubelet[2231]: I0712 00:07:19.607582 2231 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:07:19.608230 kubelet[2231]: I0712 00:07:19.607752 2231 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:19.608230 kubelet[2231]: E0712 00:07:19.608085 
2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-f6981960e0\" not found" Jul 12 00:07:19.610050 kubelet[2231]: I0712 00:07:19.610020 2231 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:07:19.610159 kubelet[2231]: I0712 00:07:19.610093 2231 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:19.611817 kubelet[2231]: W0712 00:07:19.611776 2231 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.219.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.219.165:6443: connect: connection refused Jul 12 00:07:19.611980 kubelet[2231]: E0712 00:07:19.611961 2231 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.219.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:19.612130 kubelet[2231]: E0712 00:07:19.612098 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.219.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-f6981960e0?timeout=10s\": dial tcp 91.99.219.165:6443: connect: connection refused" interval="200ms" Jul 12 00:07:19.612430 kubelet[2231]: I0712 00:07:19.612410 2231 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:19.612628 kubelet[2231]: I0712 00:07:19.612607 2231 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:19.615003 kubelet[2231]: I0712 00:07:19.614982 2231 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:19.622651 kubelet[2231]: I0712 00:07:19.622595 2231 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:19.623765 kubelet[2231]: I0712 00:07:19.623728 2231 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:07:19.623765 kubelet[2231]: I0712 00:07:19.623761 2231 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:07:19.623872 kubelet[2231]: I0712 00:07:19.623785 2231 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
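Every "dial tcp 91.99.219.165:6443: connect: connection refused" in this stretch has one cause: the kubelet is up but the kube-apiserver it is about to launch as a static pod is not, so the client-go reflectors, the lease controller, and event posting all fail and retry. Once the apiserver container is running, a probe like the following succeeds (sketch; -k skips verification of the not-yet-distributed serving certificate):

    curl -k https://91.99.219.165:6443/healthz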
Jul 12 00:07:19.623872 kubelet[2231]: I0712 00:07:19.623792 2231 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:07:19.623872 kubelet[2231]: E0712 00:07:19.623834 2231 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:19.632893 kubelet[2231]: W0712 00:07:19.632724 2231 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.219.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.219.165:6443: connect: connection refused Jul 12 00:07:19.632893 kubelet[2231]: E0712 00:07:19.632807 2231 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.219.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:19.635279 kubelet[2231]: E0712 00:07:19.635236 2231 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:07:19.641561 kubelet[2231]: I0712 00:07:19.641470 2231 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:07:19.641561 kubelet[2231]: I0712 00:07:19.641483 2231 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:19.641561 kubelet[2231]: I0712 00:07:19.641501 2231 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:19.644383 kubelet[2231]: I0712 00:07:19.644092 2231 policy_none.go:49] "None policy: Start" Jul 12 00:07:19.644383 kubelet[2231]: I0712 00:07:19.644116 2231 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:07:19.644383 kubelet[2231]: I0712 00:07:19.644128 2231 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:19.655138 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:07:19.669602 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:07:19.675825 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:07:19.692768 kubelet[2231]: I0712 00:07:19.692421 2231 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:19.692768 kubelet[2231]: I0712 00:07:19.692752 2231 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:19.692988 kubelet[2231]: I0712 00:07:19.692773 2231 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:19.694609 kubelet[2231]: I0712 00:07:19.694178 2231 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:19.696602 kubelet[2231]: E0712 00:07:19.696456 2231 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:07:19.696602 kubelet[2231]: E0712 00:07:19.696513 2231 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-n-f6981960e0\" not found" Jul 12 00:07:19.738473 systemd[1]: Created slice kubepods-burstable-pod556a5b967f4fc11cbe925e5ca942e407.slice - libcontainer container kubepods-burstable-pod556a5b967f4fc11cbe925e5ca942e407.slice. Jul 12 00:07:19.748117 kubelet[2231]: E0712 00:07:19.747722 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.752855 systemd[1]: Created slice kubepods-burstable-pod1fe895dc943439b6b6996ef2a9df9659.slice - libcontainer container kubepods-burstable-pod1fe895dc943439b6b6996ef2a9df9659.slice. Jul 12 00:07:19.755732 kubelet[2231]: E0712 00:07:19.755688 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.758419 systemd[1]: Created slice kubepods-burstable-pod4b34889502034e70050ac8568ea3f507.slice - libcontainer container kubepods-burstable-pod4b34889502034e70050ac8568ea3f507.slice. Jul 12 00:07:19.760426 kubelet[2231]: E0712 00:07:19.760396 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.795091 kubelet[2231]: I0712 00:07:19.795016 2231 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.795640 kubelet[2231]: E0712 00:07:19.795489 2231 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.219.165:6443/api/v1/nodes\": dial tcp 91.99.219.165:6443: connect: connection refused" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.811607 kubelet[2231]: I0712 00:07:19.811454 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/556a5b967f4fc11cbe925e5ca942e407-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-n-f6981960e0\" (UID: \"556a5b967f4fc11cbe925e5ca942e407\") " pod="kube-system/kube-scheduler-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.811607 kubelet[2231]: I0712 00:07:19.811499 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b34889502034e70050ac8568ea3f507-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-n-f6981960e0\" (UID: \"4b34889502034e70050ac8568ea3f507\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.811607 kubelet[2231]: I0712 00:07:19.811543 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.811607 kubelet[2231]: I0712 00:07:19.811576 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.812074 kubelet[2231]: I0712 00:07:19.811867 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.812074 kubelet[2231]: I0712 00:07:19.811916 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.812074 kubelet[2231]: I0712 00:07:19.811956 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b34889502034e70050ac8568ea3f507-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-n-f6981960e0\" (UID: \"4b34889502034e70050ac8568ea3f507\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.812074 kubelet[2231]: I0712 00:07:19.811994 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b34889502034e70050ac8568ea3f507-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-n-f6981960e0\" (UID: \"4b34889502034e70050ac8568ea3f507\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.812074 kubelet[2231]: I0712 00:07:19.812020 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.812915 kubelet[2231]: E0712 00:07:19.812870 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.219.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-f6981960e0?timeout=10s\": dial tcp 91.99.219.165:6443: connect: connection refused" interval="400ms" Jul 12 00:07:19.998762 kubelet[2231]: I0712 00:07:19.998660 2231 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:19.999251 kubelet[2231]: E0712 00:07:19.999020 2231 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.219.165:6443/api/v1/nodes\": dial tcp 91.99.219.165:6443: connect: connection refused" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:20.050942 containerd[1476]: time="2025-07-12T00:07:20.050347647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-n-f6981960e0,Uid:556a5b967f4fc11cbe925e5ca942e407,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:20.057358 containerd[1476]: time="2025-07-12T00:07:20.057293454Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-n-f6981960e0,Uid:1fe895dc943439b6b6996ef2a9df9659,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:20.062513 containerd[1476]: time="2025-07-12T00:07:20.062100898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-n-f6981960e0,Uid:4b34889502034e70050ac8568ea3f507,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:20.214062 kubelet[2231]: E0712 00:07:20.213952 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.219.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-f6981960e0?timeout=10s\": dial tcp 91.99.219.165:6443: connect: connection refused" interval="800ms" Jul 12 00:07:20.402014 kubelet[2231]: I0712 00:07:20.401833 2231 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:20.402499 kubelet[2231]: E0712 00:07:20.402465 2231 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.219.165:6443/api/v1/nodes\": dial tcp 91.99.219.165:6443: connect: connection refused" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:20.493579 kubelet[2231]: W0712 00:07:20.493380 2231 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.219.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.219.165:6443: connect: connection refused Jul 12 00:07:20.493579 kubelet[2231]: E0712 00:07:20.493492 2231 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.219.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:20.602085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969102805.mount: Deactivated successfully. 
Jul 12 00:07:20.608261 containerd[1476]: time="2025-07-12T00:07:20.608159575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:20.610849 containerd[1476]: time="2025-07-12T00:07:20.610803297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:20.613106 containerd[1476]: time="2025-07-12T00:07:20.613060259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jul 12 00:07:20.614204 containerd[1476]: time="2025-07-12T00:07:20.614165541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:07:20.615787 containerd[1476]: time="2025-07-12T00:07:20.615721662Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:20.617202 containerd[1476]: time="2025-07-12T00:07:20.617106383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:07:20.617325 containerd[1476]: time="2025-07-12T00:07:20.617267543Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:20.621582 containerd[1476]: time="2025-07-12T00:07:20.621400267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:20.624172 containerd[1476]: time="2025-07-12T00:07:20.623934030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.473743ms" Jul 12 00:07:20.624709 containerd[1476]: time="2025-07-12T00:07:20.624643390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.381852ms" Jul 12 00:07:20.626671 containerd[1476]: time="2025-07-12T00:07:20.626411672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.988738ms" Jul 12 00:07:20.632777 kubelet[2231]: W0712 00:07:20.632674 2231 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.219.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.219.165:6443: connect: connection refused Jul 12 
00:07:20.632869 kubelet[2231]: E0712 00:07:20.632785 2231 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.219.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:20.764454 containerd[1476]: time="2025-07-12T00:07:20.763931322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:20.764454 containerd[1476]: time="2025-07-12T00:07:20.763991362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:20.764454 containerd[1476]: time="2025-07-12T00:07:20.764007322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:20.764667 containerd[1476]: time="2025-07-12T00:07:20.764379923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:20.770973 containerd[1476]: time="2025-07-12T00:07:20.770712889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:20.770973 containerd[1476]: time="2025-07-12T00:07:20.770784009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:20.770973 containerd[1476]: time="2025-07-12T00:07:20.770799369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:20.770973 containerd[1476]: time="2025-07-12T00:07:20.770637289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:20.770973 containerd[1476]: time="2025-07-12T00:07:20.770750929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:20.770973 containerd[1476]: time="2025-07-12T00:07:20.770767889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:20.771578 containerd[1476]: time="2025-07-12T00:07:20.771028529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:20.772444 containerd[1476]: time="2025-07-12T00:07:20.771176609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:20.799447 systemd[1]: Started cri-containerd-0f936b6283f602e818ea118388bc228749944123e77afb0ed639dcb62f96585a.scope - libcontainer container 0f936b6283f602e818ea118388bc228749944123e77afb0ed639dcb62f96585a. Jul 12 00:07:20.802709 systemd[1]: Started cri-containerd-8c622f2bc8d3af2caa95fab028ac675182cf1afaa4ca1a25f64286f729188284.scope - libcontainer container 8c622f2bc8d3af2caa95fab028ac675182cf1afaa4ca1a25f64286f729188284. 
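Each triplet of "loading plugin io.containerd.*" lines is one runc shim starting for one sandbox, and the cri-containerd-<id>.scope units are how systemd tracks the resulting libcontainer containers. The same state is visible through the CRI socket, assuming crictl is installed and pointed at containerd's default endpoint:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a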
Jul 12 00:07:20.808795 systemd[1]: Started cri-containerd-3e9c858037753c3193eb2e7e5ab5b27f9b819cf6ab92a156520e3403524a2171.scope - libcontainer container 3e9c858037753c3193eb2e7e5ab5b27f9b819cf6ab92a156520e3403524a2171. Jul 12 00:07:20.854693 containerd[1476]: time="2025-07-12T00:07:20.854591768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-n-f6981960e0,Uid:1fe895dc943439b6b6996ef2a9df9659,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c622f2bc8d3af2caa95fab028ac675182cf1afaa4ca1a25f64286f729188284\"" Jul 12 00:07:20.862575 containerd[1476]: time="2025-07-12T00:07:20.862456976Z" level=info msg="CreateContainer within sandbox \"8c622f2bc8d3af2caa95fab028ac675182cf1afaa4ca1a25f64286f729188284\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:07:20.870841 containerd[1476]: time="2025-07-12T00:07:20.870776903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-n-f6981960e0,Uid:556a5b967f4fc11cbe925e5ca942e407,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f936b6283f602e818ea118388bc228749944123e77afb0ed639dcb62f96585a\"" Jul 12 00:07:20.875046 containerd[1476]: time="2025-07-12T00:07:20.874980307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-n-f6981960e0,Uid:4b34889502034e70050ac8568ea3f507,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e9c858037753c3193eb2e7e5ab5b27f9b819cf6ab92a156520e3403524a2171\"" Jul 12 00:07:20.876205 containerd[1476]: time="2025-07-12T00:07:20.876159788Z" level=info msg="CreateContainer within sandbox \"0f936b6283f602e818ea118388bc228749944123e77afb0ed639dcb62f96585a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:07:20.878484 containerd[1476]: time="2025-07-12T00:07:20.878359951Z" level=info msg="CreateContainer within sandbox \"3e9c858037753c3193eb2e7e5ab5b27f9b819cf6ab92a156520e3403524a2171\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:07:20.884645 containerd[1476]: time="2025-07-12T00:07:20.884589556Z" level=info msg="CreateContainer within sandbox \"8c622f2bc8d3af2caa95fab028ac675182cf1afaa4ca1a25f64286f729188284\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2\"" Jul 12 00:07:20.886134 containerd[1476]: time="2025-07-12T00:07:20.885701598Z" level=info msg="StartContainer for \"65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2\"" Jul 12 00:07:20.904177 containerd[1476]: time="2025-07-12T00:07:20.904133295Z" level=info msg="CreateContainer within sandbox \"3e9c858037753c3193eb2e7e5ab5b27f9b819cf6ab92a156520e3403524a2171\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7427922df57548fd9a25a3d7ff1fc164b43f0ec77f0d6a96a45cd980f397cdbf\"" Jul 12 00:07:20.905381 containerd[1476]: time="2025-07-12T00:07:20.905338496Z" level=info msg="StartContainer for \"7427922df57548fd9a25a3d7ff1fc164b43f0ec77f0d6a96a45cd980f397cdbf\"" Jul 12 00:07:20.909167 containerd[1476]: time="2025-07-12T00:07:20.909128740Z" level=info msg="CreateContainer within sandbox \"0f936b6283f602e818ea118388bc228749944123e77afb0ed639dcb62f96585a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43\"" Jul 12 00:07:20.911314 containerd[1476]: time="2025-07-12T00:07:20.910341061Z" level=info msg="StartContainer for 
\"11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43\"" Jul 12 00:07:20.927441 systemd[1]: Started cri-containerd-65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2.scope - libcontainer container 65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2. Jul 12 00:07:20.930553 kubelet[2231]: W0712 00:07:20.930480 2231 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.219.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-f6981960e0&limit=500&resourceVersion=0": dial tcp 91.99.219.165:6443: connect: connection refused Jul 12 00:07:20.930812 kubelet[2231]: E0712 00:07:20.930784 2231 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.219.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-f6981960e0&limit=500&resourceVersion=0\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:20.957595 systemd[1]: Started cri-containerd-7427922df57548fd9a25a3d7ff1fc164b43f0ec77f0d6a96a45cd980f397cdbf.scope - libcontainer container 7427922df57548fd9a25a3d7ff1fc164b43f0ec77f0d6a96a45cd980f397cdbf. Jul 12 00:07:20.969439 systemd[1]: Started cri-containerd-11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43.scope - libcontainer container 11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43. Jul 12 00:07:20.996849 containerd[1476]: time="2025-07-12T00:07:20.996801343Z" level=info msg="StartContainer for \"65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2\" returns successfully" Jul 12 00:07:21.015436 kubelet[2231]: E0712 00:07:21.015130 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.219.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-f6981960e0?timeout=10s\": dial tcp 91.99.219.165:6443: connect: connection refused" interval="1.6s" Jul 12 00:07:21.025187 containerd[1476]: time="2025-07-12T00:07:21.024775528Z" level=info msg="StartContainer for \"11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43\" returns successfully" Jul 12 00:07:21.031958 containerd[1476]: time="2025-07-12T00:07:21.031601294Z" level=info msg="StartContainer for \"7427922df57548fd9a25a3d7ff1fc164b43f0ec77f0d6a96a45cd980f397cdbf\" returns successfully" Jul 12 00:07:21.149492 kubelet[2231]: W0712 00:07:21.149376 2231 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.219.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.219.165:6443: connect: connection refused Jul 12 00:07:21.149492 kubelet[2231]: E0712 00:07:21.149458 2231 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.219.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.219.165:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:21.206225 kubelet[2231]: I0712 00:07:21.204299 2231 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:21.646054 kubelet[2231]: E0712 00:07:21.645826 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 
00:07:21.653620 kubelet[2231]: E0712 00:07:21.653579 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:21.661651 kubelet[2231]: E0712 00:07:21.661474 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:22.665252 kubelet[2231]: E0712 00:07:22.664843 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:22.665252 kubelet[2231]: E0712 00:07:22.664886 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.527844 kubelet[2231]: E0712 00:07:23.527652 2231 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.594361 kubelet[2231]: I0712 00:07:23.594325 2231 apiserver.go:52] "Watching apiserver" Jul 12 00:07:23.691598 kubelet[2231]: E0712 00:07:23.691542 2231 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-n-f6981960e0\" not found" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.710307 kubelet[2231]: I0712 00:07:23.710246 2231 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:07:23.845315 kubelet[2231]: I0712 00:07:23.845272 2231 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.909906 kubelet[2231]: I0712 00:07:23.909851 2231 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.922154 kubelet[2231]: E0712 00:07:23.922065 2231 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.922154 kubelet[2231]: I0712 00:07:23.922155 2231 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.925496 kubelet[2231]: E0712 00:07:23.925446 2231 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-n-f6981960e0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.925496 kubelet[2231]: I0712 00:07:23.925477 2231 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:23.929855 kubelet[2231]: E0712 00:07:23.929815 2231 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-n-f6981960e0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:25.750915 systemd[1]: Reloading requested from client PID 2501 ('systemctl') (unit session-7.scope)... Jul 12 00:07:25.750937 systemd[1]: Reloading... 
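The "no PriorityClass with name system-node-critical was found" failures a few entries up are a startup race rather than a configuration error: the apiserver creates the two built-in priority classes shortly after it begins serving, and mirror-pod creation is retried until then. Once the cluster answers, this is easy to confirm (assuming an admin kubeconfig; the VALUE column shows the built-in priorities):

    kubectl get priorityclass
    # NAME                      VALUE
    # system-cluster-critical   2000000000
    # system-node-critical      2000001000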
Jul 12 00:07:25.854289 zram_generator::config[2544]: No configuration found. Jul 12 00:07:25.946550 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:26.033380 systemd[1]: Reloading finished in 282 ms. Jul 12 00:07:26.074589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:26.089052 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:07:26.089386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:26.089456 systemd[1]: kubelet.service: Consumed 1.253s CPU time, 127.4M memory peak, 0B memory swap peak. Jul 12 00:07:26.093790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:26.213165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:26.223871 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:26.284008 kubelet[2586]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:26.284008 kubelet[2586]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:26.284008 kubelet[2586]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:26.284008 kubelet[2586]: I0712 00:07:26.283636 2586 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:26.293694 kubelet[2586]: I0712 00:07:26.293623 2586 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:07:26.293694 kubelet[2586]: I0712 00:07:26.293660 2586 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:26.296140 kubelet[2586]: I0712 00:07:26.294062 2586 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:07:26.296611 kubelet[2586]: I0712 00:07:26.296581 2586 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:07:26.301014 kubelet[2586]: I0712 00:07:26.300982 2586 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:26.304906 kubelet[2586]: E0712 00:07:26.304858 2586 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:26.305244 kubelet[2586]: I0712 00:07:26.305184 2586 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:26.309527 kubelet[2586]: I0712 00:07:26.309505 2586 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:07:26.309872 kubelet[2586]: I0712 00:07:26.309843 2586 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:26.310125 kubelet[2586]: I0712 00:07:26.309950 2586 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-n-f6981960e0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:07:26.310288 kubelet[2586]: I0712 00:07:26.310274 2586 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:26.310354 kubelet[2586]: I0712 00:07:26.310346 2586 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:07:26.310455 kubelet[2586]: I0712 00:07:26.310446 2586 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:26.310686 kubelet[2586]: I0712 00:07:26.310676 2586 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:07:26.311322 kubelet[2586]: I0712 00:07:26.311304 2586 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:26.314320 kubelet[2586]: I0712 00:07:26.314301 2586 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:07:26.314421 kubelet[2586]: I0712 00:07:26.314411 2586 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:26.316022 kubelet[2586]: I0712 00:07:26.316003 2586 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:26.318214 kubelet[2586]: I0712 00:07:26.316582 2586 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:07:26.318816 kubelet[2586]: I0712 00:07:26.318798 2586 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:07:26.318917 kubelet[2586]: I0712 00:07:26.318908 2586 server.go:1287] "Started kubelet" Jul 12 00:07:26.320478 kubelet[2586]: I0712 00:07:26.320438 2586 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:26.320829 kubelet[2586]: I0712 00:07:26.320796 2586 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:07:26.324216 kubelet[2586]: I0712 00:07:26.321796 2586 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:07:26.331147 kubelet[2586]: I0712 00:07:26.330884 2586 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:07:26.333451 kubelet[2586]: I0712 00:07:26.333431 2586 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:26.334730 kubelet[2586]: I0712 00:07:26.334689 2586 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:07:26.334824 kubelet[2586]: I0712 00:07:26.331989 2586 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:26.335147 kubelet[2586]: E0712 00:07:26.335106 2586 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-f6981960e0\" not found" Jul 12 00:07:26.339218 kubelet[2586]: I0712 00:07:26.335808 2586 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:07:26.339218 kubelet[2586]: I0712 00:07:26.335969 2586 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:26.341394 kubelet[2586]: I0712 00:07:26.341363 2586 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:26.341540 kubelet[2586]: I0712 00:07:26.341513 2586 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:26.368990 kubelet[2586]: I0712 00:07:26.368887 2586 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:26.371318 kubelet[2586]: I0712 00:07:26.371286 2586 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:07:26.371482 kubelet[2586]: I0712 00:07:26.371456 2586 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:07:26.371597 kubelet[2586]: I0712 00:07:26.371545 2586 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:07:26.371655 kubelet[2586]: I0712 00:07:26.371647 2586 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:07:26.371770 kubelet[2586]: E0712 00:07:26.371751 2586 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:26.380976 kubelet[2586]: I0712 00:07:26.380932 2586 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:26.428705 kubelet[2586]: I0712 00:07:26.428674 2586 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:07:26.428705 kubelet[2586]: I0712 00:07:26.428696 2586 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:26.428705 kubelet[2586]: I0712 00:07:26.428720 2586 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:26.428906 kubelet[2586]: I0712 00:07:26.428889 2586 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:07:26.428951 kubelet[2586]: I0712 00:07:26.428905 2586 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:07:26.428976 kubelet[2586]: I0712 00:07:26.428955 2586 policy_none.go:49] "None policy: Start" Jul 12 00:07:26.428976 kubelet[2586]: I0712 00:07:26.428967 2586 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:07:26.429015 kubelet[2586]: I0712 00:07:26.428978 2586 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:26.429094 kubelet[2586]: I0712 00:07:26.429083 2586 state_mem.go:75] "Updated machine memory state" Jul 12 00:07:26.435146 kubelet[2586]: I0712 00:07:26.434507 2586 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:26.435146 kubelet[2586]: I0712 00:07:26.434689 2586 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:26.435146 kubelet[2586]: I0712 00:07:26.434700 2586 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:26.435146 kubelet[2586]: I0712 00:07:26.434918 2586 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:26.438472 kubelet[2586]: E0712 00:07:26.438435 2586 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:07:26.473865 kubelet[2586]: I0712 00:07:26.473375 2586 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.473865 kubelet[2586]: I0712 00:07:26.473430 2586 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.475632 kubelet[2586]: I0712 00:07:26.475493 2586 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.537976 kubelet[2586]: I0712 00:07:26.537784 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.537976 kubelet[2586]: I0712 00:07:26.537837 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b34889502034e70050ac8568ea3f507-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-n-f6981960e0\" (UID: \"4b34889502034e70050ac8568ea3f507\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.537976 kubelet[2586]: I0712 00:07:26.537856 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.537976 kubelet[2586]: I0712 00:07:26.537875 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.537976 kubelet[2586]: I0712 00:07:26.537892 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.538456 kubelet[2586]: I0712 00:07:26.537908 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b34889502034e70050ac8568ea3f507-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-n-f6981960e0\" (UID: \"4b34889502034e70050ac8568ea3f507\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.538456 kubelet[2586]: I0712 00:07:26.537924 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b34889502034e70050ac8568ea3f507-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-n-f6981960e0\" (UID: \"4b34889502034e70050ac8568ea3f507\") " 
pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.538456 kubelet[2586]: I0712 00:07:26.537939 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fe895dc943439b6b6996ef2a9df9659-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-f6981960e0\" (UID: \"1fe895dc943439b6b6996ef2a9df9659\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.538456 kubelet[2586]: I0712 00:07:26.537957 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/556a5b967f4fc11cbe925e5ca942e407-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-n-f6981960e0\" (UID: \"556a5b967f4fc11cbe925e5ca942e407\") " pod="kube-system/kube-scheduler-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.541562 kubelet[2586]: I0712 00:07:26.541450 2586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.552442 kubelet[2586]: I0712 00:07:26.552390 2586 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:26.552675 kubelet[2586]: I0712 00:07:26.552534 2586 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-n-f6981960e0" Jul 12 00:07:27.315667 kubelet[2586]: I0712 00:07:27.315570 2586 apiserver.go:52] "Watching apiserver" Jul 12 00:07:27.336236 kubelet[2586]: I0712 00:07:27.336180 2586 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:07:27.405787 kubelet[2586]: I0712 00:07:27.405701 2586 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:27.413994 kubelet[2586]: E0712 00:07:27.413954 2586 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-n-f6981960e0\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" Jul 12 00:07:27.429963 kubelet[2586]: I0712 00:07:27.429759 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-n-f6981960e0" podStartSLOduration=1.42972992 podStartE2EDuration="1.42972992s" podCreationTimestamp="2025-07-12 00:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:27.429394959 +0000 UTC m=+1.199247333" watchObservedRunningTime="2025-07-12 00:07:27.42972992 +0000 UTC m=+1.199582294" Jul 12 00:07:27.455488 kubelet[2586]: I0712 00:07:27.454900 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-n-f6981960e0" podStartSLOduration=1.454877176 podStartE2EDuration="1.454877176s" podCreationTimestamp="2025-07-12 00:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:27.440198126 +0000 UTC m=+1.210050540" watchObservedRunningTime="2025-07-12 00:07:27.454877176 +0000 UTC m=+1.224729590" Jul 12 00:07:27.469052 kubelet[2586]: I0712 00:07:27.468340 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-f6981960e0" podStartSLOduration=1.468326225 podStartE2EDuration="1.468326225s" podCreationTimestamp="2025-07-12 00:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:27.456729697 +0000 UTC m=+1.226582071" watchObservedRunningTime="2025-07-12 00:07:27.468326225 +0000 UTC m=+1.238178599" Jul 12 00:07:31.459496 kubelet[2586]: I0712 00:07:31.459399 2586 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:07:31.460364 containerd[1476]: time="2025-07-12T00:07:31.460322727Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:07:31.460903 kubelet[2586]: I0712 00:07:31.460583 2586 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:07:32.338761 systemd[1]: Created slice kubepods-besteffort-pod8eece415_17c9_49e0_90bb_dbd005386a85.slice - libcontainer container kubepods-besteffort-pod8eece415_17c9_49e0_90bb_dbd005386a85.slice. Jul 12 00:07:32.376445 kubelet[2586]: I0712 00:07:32.376353 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eece415-17c9-49e0-90bb-dbd005386a85-lib-modules\") pod \"kube-proxy-rsgsv\" (UID: \"8eece415-17c9-49e0-90bb-dbd005386a85\") " pod="kube-system/kube-proxy-rsgsv" Jul 12 00:07:32.376994 kubelet[2586]: I0712 00:07:32.376833 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6d5b\" (UniqueName: \"kubernetes.io/projected/8eece415-17c9-49e0-90bb-dbd005386a85-kube-api-access-v6d5b\") pod \"kube-proxy-rsgsv\" (UID: \"8eece415-17c9-49e0-90bb-dbd005386a85\") " pod="kube-system/kube-proxy-rsgsv" Jul 12 00:07:32.376994 kubelet[2586]: I0712 00:07:32.376869 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8eece415-17c9-49e0-90bb-dbd005386a85-kube-proxy\") pod \"kube-proxy-rsgsv\" (UID: \"8eece415-17c9-49e0-90bb-dbd005386a85\") " pod="kube-system/kube-proxy-rsgsv" Jul 12 00:07:32.376994 kubelet[2586]: I0712 00:07:32.376886 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eece415-17c9-49e0-90bb-dbd005386a85-xtables-lock\") pod \"kube-proxy-rsgsv\" (UID: \"8eece415-17c9-49e0-90bb-dbd005386a85\") " pod="kube-system/kube-proxy-rsgsv" Jul 12 00:07:32.617765 systemd[1]: Created slice kubepods-besteffort-pod26ab643a_92ef_499e_ba47_d2164aeca2fd.slice - libcontainer container kubepods-besteffort-pod26ab643a_92ef_499e_ba47_d2164aeca2fd.slice. 
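The node has just been handed the pod CIDR 192.168.0.0/24, which the kubelet pushes down to containerd over CRI; the CNI config itself is still pending ("wait for other system components to drop the config", presumably until the Calico components installed below write it). A standard-library sketch of what that /24 gives this node:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The node pod CIDR reported by the kubelet above.
        cidr := netip.MustParsePrefix("192.168.0.0/24")
        hostBits := 32 - cidr.Bits()
        fmt.Printf("prefix=%s usable pod IPs=%d\n", cidr, (1<<hostBits)-2) // 254 addresses per node
    }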
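Stepping back to the Container Manager nodeConfig logged at 00:07:26: its HardEvictionThresholds decode to memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15% and imagefs.inodesFree<5%, which match the kubelet's documented Linux defaults, so no explicit eviction configuration was supplied. A minimal sketch of how the same limits would be written out explicitly with the upstream KubeletConfiguration Go types (for illustration only; this node evidently relies on the compiled-in defaults):

    package main

    import (
        "fmt"

        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{
            CgroupDriver: "systemd", // matches "CgroupDriver":"systemd" in the logged nodeConfig
            EvictionHard: map[string]string{
                "memory.available":   "100Mi",
                "nodefs.available":   "10%",
                "nodefs.inodesFree":  "5%",
                "imagefs.available":  "15%",
                "imagefs.inodesFree": "5%",
            },
        }
        fmt.Printf("%+v\n", cfg.EvictionHard)
    }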
Jul 12 00:07:32.652302 containerd[1476]: time="2025-07-12T00:07:32.652189275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rsgsv,Uid:8eece415-17c9-49e0-90bb-dbd005386a85,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:32.678871 kubelet[2586]: I0712 00:07:32.678755 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26ab643a-92ef-499e-ba47-d2164aeca2fd-var-lib-calico\") pod \"tigera-operator-747864d56d-8s8j6\" (UID: \"26ab643a-92ef-499e-ba47-d2164aeca2fd\") " pod="tigera-operator/tigera-operator-747864d56d-8s8j6" Jul 12 00:07:32.678871 kubelet[2586]: I0712 00:07:32.678806 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdp96\" (UniqueName: \"kubernetes.io/projected/26ab643a-92ef-499e-ba47-d2164aeca2fd-kube-api-access-wdp96\") pod \"tigera-operator-747864d56d-8s8j6\" (UID: \"26ab643a-92ef-499e-ba47-d2164aeca2fd\") " pod="tigera-operator/tigera-operator-747864d56d-8s8j6" Jul 12 00:07:32.681295 containerd[1476]: time="2025-07-12T00:07:32.680931329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:32.681295 containerd[1476]: time="2025-07-12T00:07:32.680991569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:32.681295 containerd[1476]: time="2025-07-12T00:07:32.681007609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:32.681295 containerd[1476]: time="2025-07-12T00:07:32.681082329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:32.697355 systemd[1]: run-containerd-runc-k8s.io-ff9f1e7205d3cda090bbda086a2437b6358141f32b67ee24b370f519772962bf-runc.VY80mP.mount: Deactivated successfully. Jul 12 00:07:32.710553 systemd[1]: Started cri-containerd-ff9f1e7205d3cda090bbda086a2437b6358141f32b67ee24b370f519772962bf.scope - libcontainer container ff9f1e7205d3cda090bbda086a2437b6358141f32b67ee24b370f519772962bf. 
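The kubepods-besteffort-pod*.slice units created above follow the systemd cgroup driver's naming scheme: the pod's QoS class plus its UID, with the UID's dashes rewritten to underscores, since "-" is the nesting separator in systemd slice names. A small sketch of the mapping, using only values visible in the log:

    package main

    import (
        "fmt"
        "strings"
    )

    // besteffortSlice reproduces the unit-name pattern above: dashes in the pod
    // UID become underscores because "-" denotes hierarchy in slice names.
    func besteffortSlice(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(besteffortSlice("8eece415-17c9-49e0-90bb-dbd005386a85"))
        // Output: kubepods-besteffort-pod8eece415_17c9_49e0_90bb_dbd005386a85.slice
    }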
Jul 12 00:07:32.738014 containerd[1476]: time="2025-07-12T00:07:32.737946279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rsgsv,Uid:8eece415-17c9-49e0-90bb-dbd005386a85,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff9f1e7205d3cda090bbda086a2437b6358141f32b67ee24b370f519772962bf\"" Jul 12 00:07:32.744502 containerd[1476]: time="2025-07-12T00:07:32.744273042Z" level=info msg="CreateContainer within sandbox \"ff9f1e7205d3cda090bbda086a2437b6358141f32b67ee24b370f519772962bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:07:32.761273 containerd[1476]: time="2025-07-12T00:07:32.761120091Z" level=info msg="CreateContainer within sandbox \"ff9f1e7205d3cda090bbda086a2437b6358141f32b67ee24b370f519772962bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d9df56594168444d7509269baddd4641607edd3da72f5b331c220bbc2a85e054\"" Jul 12 00:07:32.762627 containerd[1476]: time="2025-07-12T00:07:32.762253091Z" level=info msg="StartContainer for \"d9df56594168444d7509269baddd4641607edd3da72f5b331c220bbc2a85e054\"" Jul 12 00:07:32.796518 systemd[1]: Started cri-containerd-d9df56594168444d7509269baddd4641607edd3da72f5b331c220bbc2a85e054.scope - libcontainer container d9df56594168444d7509269baddd4641607edd3da72f5b331c220bbc2a85e054. Jul 12 00:07:32.827582 containerd[1476]: time="2025-07-12T00:07:32.827517965Z" level=info msg="StartContainer for \"d9df56594168444d7509269baddd4641607edd3da72f5b331c220bbc2a85e054\" returns successfully" Jul 12 00:07:32.925036 containerd[1476]: time="2025-07-12T00:07:32.924195375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8s8j6,Uid:26ab643a-92ef-499e-ba47-d2164aeca2fd,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:07:32.952862 containerd[1476]: time="2025-07-12T00:07:32.951763989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:32.952862 containerd[1476]: time="2025-07-12T00:07:32.951831629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:32.952862 containerd[1476]: time="2025-07-12T00:07:32.951846909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:32.952862 containerd[1476]: time="2025-07-12T00:07:32.951931429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:32.972481 systemd[1]: Started cri-containerd-1ee7e0147964535d75b6d23e436cf684091586f9ca7732d02702fd1f4f7964cd.scope - libcontainer container 1ee7e0147964535d75b6d23e436cf684091586f9ca7732d02702fd1f4f7964cd. 
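The kube-proxy start above is the canonical CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox and returns a container id, and StartContainer launches it. A hedged sketch of the same three calls against the CRI v1 gRPC API; the socket path is containerd's usual default, the config values are stubs, and error handling is elided:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx := context.Background()
        // Assumption: containerd's default CRI endpoint.
        conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        sandboxCfg := &runtimeapi.PodSandboxConfig{ /* pod metadata as in the log */ }
        sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        ctr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            Config:        &runtimeapi.ContainerConfig{ /* kube-proxy image, mounts, ... */ },
            SandboxConfig: sandboxCfg,
        })
        _, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
    }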
Jul 12 00:07:33.008254 containerd[1476]: time="2025-07-12T00:07:33.008172138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8s8j6,Uid:26ab643a-92ef-499e-ba47-d2164aeca2fd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1ee7e0147964535d75b6d23e436cf684091586f9ca7732d02702fd1f4f7964cd\"" Jul 12 00:07:33.017969 containerd[1476]: time="2025-07-12T00:07:33.017193902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:07:34.483329 kubelet[2586]: I0712 00:07:34.483202 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rsgsv" podStartSLOduration=2.483176334 podStartE2EDuration="2.483176334s" podCreationTimestamp="2025-07-12 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:33.446282114 +0000 UTC m=+7.216134488" watchObservedRunningTime="2025-07-12 00:07:34.483176334 +0000 UTC m=+8.253028748" Jul 12 00:07:35.072180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166141723.mount: Deactivated successfully. Jul 12 00:07:35.532575 containerd[1476]: time="2025-07-12T00:07:35.532100697Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:35.534295 containerd[1476]: time="2025-07-12T00:07:35.534246978Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 00:07:35.535550 containerd[1476]: time="2025-07-12T00:07:35.535485739Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:35.538320 containerd[1476]: time="2025-07-12T00:07:35.538260940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:35.539322 containerd[1476]: time="2025-07-12T00:07:35.539162180Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.521125437s" Jul 12 00:07:35.539322 containerd[1476]: time="2025-07-12T00:07:35.539204420Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:07:35.544181 containerd[1476]: time="2025-07-12T00:07:35.543661822Z" level=info msg="CreateContainer within sandbox \"1ee7e0147964535d75b6d23e436cf684091586f9ca7732d02702fd1f4f7964cd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:07:35.560024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921291477.mount: Deactivated successfully. 
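The operator image pull above reports 22150610 bytes read in 2.521125437s; as a quick sanity check, that is roughly 8.4 MiB/s from quay.io:

    package main

    import "fmt"

    func main() {
        const bytesRead = 22150610.0 // "bytes read" from the log
        const seconds = 2.521125437  // pull duration from the log
        fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // prints 8.4 MiB/s
    }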
Jul 12 00:07:35.562847 containerd[1476]: time="2025-07-12T00:07:35.562808151Z" level=info msg="CreateContainer within sandbox \"1ee7e0147964535d75b6d23e436cf684091586f9ca7732d02702fd1f4f7964cd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4\"" Jul 12 00:07:35.563691 containerd[1476]: time="2025-07-12T00:07:35.563533511Z" level=info msg="StartContainer for \"ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4\"" Jul 12 00:07:35.590432 systemd[1]: Started cri-containerd-ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4.scope - libcontainer container ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4. Jul 12 00:07:35.623756 containerd[1476]: time="2025-07-12T00:07:35.623638898Z" level=info msg="StartContainer for \"ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4\" returns successfully" Jul 12 00:07:41.859643 sudo[1750]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:42.022500 sshd[1747]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:42.029874 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:07:42.030569 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:07:42.031568 systemd[1]: session-7.scope: Consumed 7.110s CPU time, 150.2M memory peak, 0B memory swap peak. Jul 12 00:07:42.031968 systemd[1]: sshd@6-91.99.219.165:22-139.178.68.195:34808.service: Deactivated successfully. Jul 12 00:07:42.041788 systemd-logind[1459]: Removed session 7. Jul 12 00:07:52.066827 kubelet[2586]: I0712 00:07:52.066750 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-8s8j6" podStartSLOduration=17.540067051 podStartE2EDuration="20.066731931s" podCreationTimestamp="2025-07-12 00:07:32 +0000 UTC" firstStartedPulling="2025-07-12 00:07:33.013481901 +0000 UTC m=+6.783334275" lastFinishedPulling="2025-07-12 00:07:35.540146821 +0000 UTC m=+9.309999155" observedRunningTime="2025-07-12 00:07:36.449784302 +0000 UTC m=+10.219636716" watchObservedRunningTime="2025-07-12 00:07:52.066731931 +0000 UTC m=+25.836584305" Jul 12 00:07:52.079491 systemd[1]: Created slice kubepods-besteffort-pod4976b82e_b01c_418f_be11_a474987b8c03.slice - libcontainer container kubepods-besteffort-pod4976b82e_b01c_418f_be11_a474987b8c03.slice. 
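One relationship worth noting in the pod_startup_latency_tracker entry above: podStartE2EDuration minus podStartSLOduration equals the image-pull window (lastFinishedPulling minus firstStartedPulling), because the SLO figure deliberately excludes pull time. The tigera-operator numbers bear this out:

    package main

    import "fmt"

    func main() {
        const e2e = 20.066731931 // podStartE2EDuration, seconds
        const slo = 17.540067051 // podStartSLOduration, seconds
        // lastFinishedPulling - firstStartedPulling, from the 00:07:33/00:07:35 entries:
        const pull = 35.540146821 - 33.013481901
        fmt.Printf("e2e-slo=%.6fs pull=%.6fs\n", e2e-slo, pull) // both print 2.526665s
    }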
Jul 12 00:07:52.114603 kubelet[2586]: I0712 00:07:52.114556 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4976b82e-b01c-418f-be11-a474987b8c03-tigera-ca-bundle\") pod \"calico-typha-7fcd97f75f-v952r\" (UID: \"4976b82e-b01c-418f-be11-a474987b8c03\") " pod="calico-system/calico-typha-7fcd97f75f-v952r" Jul 12 00:07:52.114603 kubelet[2586]: I0712 00:07:52.114603 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4976b82e-b01c-418f-be11-a474987b8c03-typha-certs\") pod \"calico-typha-7fcd97f75f-v952r\" (UID: \"4976b82e-b01c-418f-be11-a474987b8c03\") " pod="calico-system/calico-typha-7fcd97f75f-v952r" Jul 12 00:07:52.114796 kubelet[2586]: I0712 00:07:52.114626 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvfz\" (UniqueName: \"kubernetes.io/projected/4976b82e-b01c-418f-be11-a474987b8c03-kube-api-access-8mvfz\") pod \"calico-typha-7fcd97f75f-v952r\" (UID: \"4976b82e-b01c-418f-be11-a474987b8c03\") " pod="calico-system/calico-typha-7fcd97f75f-v952r" Jul 12 00:07:52.258255 systemd[1]: Created slice kubepods-besteffort-pode4c5a459_5eaa_4111_8d40_f40a3d81f107.slice - libcontainer container kubepods-besteffort-pode4c5a459_5eaa_4111_8d40_f40a3d81f107.slice. Jul 12 00:07:52.315986 kubelet[2586]: I0712 00:07:52.315927 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-cni-bin-dir\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.315986 kubelet[2586]: I0712 00:07:52.315973 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-flexvol-driver-host\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.315986 kubelet[2586]: I0712 00:07:52.315993 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4c5a459-5eaa-4111-8d40-f40a3d81f107-tigera-ca-bundle\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316347 kubelet[2586]: I0712 00:07:52.316014 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-var-lib-calico\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316347 kubelet[2586]: I0712 00:07:52.316037 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-var-run-calico\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316347 kubelet[2586]: I0712 00:07:52.316053 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nhtq6\" (UniqueName: \"kubernetes.io/projected/e4c5a459-5eaa-4111-8d40-f40a3d81f107-kube-api-access-nhtq6\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316347 kubelet[2586]: I0712 00:07:52.316071 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-cni-log-dir\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316347 kubelet[2586]: I0712 00:07:52.316087 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-policysync\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316477 kubelet[2586]: I0712 00:07:52.316104 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-lib-modules\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316477 kubelet[2586]: I0712 00:07:52.316123 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e4c5a459-5eaa-4111-8d40-f40a3d81f107-node-certs\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316477 kubelet[2586]: I0712 00:07:52.316141 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-cni-net-dir\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.316477 kubelet[2586]: I0712 00:07:52.316202 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4c5a459-5eaa-4111-8d40-f40a3d81f107-xtables-lock\") pod \"calico-node-jhlqb\" (UID: \"e4c5a459-5eaa-4111-8d40-f40a3d81f107\") " pod="calico-system/calico-node-jhlqb" Jul 12 00:07:52.381448 kubelet[2586]: E0712 00:07:52.380774 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mp89" podUID="b6e2f19f-7174-405d-8aeb-93e33315aa19" Jul 12 00:07:52.385243 containerd[1476]: time="2025-07-12T00:07:52.384376330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fcd97f75f-v952r,Uid:4976b82e-b01c-418f-be11-a474987b8c03,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:52.420596 kubelet[2586]: I0712 00:07:52.420231 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5c5k\" (UniqueName: \"kubernetes.io/projected/b6e2f19f-7174-405d-8aeb-93e33315aa19-kube-api-access-f5c5k\") pod \"csi-node-driver-4mp89\" (UID: \"b6e2f19f-7174-405d-8aeb-93e33315aa19\") " 
pod="calico-system/csi-node-driver-4mp89" Jul 12 00:07:52.420596 kubelet[2586]: I0712 00:07:52.420326 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6e2f19f-7174-405d-8aeb-93e33315aa19-registration-dir\") pod \"csi-node-driver-4mp89\" (UID: \"b6e2f19f-7174-405d-8aeb-93e33315aa19\") " pod="calico-system/csi-node-driver-4mp89" Jul 12 00:07:52.420596 kubelet[2586]: I0712 00:07:52.420355 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6e2f19f-7174-405d-8aeb-93e33315aa19-socket-dir\") pod \"csi-node-driver-4mp89\" (UID: \"b6e2f19f-7174-405d-8aeb-93e33315aa19\") " pod="calico-system/csi-node-driver-4mp89" Jul 12 00:07:52.420596 kubelet[2586]: I0712 00:07:52.420404 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6e2f19f-7174-405d-8aeb-93e33315aa19-kubelet-dir\") pod \"csi-node-driver-4mp89\" (UID: \"b6e2f19f-7174-405d-8aeb-93e33315aa19\") " pod="calico-system/csi-node-driver-4mp89" Jul 12 00:07:52.420596 kubelet[2586]: I0712 00:07:52.420451 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b6e2f19f-7174-405d-8aeb-93e33315aa19-varrun\") pod \"csi-node-driver-4mp89\" (UID: \"b6e2f19f-7174-405d-8aeb-93e33315aa19\") " pod="calico-system/csi-node-driver-4mp89" Jul 12 00:07:52.425370 kubelet[2586]: E0712 00:07:52.424151 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.425940 kubelet[2586]: W0712 00:07:52.424197 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.425940 kubelet[2586]: E0712 00:07:52.425787 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.430518 kubelet[2586]: E0712 00:07:52.429133 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.430518 kubelet[2586]: W0712 00:07:52.429158 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.430518 kubelet[2586]: E0712 00:07:52.429220 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:52.430518 kubelet[2586]: E0712 00:07:52.430466 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.430518 kubelet[2586]: W0712 00:07:52.430488 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.430518 kubelet[2586]: E0712 00:07:52.430510 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.431311 kubelet[2586]: E0712 00:07:52.431016 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.431311 kubelet[2586]: W0712 00:07:52.431036 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.431311 kubelet[2586]: E0712 00:07:52.431056 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.432553 kubelet[2586]: E0712 00:07:52.432049 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.432553 kubelet[2586]: W0712 00:07:52.432078 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.432553 kubelet[2586]: E0712 00:07:52.432096 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.433960 kubelet[2586]: E0712 00:07:52.433402 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.433960 kubelet[2586]: W0712 00:07:52.433426 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.433960 kubelet[2586]: E0712 00:07:52.433446 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.434674 kubelet[2586]: E0712 00:07:52.434441 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.434674 kubelet[2586]: W0712 00:07:52.434462 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.434674 kubelet[2586]: E0712 00:07:52.434484 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:52.435600 kubelet[2586]: E0712 00:07:52.434908 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.435600 kubelet[2586]: W0712 00:07:52.434925 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.435600 kubelet[2586]: E0712 00:07:52.434950 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.436369 kubelet[2586]: E0712 00:07:52.436311 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.436547 kubelet[2586]: W0712 00:07:52.436448 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.436547 kubelet[2586]: E0712 00:07:52.436476 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.438801 kubelet[2586]: E0712 00:07:52.438353 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.438801 kubelet[2586]: W0712 00:07:52.438372 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.438801 kubelet[2586]: E0712 00:07:52.438415 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.439251 kubelet[2586]: E0712 00:07:52.439044 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.439251 kubelet[2586]: W0712 00:07:52.439061 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.439251 kubelet[2586]: E0712 00:07:52.439084 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.439694 kubelet[2586]: E0712 00:07:52.439428 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.439694 kubelet[2586]: W0712 00:07:52.439445 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.440228 kubelet[2586]: E0712 00:07:52.439459 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:52.456032 kubelet[2586]: E0712 00:07:52.455917 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.456032 kubelet[2586]: W0712 00:07:52.455949 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.456032 kubelet[2586]: E0712 00:07:52.455971 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.461292 containerd[1476]: time="2025-07-12T00:07:52.461150949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:52.461444 containerd[1476]: time="2025-07-12T00:07:52.461419549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:52.461562 containerd[1476]: time="2025-07-12T00:07:52.461540789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:52.461848 containerd[1476]: time="2025-07-12T00:07:52.461807910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:52.496652 systemd[1]: Started cri-containerd-b6b8967dc21e1e121ee5ba54ff93887890f67a3f81fdef8b6485f28e75076711.scope - libcontainer container b6b8967dc21e1e121ee5ba54ff93887890f67a3f81fdef8b6485f28e75076711. Jul 12 00:07:52.521517 kubelet[2586]: E0712 00:07:52.521488 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.521695 kubelet[2586]: W0712 00:07:52.521676 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.521808 kubelet[2586]: E0712 00:07:52.521767 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.522266 kubelet[2586]: E0712 00:07:52.522226 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.522266 kubelet[2586]: W0712 00:07:52.522245 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.522462 kubelet[2586]: E0712 00:07:52.522315 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:52.522917 kubelet[2586]: E0712 00:07:52.522802 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.522917 kubelet[2586]: W0712 00:07:52.522817 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.522917 kubelet[2586]: E0712 00:07:52.522849 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.523566 kubelet[2586]: E0712 00:07:52.523433 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.523566 kubelet[2586]: W0712 00:07:52.523447 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.523566 kubelet[2586]: E0712 00:07:52.523492 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.523916 kubelet[2586]: E0712 00:07:52.523893 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.523916 kubelet[2586]: W0712 00:07:52.523911 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.524014 kubelet[2586]: E0712 00:07:52.523934 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.524577 kubelet[2586]: E0712 00:07:52.524491 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.524577 kubelet[2586]: W0712 00:07:52.524508 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.524577 kubelet[2586]: E0712 00:07:52.524544 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:52.525183 kubelet[2586]: E0712 00:07:52.525129 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:52.525183 kubelet[2586]: W0712 00:07:52.525146 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:52.525959 kubelet[2586]: E0712 00:07:52.525932 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 12 00:07:52.526291 kubelet[2586]: E0712 00:07:52.526087 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:07:52.526291 kubelet[2586]: W0712 00:07:52.526101 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:07:52.526414 kubelet[2586]: E0712 00:07:52.526398 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same driver-call/FlexVolume/plugins error triplet repeats, identical except for timestamps, through Jul 12 00:07:52.560725]
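The repeated triplet above is one failure logged three ways: the kubelet probes the FlexVolume plugin directory, tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, gets empty output because the binary is absent, and then fails to unmarshal that empty string as JSON. Under the standard FlexVolume calling convention (subcommand in $1, JSON status on stdout), a stub like the sketch below, installed at the probed path and marked executable, would satisfy the init probe; everything beyond what the log shows (the stub's behavior for other subcommands) is an assumption.

```bash
#!/usr/bin/env bash
# Minimal FlexVolume driver stub (sketch, not Calico's real driver).
# The kubelet invokes the driver with a subcommand as $1; "init" must
# print a JSON status object on stdout and exit 0 for the probe to pass.
case "$1" in
  init)
    # Report success and advertise no attach support, so the kubelet
    # will not route attach/detach calls to this driver.
    echo '{"status": "Success", "capabilities": {"attach": false}}'
    exit 0
    ;;
  *)
    # All other operations are unimplemented in this stub.
    echo '{"status": "Not supported"}'
    exit 1
    ;;
esac
```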
Jul 12 00:07:52.563501 containerd[1476]: time="2025-07-12T00:07:52.563463455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jhlqb,Uid:e4c5a459-5eaa-4111-8d40-f40a3d81f107,Namespace:calico-system,Attempt:0,}"
Jul 12 00:07:52.591604 containerd[1476]: time="2025-07-12T00:07:52.591398342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:07:52.591604 containerd[1476]: time="2025-07-12T00:07:52.591531502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:07:52.591604 containerd[1476]: time="2025-07-12T00:07:52.591570702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:52.592081 containerd[1476]: time="2025-07-12T00:07:52.591854742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:52.611423 systemd[1]: Started cri-containerd-00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb.scope - libcontainer container 00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb.
Jul 12 00:07:52.639784 containerd[1476]: time="2025-07-12T00:07:52.639512034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fcd97f75f-v952r,Uid:4976b82e-b01c-418f-be11-a474987b8c03,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6b8967dc21e1e121ee5ba54ff93887890f67a3f81fdef8b6485f28e75076711\""
Jul 12 00:07:52.645041 containerd[1476]: time="2025-07-12T00:07:52.644755835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 12 00:07:52.678644 containerd[1476]: time="2025-07-12T00:07:52.678592803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jhlqb,Uid:e4c5a459-5eaa-4111-8d40-f40a3d81f107,Namespace:calico-system,Attempt:0,} returns sandbox id \"00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb\""
Jul 12 00:07:54.113432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948936281.mount: Deactivated successfully.
Jul 12 00:07:54.373132 kubelet[2586]: E0712 00:07:54.372920 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mp89" podUID="b6e2f19f-7174-405d-8aeb-93e33315aa19"
Jul 12 00:07:55.226010 containerd[1476]: time="2025-07-12T00:07:55.225937134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:55.227809 containerd[1476]: time="2025-07-12T00:07:55.227744654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 12 00:07:55.228909 containerd[1476]: time="2025-07-12T00:07:55.228843454Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:55.231621 containerd[1476]: time="2025-07-12T00:07:55.231559535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:55.232771 containerd[1476]: time="2025-07-12T00:07:55.232356815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.58756242s"
Jul 12 00:07:55.232771 containerd[1476]: time="2025-07-12T00:07:55.232400655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
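The RunPodSandbox / PullImage sequence above can be cross-checked from a shell with crictl. A sketch, assuming crictl is installed and containerd's CRI socket is at the usual /run/containerd/containerd.sock (both assumptions about this host, not facts from the log):

```bash
# Point crictl at containerd's CRI socket (socket path is an assumption).
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock

crictl pods --name calico-typha    # sandbox b6b8967dc21e... from the log above
crictl pods --name calico-node     # sandbox 00cc2dd48bbb... from the log above
crictl images | grep calico/typha  # ghcr.io/flatcar/calico/typha:v3.30.2 after the pull
```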
reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:07:55.233607 containerd[1476]: time="2025-07-12T00:07:55.233581855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:07:55.249031 containerd[1476]: time="2025-07-12T00:07:55.248989059Z" level=info msg="CreateContainer within sandbox \"b6b8967dc21e1e121ee5ba54ff93887890f67a3f81fdef8b6485f28e75076711\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:07:55.267352 containerd[1476]: time="2025-07-12T00:07:55.267302383Z" level=info msg="CreateContainer within sandbox \"b6b8967dc21e1e121ee5ba54ff93887890f67a3f81fdef8b6485f28e75076711\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9088416c4c001fd550cf030ba683f1933dd8df24f14e0bbb6f58b8e70f91fff2\"" Jul 12 00:07:55.268862 containerd[1476]: time="2025-07-12T00:07:55.268733984Z" level=info msg="StartContainer for \"9088416c4c001fd550cf030ba683f1933dd8df24f14e0bbb6f58b8e70f91fff2\"" Jul 12 00:07:55.305459 systemd[1]: Started cri-containerd-9088416c4c001fd550cf030ba683f1933dd8df24f14e0bbb6f58b8e70f91fff2.scope - libcontainer container 9088416c4c001fd550cf030ba683f1933dd8df24f14e0bbb6f58b8e70f91fff2. Jul 12 00:07:55.342854 containerd[1476]: time="2025-07-12T00:07:55.342809041Z" level=info msg="StartContainer for \"9088416c4c001fd550cf030ba683f1933dd8df24f14e0bbb6f58b8e70f91fff2\" returns successfully" Jul 12 00:07:55.518740 kubelet[2586]: E0712 00:07:55.518623 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.518740 kubelet[2586]: W0712 00:07:55.518653 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.518740 kubelet[2586]: E0712 00:07:55.518676 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.519907 kubelet[2586]: E0712 00:07:55.519870 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.520016 kubelet[2586]: W0712 00:07:55.519946 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.520062 kubelet[2586]: E0712 00:07:55.520025 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.520377 kubelet[2586]: E0712 00:07:55.520350 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.520377 kubelet[2586]: W0712 00:07:55.520372 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.520442 kubelet[2586]: E0712 00:07:55.520390 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:55.521067 kubelet[2586]: E0712 00:07:55.521045 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.521117 kubelet[2586]: W0712 00:07:55.521067 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.521117 kubelet[2586]: E0712 00:07:55.521083 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.521370 kubelet[2586]: E0712 00:07:55.521351 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.521370 kubelet[2586]: W0712 00:07:55.521371 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.521448 kubelet[2586]: E0712 00:07:55.521382 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.522696 kubelet[2586]: E0712 00:07:55.522503 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.522696 kubelet[2586]: W0712 00:07:55.522529 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.522696 kubelet[2586]: E0712 00:07:55.522546 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.523858 kubelet[2586]: E0712 00:07:55.523774 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.523858 kubelet[2586]: W0712 00:07:55.523790 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.523858 kubelet[2586]: E0712 00:07:55.523806 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.524386 kubelet[2586]: E0712 00:07:55.524257 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.524386 kubelet[2586]: W0712 00:07:55.524272 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.524386 kubelet[2586]: E0712 00:07:55.524286 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:55.524861 kubelet[2586]: E0712 00:07:55.524756 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.524861 kubelet[2586]: W0712 00:07:55.524771 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.524861 kubelet[2586]: E0712 00:07:55.524784 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.525579 kubelet[2586]: E0712 00:07:55.525456 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.525579 kubelet[2586]: W0712 00:07:55.525471 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.525579 kubelet[2586]: E0712 00:07:55.525485 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.526601 kubelet[2586]: E0712 00:07:55.526460 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.526601 kubelet[2586]: W0712 00:07:55.526476 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.526601 kubelet[2586]: E0712 00:07:55.526491 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.526883 kubelet[2586]: E0712 00:07:55.526808 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.526883 kubelet[2586]: W0712 00:07:55.526821 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.526883 kubelet[2586]: E0712 00:07:55.526833 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.529492 kubelet[2586]: E0712 00:07:55.529332 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.529492 kubelet[2586]: W0712 00:07:55.529354 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.529492 kubelet[2586]: E0712 00:07:55.529370 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:55.529814 kubelet[2586]: E0712 00:07:55.529740 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.529814 kubelet[2586]: W0712 00:07:55.529754 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.529814 kubelet[2586]: E0712 00:07:55.529767 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.530312 kubelet[2586]: E0712 00:07:55.530116 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.530312 kubelet[2586]: W0712 00:07:55.530129 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.530312 kubelet[2586]: E0712 00:07:55.530238 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.545764 kubelet[2586]: E0712 00:07:55.545732 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.546074 kubelet[2586]: W0712 00:07:55.545930 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.546074 kubelet[2586]: E0712 00:07:55.545969 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.546642 kubelet[2586]: E0712 00:07:55.546514 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.546642 kubelet[2586]: W0712 00:07:55.546534 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.546642 kubelet[2586]: E0712 00:07:55.546564 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.546820 kubelet[2586]: E0712 00:07:55.546791 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.546820 kubelet[2586]: W0712 00:07:55.546815 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.546872 kubelet[2586]: E0712 00:07:55.546838 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:55.548437 kubelet[2586]: E0712 00:07:55.548412 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.548437 kubelet[2586]: W0712 00:07:55.548435 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.548624 kubelet[2586]: E0712 00:07:55.548517 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.549471 kubelet[2586]: E0712 00:07:55.549443 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.549471 kubelet[2586]: W0712 00:07:55.549465 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.549711 kubelet[2586]: E0712 00:07:55.549548 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.549782 kubelet[2586]: E0712 00:07:55.549765 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.549782 kubelet[2586]: W0712 00:07:55.549780 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.549907 kubelet[2586]: E0712 00:07:55.549842 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.549940 kubelet[2586]: E0712 00:07:55.549934 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.550034 kubelet[2586]: W0712 00:07:55.549942 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.550034 kubelet[2586]: E0712 00:07:55.549971 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.550088 kubelet[2586]: E0712 00:07:55.550075 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.550088 kubelet[2586]: W0712 00:07:55.550083 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.550153 kubelet[2586]: E0712 00:07:55.550097 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:55.550297 kubelet[2586]: E0712 00:07:55.550281 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.550297 kubelet[2586]: W0712 00:07:55.550293 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.550297 kubelet[2586]: E0712 00:07:55.550306 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.550455 kubelet[2586]: E0712 00:07:55.550440 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.550455 kubelet[2586]: W0712 00:07:55.550450 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.550511 kubelet[2586]: E0712 00:07:55.550464 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.550648 kubelet[2586]: E0712 00:07:55.550634 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.550648 kubelet[2586]: W0712 00:07:55.550645 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.550725 kubelet[2586]: E0712 00:07:55.550661 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.552255 kubelet[2586]: E0712 00:07:55.551333 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.552255 kubelet[2586]: W0712 00:07:55.551352 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.552255 kubelet[2586]: E0712 00:07:55.551375 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.552815 kubelet[2586]: E0712 00:07:55.552662 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.552815 kubelet[2586]: W0712 00:07:55.552683 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.552815 kubelet[2586]: E0712 00:07:55.552726 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:55.553129 kubelet[2586]: E0712 00:07:55.552997 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.553129 kubelet[2586]: W0712 00:07:55.553010 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.553247 kubelet[2586]: E0712 00:07:55.553126 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.554297 kubelet[2586]: E0712 00:07:55.553663 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.554297 kubelet[2586]: W0712 00:07:55.553681 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.554408 kubelet[2586]: E0712 00:07:55.554312 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.554623 kubelet[2586]: E0712 00:07:55.554601 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.554623 kubelet[2586]: W0712 00:07:55.554619 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.554806 kubelet[2586]: E0712 00:07:55.554736 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.556363 kubelet[2586]: E0712 00:07:55.556329 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.556363 kubelet[2586]: W0712 00:07:55.556353 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.556469 kubelet[2586]: E0712 00:07:55.556379 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:55.556674 kubelet[2586]: E0712 00:07:55.556655 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:55.556674 kubelet[2586]: W0712 00:07:55.556671 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:55.556741 kubelet[2586]: E0712 00:07:55.556683 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
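The `Started cri-containerd-<id>.scope` lines above suggest the runtime is using the systemd cgroup driver, which wraps each container in a transient scope unit; if so, the container can also be inspected through systemd itself. A sketch, using the full container id taken from the log above:

```bash
# Inspect the transient scope backing the calico-typha container.
systemctl status cri-containerd-9088416c4c001fd550cf030ba683f1933dd8df24f14e0bbb6f58b8e70f91fff2.scope

# Show where that scope sits in the cgroup hierarchy.
systemd-cgls
```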
Jul 12 00:07:56.373822 kubelet[2586]: E0712 00:07:56.373161 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mp89" podUID="b6e2f19f-7174-405d-8aeb-93e33315aa19"
Jul 12 00:07:56.482992 kubelet[2586]: I0712 00:07:56.482908 2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 00:07:56.535900 kubelet[2586]: E0712 00:07:56.535714 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:07:56.535900 kubelet[2586]: W0712 00:07:56.535742 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:07:56.535900 kubelet[2586]: E0712 00:07:56.535772 2586 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same driver-call/FlexVolume/plugins error triplet repeats, identical except for timestamps, through Jul 12 00:07:56.575379]
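The recurring "cni plugin not initialized" errors (at 00:07:54 and 00:07:56 above) persist until calico-node writes a CNI configuration onto the node. Both the runtime's view and the config directory can be checked directly; in this sketch, jq is used only for readability and /etc/cni/net.d is the conventional (assumed) config path:

```bash
# Runtime network status: NetworkReady flips to true once a CNI config exists.
crictl info | jq '.status.conditions'

# Calico's install step drops its conflist here once calico-node is running.
ls -l /etc/cni/net.d/
```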
Error: unexpected end of JSON input" Jul 12 00:07:56.729844 containerd[1476]: time="2025-07-12T00:07:56.729699477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:56.733510 containerd[1476]: time="2025-07-12T00:07:56.733181758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 12 00:07:56.736003 containerd[1476]: time="2025-07-12T00:07:56.735936918Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:56.740269 containerd[1476]: time="2025-07-12T00:07:56.740166559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:56.740955 containerd[1476]: time="2025-07-12T00:07:56.740789919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.506843543s" Jul 12 00:07:56.740955 containerd[1476]: time="2025-07-12T00:07:56.740828559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:07:56.745563 containerd[1476]: time="2025-07-12T00:07:56.745421560Z" level=info msg="CreateContainer within sandbox \"00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:07:56.763931 containerd[1476]: time="2025-07-12T00:07:56.763882924Z" level=info msg="CreateContainer within sandbox \"00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded\"" Jul 12 00:07:56.766261 containerd[1476]: time="2025-07-12T00:07:56.764734285Z" level=info msg="StartContainer for \"2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded\"" Jul 12 00:07:56.801414 systemd[1]: Started cri-containerd-2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded.scope - libcontainer container 2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded. Jul 12 00:07:56.831579 containerd[1476]: time="2025-07-12T00:07:56.831245140Z" level=info msg="StartContainer for \"2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded\" returns successfully" Jul 12 00:07:56.850438 systemd[1]: cri-containerd-2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded.scope: Deactivated successfully. Jul 12 00:07:56.881289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded-rootfs.mount: Deactivated successfully. 
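The repeated driver-call.go / plugins.go triplets above are the kubelet's FlexVolume prober failing in a loop: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the binary does not exist yet (the flexvol-driver init container started just above is what installs it), so stdout comes back empty and unmarshalling "" as the driver's JSON status yields "unexpected end of JSON input". A minimal sketch of what a FlexVolume driver is expected to print for init, following the documented call convention — an illustrative stub, not Calico's actual uds driver:

```go
// flexvol_stub.go: illustrative FlexVolume driver stub.
// The kubelet execs the driver as "<driver> init" and expects a JSON
// status object on stdout; an empty stdout is exactly what produces
// the "unexpected end of JSON input" errors in driver-call.go above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	op := ""
	if len(os.Args) > 1 {
		op = os.Args[1]
	}
	switch op {
	case "init":
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Per the FlexVolume contract, unimplemented calls report "Not supported".
		out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: op})
		fmt.Println(string(out))
		os.Exit(1)
	}
}
```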
Jul 12 00:07:56.979659 containerd[1476]: time="2025-07-12T00:07:56.979387613Z" level=info msg="shim disconnected" id=2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded namespace=k8s.io Jul 12 00:07:56.979659 containerd[1476]: time="2025-07-12T00:07:56.979468813Z" level=warning msg="cleaning up after shim disconnected" id=2226fb1fce6c07b8311e1609677d3a8714b2801e1852827d7337ef1a41bd3ded namespace=k8s.io Jul 12 00:07:56.979659 containerd[1476]: time="2025-07-12T00:07:56.979480533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:07:57.490548 containerd[1476]: time="2025-07-12T00:07:57.490503126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:07:57.510634 kubelet[2586]: I0712 00:07:57.510522 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7fcd97f75f-v952r" podStartSLOduration=2.92065391 podStartE2EDuration="5.51050117s" podCreationTimestamp="2025-07-12 00:07:52 +0000 UTC" firstStartedPulling="2025-07-12 00:07:52.643589155 +0000 UTC m=+26.413441529" lastFinishedPulling="2025-07-12 00:07:55.233436455 +0000 UTC m=+29.003288789" observedRunningTime="2025-07-12 00:07:55.51399436 +0000 UTC m=+29.283846774" watchObservedRunningTime="2025-07-12 00:07:57.51050117 +0000 UTC m=+31.280353584" Jul 12 00:07:58.375123 kubelet[2586]: E0712 00:07:58.373155 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mp89" podUID="b6e2f19f-7174-405d-8aeb-93e33315aa19" Jul 12 00:08:00.374439 kubelet[2586]: E0712 00:08:00.374393 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mp89" podUID="b6e2f19f-7174-405d-8aeb-93e33315aa19" Jul 12 00:08:00.723355 kubelet[2586]: I0712 00:08:00.722646 2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:01.027406 containerd[1476]: time="2025-07-12T00:08:01.027259399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:01.029197 containerd[1476]: time="2025-07-12T00:08:01.029106799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:08:01.030765 containerd[1476]: time="2025-07-12T00:08:01.030662040Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:01.033969 containerd[1476]: time="2025-07-12T00:08:01.033892000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:01.035000 containerd[1476]: time="2025-07-12T00:08:01.034875281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.544332435s" Jul 12 00:08:01.035000 containerd[1476]: time="2025-07-12T00:08:01.034912481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:08:01.038235 containerd[1476]: time="2025-07-12T00:08:01.037993201Z" level=info msg="CreateContainer within sandbox \"00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:08:01.055851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256558965.mount: Deactivated successfully. Jul 12 00:08:01.058191 containerd[1476]: time="2025-07-12T00:08:01.058122205Z" level=info msg="CreateContainer within sandbox \"00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c\"" Jul 12 00:08:01.061335 containerd[1476]: time="2025-07-12T00:08:01.059641086Z" level=info msg="StartContainer for \"067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c\"" Jul 12 00:08:01.096448 systemd[1]: Started cri-containerd-067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c.scope - libcontainer container 067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c. Jul 12 00:08:01.130262 containerd[1476]: time="2025-07-12T00:08:01.129487700Z" level=info msg="StartContainer for \"067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c\" returns successfully" Jul 12 00:08:01.695239 containerd[1476]: time="2025-07-12T00:08:01.695167575Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:08:01.697847 systemd[1]: cri-containerd-067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c.scope: Deactivated successfully. Jul 12 00:08:01.719035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c-rootfs.mount: Deactivated successfully. 
Jul 12 00:08:01.750748 kubelet[2586]: I0712 00:08:01.750707 2586 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:08:01.791168 kubelet[2586]: I0712 00:08:01.790441 2586 status_manager.go:890] "Failed to get status for pod" podUID="061bde8d-e1c8-4411-baca-b18c385b32b1" pod="kube-system/coredns-668d6bf9bc-6tkc8" err="pods \"coredns-668d6bf9bc-6tkc8\" is forbidden: User \"system:node:ci-4081-3-4-n-f6981960e0\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-4-n-f6981960e0' and this object" Jul 12 00:08:01.791584 kubelet[2586]: W0712 00:08:01.791384 2586 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-4-n-f6981960e0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-4-n-f6981960e0' and this object Jul 12 00:08:01.791584 kubelet[2586]: E0712 00:08:01.791419 2586 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-4-n-f6981960e0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-4-n-f6981960e0' and this object" logger="UnhandledError" Jul 12 00:08:01.796714 systemd[1]: Created slice kubepods-burstable-pod061bde8d_e1c8_4411_baca_b18c385b32b1.slice - libcontainer container kubepods-burstable-pod061bde8d_e1c8_4411_baca_b18c385b32b1.slice. Jul 12 00:08:01.800161 kubelet[2586]: I0712 00:08:01.799688 2586 status_manager.go:890] "Failed to get status for pod" podUID="0b42f234-5a87-408f-b8c4-2ac2eed39fd7" pod="kube-system/coredns-668d6bf9bc-fmlsv" err="pods \"coredns-668d6bf9bc-fmlsv\" is forbidden: User \"system:node:ci-4081-3-4-n-f6981960e0\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-4-n-f6981960e0' and this object" Jul 12 00:08:01.801243 containerd[1476]: time="2025-07-12T00:08:01.801058957Z" level=info msg="shim disconnected" id=067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c namespace=k8s.io Jul 12 00:08:01.801243 containerd[1476]: time="2025-07-12T00:08:01.801126317Z" level=warning msg="cleaning up after shim disconnected" id=067d528d5199f1e3ee81e4689c011838cf8f2ec5922b7ab716d1b395010df73c namespace=k8s.io Jul 12 00:08:01.801243 containerd[1476]: time="2025-07-12T00:08:01.801134677Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:01.809227 systemd[1]: Created slice kubepods-burstable-pod0b42f234_5a87_408f_b8c4_2ac2eed39fd7.slice - libcontainer container kubepods-burstable-pod0b42f234_5a87_408f_b8c4_2ac2eed39fd7.slice. Jul 12 00:08:01.828517 systemd[1]: Created slice kubepods-besteffort-pod2a54327b_26b7_4764_93f1_2d3f8be7ff94.slice - libcontainer container kubepods-besteffort-pod2a54327b_26b7_4764_93f1_2d3f8be7ff94.slice. Jul 12 00:08:01.851513 systemd[1]: Created slice kubepods-besteffort-pode5057915_c048_4fff_b278_81f27d624590.slice - libcontainer container kubepods-besteffort-pode5057915_c048_4fff_b278_81f27d624590.slice. Jul 12 00:08:01.862972 systemd[1]: Created slice kubepods-besteffort-pode5b5b2b4_7d84_4c99_896a_91d48632272f.slice - libcontainer container kubepods-besteffort-pode5b5b2b4_7d84_4c99_896a_91d48632272f.slice. 
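The kubepods-*.slice names systemd reports as these pods are admitted follow the kubelet's systemd cgroup-driver convention visible in the lines above: the pod's QoS class (burstable, besteffort) plus the pod UID with dashes mapped to underscores. A sketch of that mapping, derived from the names in this log rather than from kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the kubepods-<qos>-pod<uid>.slice names that
// systemd logs when the kubelet creates a pod cgroup.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "061bde8d-e1c8-4411-baca-b18c385b32b1"))
	// kubepods-burstable-pod061bde8d_e1c8_4411_baca_b18c385b32b1.slice
}
```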
Jul 12 00:08:01.873710 systemd[1]: Created slice kubepods-besteffort-pod38a85e2e_1a0b_4fb0_b2a5_5c3bb45039e7.slice - libcontainer container kubepods-besteffort-pod38a85e2e_1a0b_4fb0_b2a5_5c3bb45039e7.slice. Jul 12 00:08:01.883532 systemd[1]: Created slice kubepods-besteffort-pode0f120ef_124f_4f0f_8ada_007ac6b4610e.slice - libcontainer container kubepods-besteffort-pode0f120ef_124f_4f0f_8ada_007ac6b4610e.slice. Jul 12 00:08:01.900307 kubelet[2586]: I0712 00:08:01.900267 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5057915-c048-4fff-b278-81f27d624590-whisker-ca-bundle\") pod \"whisker-74957578d7-gb4g4\" (UID: \"e5057915-c048-4fff-b278-81f27d624590\") " pod="calico-system/whisker-74957578d7-gb4g4" Jul 12 00:08:01.900758 kubelet[2586]: I0712 00:08:01.900697 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4rv6\" (UniqueName: \"kubernetes.io/projected/e5057915-c048-4fff-b278-81f27d624590-kube-api-access-l4rv6\") pod \"whisker-74957578d7-gb4g4\" (UID: \"e5057915-c048-4fff-b278-81f27d624590\") " pod="calico-system/whisker-74957578d7-gb4g4" Jul 12 00:08:01.900952 kubelet[2586]: I0712 00:08:01.900934 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f120ef-124f-4f0f-8ada-007ac6b4610e-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-pqdm4\" (UID: \"e0f120ef-124f-4f0f-8ada-007ac6b4610e\") " pod="calico-system/goldmane-768f4c5c69-pqdm4" Jul 12 00:08:01.901090 kubelet[2586]: I0712 00:08:01.901036 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2a54327b-26b7-4764-93f1-2d3f8be7ff94-calico-apiserver-certs\") pod \"calico-apiserver-694bf789d4-jsvl4\" (UID: \"2a54327b-26b7-4764-93f1-2d3f8be7ff94\") " pod="calico-apiserver/calico-apiserver-694bf789d4-jsvl4" Jul 12 00:08:01.901204 kubelet[2586]: I0712 00:08:01.901187 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/061bde8d-e1c8-4411-baca-b18c385b32b1-config-volume\") pod \"coredns-668d6bf9bc-6tkc8\" (UID: \"061bde8d-e1c8-4411-baca-b18c385b32b1\") " pod="kube-system/coredns-668d6bf9bc-6tkc8" Jul 12 00:08:01.901371 kubelet[2586]: I0712 00:08:01.901354 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8kpq\" (UniqueName: \"kubernetes.io/projected/e5b5b2b4-7d84-4c99-896a-91d48632272f-kube-api-access-x8kpq\") pod \"calico-apiserver-694bf789d4-flhzn\" (UID: \"e5b5b2b4-7d84-4c99-896a-91d48632272f\") " pod="calico-apiserver/calico-apiserver-694bf789d4-flhzn" Jul 12 00:08:01.901505 kubelet[2586]: I0712 00:08:01.901490 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f120ef-124f-4f0f-8ada-007ac6b4610e-config\") pod \"goldmane-768f4c5c69-pqdm4\" (UID: \"e0f120ef-124f-4f0f-8ada-007ac6b4610e\") " pod="calico-system/goldmane-768f4c5c69-pqdm4" Jul 12 00:08:01.901654 kubelet[2586]: I0712 00:08:01.901629 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0b42f234-5a87-408f-b8c4-2ac2eed39fd7-config-volume\") pod \"coredns-668d6bf9bc-fmlsv\" (UID: \"0b42f234-5a87-408f-b8c4-2ac2eed39fd7\") " pod="kube-system/coredns-668d6bf9bc-fmlsv" Jul 12 00:08:01.901817 kubelet[2586]: I0712 00:08:01.901790 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e5b5b2b4-7d84-4c99-896a-91d48632272f-calico-apiserver-certs\") pod \"calico-apiserver-694bf789d4-flhzn\" (UID: \"e5b5b2b4-7d84-4c99-896a-91d48632272f\") " pod="calico-apiserver/calico-apiserver-694bf789d4-flhzn" Jul 12 00:08:01.901935 kubelet[2586]: I0712 00:08:01.901921 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e5057915-c048-4fff-b278-81f27d624590-whisker-backend-key-pair\") pod \"whisker-74957578d7-gb4g4\" (UID: \"e5057915-c048-4fff-b278-81f27d624590\") " pod="calico-system/whisker-74957578d7-gb4g4" Jul 12 00:08:01.902017 kubelet[2586]: I0712 00:08:01.902005 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkj26\" (UniqueName: \"kubernetes.io/projected/e0f120ef-124f-4f0f-8ada-007ac6b4610e-kube-api-access-pkj26\") pod \"goldmane-768f4c5c69-pqdm4\" (UID: \"e0f120ef-124f-4f0f-8ada-007ac6b4610e\") " pod="calico-system/goldmane-768f4c5c69-pqdm4" Jul 12 00:08:01.902182 kubelet[2586]: I0712 00:08:01.902153 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r45j\" (UniqueName: \"kubernetes.io/projected/38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7-kube-api-access-7r45j\") pod \"calico-kube-controllers-6ccd79f89c-9zkg8\" (UID: \"38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7\") " pod="calico-system/calico-kube-controllers-6ccd79f89c-9zkg8" Jul 12 00:08:01.902348 kubelet[2586]: I0712 00:08:01.902330 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-868vv\" (UniqueName: \"kubernetes.io/projected/061bde8d-e1c8-4411-baca-b18c385b32b1-kube-api-access-868vv\") pod \"coredns-668d6bf9bc-6tkc8\" (UID: \"061bde8d-e1c8-4411-baca-b18c385b32b1\") " pod="kube-system/coredns-668d6bf9bc-6tkc8" Jul 12 00:08:01.902440 kubelet[2586]: I0712 00:08:01.902425 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrp9\" (UniqueName: \"kubernetes.io/projected/0b42f234-5a87-408f-b8c4-2ac2eed39fd7-kube-api-access-tcrp9\") pod \"coredns-668d6bf9bc-fmlsv\" (UID: \"0b42f234-5a87-408f-b8c4-2ac2eed39fd7\") " pod="kube-system/coredns-668d6bf9bc-fmlsv" Jul 12 00:08:01.902617 kubelet[2586]: I0712 00:08:01.902513 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e0f120ef-124f-4f0f-8ada-007ac6b4610e-goldmane-key-pair\") pod \"goldmane-768f4c5c69-pqdm4\" (UID: \"e0f120ef-124f-4f0f-8ada-007ac6b4610e\") " pod="calico-system/goldmane-768f4c5c69-pqdm4" Jul 12 00:08:01.902617 kubelet[2586]: I0712 00:08:01.902542 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7-tigera-ca-bundle\") pod \"calico-kube-controllers-6ccd79f89c-9zkg8\" (UID: \"38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7\") " 
pod="calico-system/calico-kube-controllers-6ccd79f89c-9zkg8" Jul 12 00:08:01.902617 kubelet[2586]: I0712 00:08:01.902568 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp579\" (UniqueName: \"kubernetes.io/projected/2a54327b-26b7-4764-93f1-2d3f8be7ff94-kube-api-access-zp579\") pod \"calico-apiserver-694bf789d4-jsvl4\" (UID: \"2a54327b-26b7-4764-93f1-2d3f8be7ff94\") " pod="calico-apiserver/calico-apiserver-694bf789d4-jsvl4" Jul 12 00:08:02.143768 containerd[1476]: time="2025-07-12T00:08:02.143599706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694bf789d4-jsvl4,Uid:2a54327b-26b7-4764-93f1-2d3f8be7ff94,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:08:02.159342 containerd[1476]: time="2025-07-12T00:08:02.158046309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74957578d7-gb4g4,Uid:e5057915-c048-4fff-b278-81f27d624590,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:02.181777 containerd[1476]: time="2025-07-12T00:08:02.180651593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccd79f89c-9zkg8,Uid:38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:02.182469 containerd[1476]: time="2025-07-12T00:08:02.182435994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694bf789d4-flhzn,Uid:e5b5b2b4-7d84-4c99-896a-91d48632272f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:08:02.195248 containerd[1476]: time="2025-07-12T00:08:02.195136476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-pqdm4,Uid:e0f120ef-124f-4f0f-8ada-007ac6b4610e,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:02.318656 containerd[1476]: time="2025-07-12T00:08:02.318608661Z" level=error msg="Failed to destroy network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.319141 containerd[1476]: time="2025-07-12T00:08:02.318945821Z" level=error msg="encountered an error cleaning up failed sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.319141 containerd[1476]: time="2025-07-12T00:08:02.318999861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74957578d7-gb4g4,Uid:e5057915-c048-4fff-b278-81f27d624590,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.319329 kubelet[2586]: E0712 00:08:02.319220 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 12 00:08:02.319329 kubelet[2586]: E0712 00:08:02.319289 2586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74957578d7-gb4g4" Jul 12 00:08:02.319329 kubelet[2586]: E0712 00:08:02.319307 2586 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74957578d7-gb4g4" Jul 12 00:08:02.319413 kubelet[2586]: E0712 00:08:02.319343 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74957578d7-gb4g4_calico-system(e5057915-c048-4fff-b278-81f27d624590)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74957578d7-gb4g4_calico-system(e5057915-c048-4fff-b278-81f27d624590)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74957578d7-gb4g4" podUID="e5057915-c048-4fff-b278-81f27d624590" Jul 12 00:08:02.330333 containerd[1476]: time="2025-07-12T00:08:02.330280703Z" level=error msg="Failed to destroy network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.330456 containerd[1476]: time="2025-07-12T00:08:02.330407543Z" level=error msg="Failed to destroy network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.331708 containerd[1476]: time="2025-07-12T00:08:02.331670064Z" level=error msg="encountered an error cleaning up failed sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.331801 containerd[1476]: time="2025-07-12T00:08:02.331741944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694bf789d4-jsvl4,Uid:2a54327b-26b7-4764-93f1-2d3f8be7ff94,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 12 00:08:02.332521 kubelet[2586]: E0712 00:08:02.331937 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.332521 kubelet[2586]: E0712 00:08:02.331988 2586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694bf789d4-jsvl4" Jul 12 00:08:02.332521 kubelet[2586]: E0712 00:08:02.332008 2586 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694bf789d4-jsvl4" Jul 12 00:08:02.332665 kubelet[2586]: E0712 00:08:02.332075 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-694bf789d4-jsvl4_calico-apiserver(2a54327b-26b7-4764-93f1-2d3f8be7ff94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-694bf789d4-jsvl4_calico-apiserver(2a54327b-26b7-4764-93f1-2d3f8be7ff94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694bf789d4-jsvl4" podUID="2a54327b-26b7-4764-93f1-2d3f8be7ff94" Jul 12 00:08:02.334029 containerd[1476]: time="2025-07-12T00:08:02.333994184Z" level=error msg="encountered an error cleaning up failed sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.334239 containerd[1476]: time="2025-07-12T00:08:02.334128624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccd79f89c-9zkg8,Uid:38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.334692 kubelet[2586]: E0712 00:08:02.334659 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.334766 kubelet[2586]: E0712 00:08:02.334714 2586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ccd79f89c-9zkg8" Jul 12 00:08:02.334766 kubelet[2586]: E0712 00:08:02.334732 2586 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ccd79f89c-9zkg8" Jul 12 00:08:02.334828 kubelet[2586]: E0712 00:08:02.334778 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ccd79f89c-9zkg8_calico-system(38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6ccd79f89c-9zkg8_calico-system(38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ccd79f89c-9zkg8" podUID="38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7" Jul 12 00:08:02.364359 containerd[1476]: time="2025-07-12T00:08:02.364125990Z" level=error msg="Failed to destroy network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.365466 containerd[1476]: time="2025-07-12T00:08:02.365021990Z" level=error msg="encountered an error cleaning up failed sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.365466 containerd[1476]: time="2025-07-12T00:08:02.365387150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-pqdm4,Uid:e0f120ef-124f-4f0f-8ada-007ac6b4610e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.367015 kubelet[2586]: E0712 
00:08:02.366120 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.367015 kubelet[2586]: E0712 00:08:02.366192 2586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-pqdm4" Jul 12 00:08:02.367015 kubelet[2586]: E0712 00:08:02.366283 2586 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-pqdm4" Jul 12 00:08:02.367203 kubelet[2586]: E0712 00:08:02.366339 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-pqdm4_calico-system(e0f120ef-124f-4f0f-8ada-007ac6b4610e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-pqdm4_calico-system(e0f120ef-124f-4f0f-8ada-007ac6b4610e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-pqdm4" podUID="e0f120ef-124f-4f0f-8ada-007ac6b4610e" Jul 12 00:08:02.367297 containerd[1476]: time="2025-07-12T00:08:02.367122831Z" level=error msg="Failed to destroy network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.367539 containerd[1476]: time="2025-07-12T00:08:02.367464591Z" level=error msg="encountered an error cleaning up failed sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.367539 containerd[1476]: time="2025-07-12T00:08:02.367522831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694bf789d4-flhzn,Uid:e5b5b2b4-7d84-4c99-896a-91d48632272f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jul 12 00:08:02.368469 kubelet[2586]: E0712 00:08:02.367728 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.368469 kubelet[2586]: E0712 00:08:02.367787 2586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694bf789d4-flhzn" Jul 12 00:08:02.368469 kubelet[2586]: E0712 00:08:02.367805 2586 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694bf789d4-flhzn" Jul 12 00:08:02.368680 kubelet[2586]: E0712 00:08:02.367848 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-694bf789d4-flhzn_calico-apiserver(e5b5b2b4-7d84-4c99-896a-91d48632272f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-694bf789d4-flhzn_calico-apiserver(e5b5b2b4-7d84-4c99-896a-91d48632272f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694bf789d4-flhzn" podUID="e5b5b2b4-7d84-4c99-896a-91d48632272f" Jul 12 00:08:02.384806 systemd[1]: Created slice kubepods-besteffort-podb6e2f19f_7174_405d_8aeb_93e33315aa19.slice - libcontainer container kubepods-besteffort-podb6e2f19f_7174_405d_8aeb_93e33315aa19.slice. 
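Every RunPodSandbox failure above bottoms out in the same stat: the Calico CNI plugin will not add or delete a pod network until /var/lib/calico/nodename exists, and that file is only written once the calico/node container is running (the pull for ghcr.io/flatcar/calico/node:v3.30.2 is only requested a moment later in this log). A tiny probe reproducing the check, under the assumption that the plugin simply reads that path:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// calico/node writes its node name here at startup; until then every
	// CNI add/delete fails with the stat error seen in the entries above.
	name, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		fmt.Println("calico/node not ready:", err) // matches the logged failure mode
		os.Exit(1)
	}
	fmt.Println("node name:", string(name))
}
```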
Jul 12 00:08:02.388956 containerd[1476]: time="2025-07-12T00:08:02.388919915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mp89,Uid:b6e2f19f-7174-405d-8aeb-93e33315aa19,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:02.455797 containerd[1476]: time="2025-07-12T00:08:02.455471529Z" level=error msg="Failed to destroy network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.457544 containerd[1476]: time="2025-07-12T00:08:02.457425769Z" level=error msg="encountered an error cleaning up failed sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.457807 containerd[1476]: time="2025-07-12T00:08:02.457536049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mp89,Uid:b6e2f19f-7174-405d-8aeb-93e33315aa19,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.457939 kubelet[2586]: E0712 00:08:02.457884 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.458125 kubelet[2586]: E0712 00:08:02.457962 2586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4mp89" Jul 12 00:08:02.458125 kubelet[2586]: E0712 00:08:02.457994 2586 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4mp89" Jul 12 00:08:02.458125 kubelet[2586]: E0712 00:08:02.458096 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4mp89_calico-system(b6e2f19f-7174-405d-8aeb-93e33315aa19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4mp89_calico-system(b6e2f19f-7174-405d-8aeb-93e33315aa19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4mp89" podUID="b6e2f19f-7174-405d-8aeb-93e33315aa19" Jul 12 00:08:02.506300 kubelet[2586]: I0712 00:08:02.505784 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:02.508636 containerd[1476]: time="2025-07-12T00:08:02.507946179Z" level=info msg="StopPodSandbox for \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\"" Jul 12 00:08:02.508636 containerd[1476]: time="2025-07-12T00:08:02.508184139Z" level=info msg="Ensure that sandbox ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc in task-service has been cleanup successfully" Jul 12 00:08:02.516399 containerd[1476]: time="2025-07-12T00:08:02.516363621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:08:02.516741 kubelet[2586]: I0712 00:08:02.516708 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:02.520369 containerd[1476]: time="2025-07-12T00:08:02.517916581Z" level=info msg="StopPodSandbox for \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\"" Jul 12 00:08:02.520369 containerd[1476]: time="2025-07-12T00:08:02.518108141Z" level=info msg="Ensure that sandbox dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49 in task-service has been cleanup successfully" Jul 12 00:08:02.525263 kubelet[2586]: I0712 00:08:02.523429 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:02.526355 containerd[1476]: time="2025-07-12T00:08:02.526309023Z" level=info msg="StopPodSandbox for \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\"" Jul 12 00:08:02.526508 containerd[1476]: time="2025-07-12T00:08:02.526484983Z" level=info msg="Ensure that sandbox 57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e in task-service has been cleanup successfully" Jul 12 00:08:02.533824 kubelet[2586]: I0712 00:08:02.532352 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:02.534125 containerd[1476]: time="2025-07-12T00:08:02.534083624Z" level=info msg="StopPodSandbox for \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\"" Jul 12 00:08:02.534749 containerd[1476]: time="2025-07-12T00:08:02.534716664Z" level=info msg="Ensure that sandbox 052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f in task-service has been cleanup successfully" Jul 12 00:08:02.548560 kubelet[2586]: I0712 00:08:02.547303 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:02.549098 containerd[1476]: time="2025-07-12T00:08:02.549023947Z" level=info msg="StopPodSandbox for \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\"" Jul 12 00:08:02.549864 containerd[1476]: time="2025-07-12T00:08:02.549637987Z" level=info msg="Ensure that sandbox 
9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c in task-service has been cleanup successfully" Jul 12 00:08:02.569360 kubelet[2586]: I0712 00:08:02.569323 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:02.570178 containerd[1476]: time="2025-07-12T00:08:02.570109551Z" level=info msg="StopPodSandbox for \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\"" Jul 12 00:08:02.570635 containerd[1476]: time="2025-07-12T00:08:02.570305472Z" level=info msg="Ensure that sandbox 8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5 in task-service has been cleanup successfully" Jul 12 00:08:02.614717 containerd[1476]: time="2025-07-12T00:08:02.614493840Z" level=error msg="StopPodSandbox for \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\" failed" error="failed to destroy network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.614885 kubelet[2586]: E0712 00:08:02.614734 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:02.614885 kubelet[2586]: E0712 00:08:02.614818 2586 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc"} Jul 12 00:08:02.614885 kubelet[2586]: E0712 00:08:02.614871 2586 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2a54327b-26b7-4764-93f1-2d3f8be7ff94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:02.616587 kubelet[2586]: E0712 00:08:02.614891 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2a54327b-26b7-4764-93f1-2d3f8be7ff94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694bf789d4-jsvl4" podUID="2a54327b-26b7-4764-93f1-2d3f8be7ff94" Jul 12 00:08:02.630412 containerd[1476]: time="2025-07-12T00:08:02.630352124Z" level=error msg="StopPodSandbox for \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\" failed" error="failed to destroy network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.630920 containerd[1476]: time="2025-07-12T00:08:02.630538364Z" level=error msg="StopPodSandbox for \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\" failed" error="failed to destroy network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.630977 kubelet[2586]: E0712 00:08:02.630577 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:02.630977 kubelet[2586]: E0712 00:08:02.630625 2586 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49"} Jul 12 00:08:02.630977 kubelet[2586]: E0712 00:08:02.630662 2586 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6e2f19f-7174-405d-8aeb-93e33315aa19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:02.630977 kubelet[2586]: E0712 00:08:02.630682 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6e2f19f-7174-405d-8aeb-93e33315aa19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4mp89" podUID="b6e2f19f-7174-405d-8aeb-93e33315aa19" Jul 12 00:08:02.631264 kubelet[2586]: E0712 00:08:02.630737 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:02.631264 kubelet[2586]: E0712 00:08:02.630754 2586 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e"} Jul 12 00:08:02.631264 kubelet[2586]: E0712 00:08:02.630770 2586 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5b5b2b4-7d84-4c99-896a-91d48632272f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:02.631264 kubelet[2586]: E0712 00:08:02.630783 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5b5b2b4-7d84-4c99-896a-91d48632272f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694bf789d4-flhzn" podUID="e5b5b2b4-7d84-4c99-896a-91d48632272f" Jul 12 00:08:02.632707 containerd[1476]: time="2025-07-12T00:08:02.632657964Z" level=error msg="StopPodSandbox for \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\" failed" error="failed to destroy network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.633323 kubelet[2586]: E0712 00:08:02.633003 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:02.633323 kubelet[2586]: E0712 00:08:02.633135 2586 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f"} Jul 12 00:08:02.633323 kubelet[2586]: E0712 00:08:02.633171 2586 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0f120ef-124f-4f0f-8ada-007ac6b4610e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:02.633323 kubelet[2586]: E0712 00:08:02.633202 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0f120ef-124f-4f0f-8ada-007ac6b4610e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-pqdm4" podUID="e0f120ef-124f-4f0f-8ada-007ac6b4610e" Jul 12 00:08:02.638977 containerd[1476]: time="2025-07-12T00:08:02.638467125Z" level=error 
msg="StopPodSandbox for \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\" failed" error="failed to destroy network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.639177 kubelet[2586]: E0712 00:08:02.638736 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:02.639177 kubelet[2586]: E0712 00:08:02.638809 2586 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c"} Jul 12 00:08:02.639177 kubelet[2586]: E0712 00:08:02.638873 2586 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:02.639177 kubelet[2586]: E0712 00:08:02.638917 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ccd79f89c-9zkg8" podUID="38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7" Jul 12 00:08:02.642451 containerd[1476]: time="2025-07-12T00:08:02.642328086Z" level=error msg="StopPodSandbox for \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\" failed" error="failed to destroy network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:02.642734 kubelet[2586]: E0712 00:08:02.642635 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:02.642734 kubelet[2586]: E0712 00:08:02.642683 2586 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5"} Jul 12 00:08:02.642734 kubelet[2586]: E0712 00:08:02.642713 2586 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5057915-c048-4fff-b278-81f27d624590\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:02.643015 kubelet[2586]: E0712 00:08:02.642738 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5057915-c048-4fff-b278-81f27d624590\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74957578d7-gb4g4" podUID="e5057915-c048-4fff-b278-81f27d624590" Jul 12 00:08:03.006387 kubelet[2586]: E0712 00:08:03.005428 2586 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 12 00:08:03.006387 kubelet[2586]: E0712 00:08:03.005534 2586 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/061bde8d-e1c8-4411-baca-b18c385b32b1-config-volume podName:061bde8d-e1c8-4411-baca-b18c385b32b1 nodeName:}" failed. No retries permitted until 2025-07-12 00:08:03.505508759 +0000 UTC m=+37.275361133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/061bde8d-e1c8-4411-baca-b18c385b32b1-config-volume") pod "coredns-668d6bf9bc-6tkc8" (UID: "061bde8d-e1c8-4411-baca-b18c385b32b1") : failed to sync configmap cache: timed out waiting for the condition Jul 12 00:08:03.009038 kubelet[2586]: E0712 00:08:03.008962 2586 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 12 00:08:03.009224 kubelet[2586]: E0712 00:08:03.009096 2586 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b42f234-5a87-408f-b8c4-2ac2eed39fd7-config-volume podName:0b42f234-5a87-408f-b8c4-2ac2eed39fd7 nodeName:}" failed. No retries permitted until 2025-07-12 00:08:03.509030119 +0000 UTC m=+37.278882493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b42f234-5a87-408f-b8c4-2ac2eed39fd7-config-volume") pod "coredns-668d6bf9bc-fmlsv" (UID: "0b42f234-5a87-408f-b8c4-2ac2eed39fd7") : failed to sync configmap cache: timed out waiting for the condition Jul 12 00:08:03.057542 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5-shm.mount: Deactivated successfully. Jul 12 00:08:03.057812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc-shm.mount: Deactivated successfully. 
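Every failed StopPodSandbox above shares one root cause: the Calico CNI plugin stats /var/lib/calico/nodename before performing any add or delete, and that file is written by the calico/node container when it starts. Until that container is up (its image pull completes at 00:08:09 below), every CNI call aborts with the error repeated throughout this window. The coredns configmap mount failures at 00:08:03 are a separate, transient cache-sync timeout that kubelet retries after a 500ms backoff. A minimal sketch of that readiness gate in Go, modeled on the error text rather than taken from Calico's actual source:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodename mirrors the check the plugin performs: read the file that
// calico/node writes at startup, and fail with a descriptive error
// while it is absent.
func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("CNI ADD/DEL would fail here:", err)
		return
	}
	fmt.Println("node:", name)
}
```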
Jul 12 00:08:03.600977 containerd[1476]: time="2025-07-12T00:08:03.600814036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tkc8,Uid:061bde8d-e1c8-4411-baca-b18c385b32b1,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:03.631194 containerd[1476]: time="2025-07-12T00:08:03.630545282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmlsv,Uid:0b42f234-5a87-408f-b8c4-2ac2eed39fd7,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:03.686527 containerd[1476]: time="2025-07-12T00:08:03.686479573Z" level=error msg="Failed to destroy network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:03.686904 containerd[1476]: time="2025-07-12T00:08:03.686851653Z" level=error msg="encountered an error cleaning up failed sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:03.686980 containerd[1476]: time="2025-07-12T00:08:03.686909613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tkc8,Uid:061bde8d-e1c8-4411-baca-b18c385b32b1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:03.688093 kubelet[2586]: E0712 00:08:03.687720 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:03.688093 kubelet[2586]: E0712 00:08:03.687849 2586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6tkc8" Jul 12 00:08:03.688093 kubelet[2586]: E0712 00:08:03.687873 2586 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6tkc8" Jul 12 00:08:03.688514 kubelet[2586]: E0712 00:08:03.687932 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6tkc8_kube-system(061bde8d-e1c8-4411-baca-b18c385b32b1)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-668d6bf9bc-6tkc8_kube-system(061bde8d-e1c8-4411-baca-b18c385b32b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6tkc8" podUID="061bde8d-e1c8-4411-baca-b18c385b32b1" Jul 12 00:08:03.716875 containerd[1476]: time="2025-07-12T00:08:03.716627499Z" level=error msg="Failed to destroy network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:03.717682 containerd[1476]: time="2025-07-12T00:08:03.717552099Z" level=error msg="encountered an error cleaning up failed sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:03.718452 containerd[1476]: time="2025-07-12T00:08:03.718241699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmlsv,Uid:0b42f234-5a87-408f-b8c4-2ac2eed39fd7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:03.718615 kubelet[2586]: E0712 00:08:03.718544 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:03.718821 kubelet[2586]: E0712 00:08:03.718613 2586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fmlsv" Jul 12 00:08:03.718821 kubelet[2586]: E0712 00:08:03.718645 2586 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fmlsv" Jul 12 00:08:03.718821 kubelet[2586]: E0712 00:08:03.718720 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-fmlsv_kube-system(0b42f234-5a87-408f-b8c4-2ac2eed39fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fmlsv_kube-system(0b42f234-5a87-408f-b8c4-2ac2eed39fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fmlsv" podUID="0b42f234-5a87-408f-b8c4-2ac2eed39fd7" Jul 12 00:08:04.053732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3-shm.mount: Deactivated successfully. Jul 12 00:08:04.053939 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008-shm.mount: Deactivated successfully. Jul 12 00:08:04.580541 kubelet[2586]: I0712 00:08:04.580469 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:04.582107 containerd[1476]: time="2025-07-12T00:08:04.581529027Z" level=info msg="StopPodSandbox for \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\"" Jul 12 00:08:04.582107 containerd[1476]: time="2025-07-12T00:08:04.581815067Z" level=info msg="Ensure that sandbox 2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3 in task-service has been cleanup successfully" Jul 12 00:08:04.584996 kubelet[2586]: I0712 00:08:04.584936 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:04.587438 containerd[1476]: time="2025-07-12T00:08:04.587364388Z" level=info msg="StopPodSandbox for \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\"" Jul 12 00:08:04.587622 containerd[1476]: time="2025-07-12T00:08:04.587574108Z" level=info msg="Ensure that sandbox f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008 in task-service has been cleanup successfully" Jul 12 00:08:04.625165 containerd[1476]: time="2025-07-12T00:08:04.624143516Z" level=error msg="StopPodSandbox for \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\" failed" error="failed to destroy network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:04.627297 kubelet[2586]: E0712 00:08:04.627140 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:04.627297 kubelet[2586]: E0712 00:08:04.627191 2586 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008"} Jul 12 00:08:04.627297 kubelet[2586]: E0712 
00:08:04.627240 2586 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"061bde8d-e1c8-4411-baca-b18c385b32b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:04.627297 kubelet[2586]: E0712 00:08:04.627263 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"061bde8d-e1c8-4411-baca-b18c385b32b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6tkc8" podUID="061bde8d-e1c8-4411-baca-b18c385b32b1" Jul 12 00:08:04.627815 containerd[1476]: time="2025-07-12T00:08:04.627756396Z" level=error msg="StopPodSandbox for \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\" failed" error="failed to destroy network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:04.628602 kubelet[2586]: E0712 00:08:04.628461 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:04.628602 kubelet[2586]: E0712 00:08:04.628507 2586 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3"} Jul 12 00:08:04.628602 kubelet[2586]: E0712 00:08:04.628538 2586 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b42f234-5a87-408f-b8c4-2ac2eed39fd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:04.628602 kubelet[2586]: E0712 00:08:04.628557 2586 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b42f234-5a87-408f-b8c4-2ac2eed39fd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-fmlsv" podUID="0b42f234-5a87-408f-b8c4-2ac2eed39fd7" Jul 12 00:08:09.561189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585326689.mount: Deactivated successfully. Jul 12 00:08:09.588009 containerd[1476]: time="2025-07-12T00:08:09.587934043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:09.589104 containerd[1476]: time="2025-07-12T00:08:09.588930003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:08:09.590347 containerd[1476]: time="2025-07-12T00:08:09.590246923Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:09.594359 containerd[1476]: time="2025-07-12T00:08:09.594307844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:09.595231 containerd[1476]: time="2025-07-12T00:08:09.594761084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 7.077834223s" Jul 12 00:08:09.595231 containerd[1476]: time="2025-07-12T00:08:09.594790164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:08:09.645236 containerd[1476]: time="2025-07-12T00:08:09.645061133Z" level=info msg="CreateContainer within sandbox \"00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:08:09.676547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158761205.mount: Deactivated successfully. Jul 12 00:08:09.683598 containerd[1476]: time="2025-07-12T00:08:09.683406500Z" level=info msg="CreateContainer within sandbox \"00cc2dd48bbb6da1d76604ea700d974fe4165f3fc09952749a5bf5ec587607bb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"24d10bf637c0d2a469876f829b200d1518c8c194abfbc75ae069036014e0ea3c\"" Jul 12 00:08:09.686723 containerd[1476]: time="2025-07-12T00:08:09.685330340Z" level=info msg="StartContainer for \"24d10bf637c0d2a469876f829b200d1518c8c194abfbc75ae069036014e0ea3c\"" Jul 12 00:08:09.725897 systemd[1]: Started cri-containerd-24d10bf637c0d2a469876f829b200d1518c8c194abfbc75ae069036014e0ea3c.scope - libcontainer container 24d10bf637c0d2a469876f829b200d1518c8c194abfbc75ae069036014e0ea3c. Jul 12 00:08:09.775546 containerd[1476]: time="2025-07-12T00:08:09.775352197Z" level=info msg="StartContainer for \"24d10bf637c0d2a469876f829b200d1518c8c194abfbc75ae069036014e0ea3c\" returns successfully" Jul 12 00:08:09.910817 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:08:09.911084 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
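With ghcr.io/flatcar/calico/node:v3.30.2 pulled after 7.08s and the calico-node container started (the kernel also loads the WireGuard module that calico-node can use for node-to-node encryption), the nodename file now exists, which is why the StopPodSandbox for the whisker sandbox at 00:08:10 below finally tears down cleanly. A hypothetical operator-side probe, not part of any component in this log, that waits for the same readiness signal:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the context expires.
func waitForFile(ctx context.Context, path string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // calico/node has written the file
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForFile(ctx, "/var/lib/calico/nodename", time.Second); err != nil {
		fmt.Println("calico/node never became ready:", err)
		return
	}
	fmt.Println("calico/node is up; sandbox setup should start succeeding")
}
```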
Jul 12 00:08:10.047284 containerd[1476]: time="2025-07-12T00:08:10.047244366Z" level=info msg="StopPodSandbox for \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\"" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.188 [INFO][3770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.188 [INFO][3770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" iface="eth0" netns="/var/run/netns/cni-0aa1cd48-a5de-063d-a6bf-bdfba8a8970e" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.189 [INFO][3770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" iface="eth0" netns="/var/run/netns/cni-0aa1cd48-a5de-063d-a6bf-bdfba8a8970e" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.190 [INFO][3770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" iface="eth0" netns="/var/run/netns/cni-0aa1cd48-a5de-063d-a6bf-bdfba8a8970e" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.190 [INFO][3770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.190 [INFO][3770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.236 [INFO][3783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.236 [INFO][3783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.236 [INFO][3783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.247 [WARNING][3783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.247 [INFO][3783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.249 [INFO][3783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:10.254347 containerd[1476]: 2025-07-12 00:08:10.251 [INFO][3770] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:10.254820 containerd[1476]: time="2025-07-12T00:08:10.254627723Z" level=info msg="TearDown network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\" successfully" Jul 12 00:08:10.260415 containerd[1476]: time="2025-07-12T00:08:10.260338644Z" level=info msg="StopPodSandbox for \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\" returns successfully" Jul 12 00:08:10.382701 kubelet[2586]: I0712 00:08:10.382146 2586 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e5057915-c048-4fff-b278-81f27d624590-whisker-backend-key-pair\") pod \"e5057915-c048-4fff-b278-81f27d624590\" (UID: \"e5057915-c048-4fff-b278-81f27d624590\") " Jul 12 00:08:10.382701 kubelet[2586]: I0712 00:08:10.382200 2586 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5057915-c048-4fff-b278-81f27d624590-whisker-ca-bundle\") pod \"e5057915-c048-4fff-b278-81f27d624590\" (UID: \"e5057915-c048-4fff-b278-81f27d624590\") " Jul 12 00:08:10.382701 kubelet[2586]: I0712 00:08:10.382240 2586 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4rv6\" (UniqueName: \"kubernetes.io/projected/e5057915-c048-4fff-b278-81f27d624590-kube-api-access-l4rv6\") pod \"e5057915-c048-4fff-b278-81f27d624590\" (UID: \"e5057915-c048-4fff-b278-81f27d624590\") " Jul 12 00:08:10.387925 kubelet[2586]: I0712 00:08:10.387875 2586 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5057915-c048-4fff-b278-81f27d624590-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e5057915-c048-4fff-b278-81f27d624590" (UID: "e5057915-c048-4fff-b278-81f27d624590"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:08:10.389245 kubelet[2586]: I0712 00:08:10.388468 2586 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5057915-c048-4fff-b278-81f27d624590-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e5057915-c048-4fff-b278-81f27d624590" (UID: "e5057915-c048-4fff-b278-81f27d624590"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:08:10.389922 kubelet[2586]: I0712 00:08:10.389856 2586 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5057915-c048-4fff-b278-81f27d624590-kube-api-access-l4rv6" (OuterVolumeSpecName: "kube-api-access-l4rv6") pod "e5057915-c048-4fff-b278-81f27d624590" (UID: "e5057915-c048-4fff-b278-81f27d624590"). InnerVolumeSpecName "kube-api-access-l4rv6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:08:10.482629 kubelet[2586]: I0712 00:08:10.482545 2586 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e5057915-c048-4fff-b278-81f27d624590-whisker-backend-key-pair\") on node \"ci-4081-3-4-n-f6981960e0\" DevicePath \"\"" Jul 12 00:08:10.482629 kubelet[2586]: I0712 00:08:10.482589 2586 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5057915-c048-4fff-b278-81f27d624590-whisker-ca-bundle\") on node \"ci-4081-3-4-n-f6981960e0\" DevicePath \"\"" Jul 12 00:08:10.482629 kubelet[2586]: I0712 00:08:10.482603 2586 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4rv6\" (UniqueName: \"kubernetes.io/projected/e5057915-c048-4fff-b278-81f27d624590-kube-api-access-l4rv6\") on node \"ci-4081-3-4-n-f6981960e0\" DevicePath \"\"" Jul 12 00:08:10.562912 systemd[1]: run-netns-cni\x2d0aa1cd48\x2da5de\x2d063d\x2da6bf\x2dbdfba8a8970e.mount: Deactivated successfully. Jul 12 00:08:10.563075 systemd[1]: var-lib-kubelet-pods-e5057915\x2dc048\x2d4fff\x2db278\x2d81f27d624590-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4rv6.mount: Deactivated successfully. Jul 12 00:08:10.563261 systemd[1]: var-lib-kubelet-pods-e5057915\x2dc048\x2d4fff\x2db278\x2d81f27d624590-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:08:10.624557 systemd[1]: Removed slice kubepods-besteffort-pode5057915_c048_4fff_b278_81f27d624590.slice - libcontainer container kubepods-besteffort-pode5057915_c048_4fff_b278_81f27d624590.slice. Jul 12 00:08:10.645214 kubelet[2586]: I0712 00:08:10.645065 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jhlqb" podStartSLOduration=1.7006743069999999 podStartE2EDuration="18.645030673s" podCreationTimestamp="2025-07-12 00:07:52 +0000 UTC" firstStartedPulling="2025-07-12 00:07:52.681004884 +0000 UTC m=+26.450857258" lastFinishedPulling="2025-07-12 00:08:09.62536125 +0000 UTC m=+43.395213624" observedRunningTime="2025-07-12 00:08:10.644408993 +0000 UTC m=+44.414261367" watchObservedRunningTime="2025-07-12 00:08:10.645030673 +0000 UTC m=+44.414883047" Jul 12 00:08:10.729361 systemd[1]: Created slice kubepods-besteffort-poda8243cc5_6a4b_49e7_8e79_7ffa69495793.slice - libcontainer container kubepods-besteffort-poda8243cc5_6a4b_49e7_8e79_7ffa69495793.slice. 
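The pod_startup_latency_tracker line above decomposes cleanly: podStartSLOduration (≈1.70s) is podStartE2EDuration (≈18.645s) minus the time spent pulling images, firstStartedPulling to lastFinishedPulling (≈16.944s). A small sketch reproducing the arithmetic from the log's own timestamps (variable names are ours, not kubelet's):

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-12 00:07:52 +0000 UTC")              // podCreationTimestamp
	firstPull := mustParse("2025-07-12 00:07:52.681004884 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-07-12 00:08:09.62536125 +0000 UTC")   // lastFinishedPulling
	running := mustParse("2025-07-12 00:08:10.644408993 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)     // ~18.645s; kubelet's own figure uses its monotonic
	pull := lastPull.Sub(firstPull) // clock, so it differs by under a millisecond
	slo := e2e - pull               // ~1.70s: E2E duration with image pulls excluded
	fmt.Printf("e2e=%v pull=%v slo=%v\n", e2e, pull, slo)
}
```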
Jul 12 00:08:10.786480 kubelet[2586]: I0712 00:08:10.786375 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvnt\" (UniqueName: \"kubernetes.io/projected/a8243cc5-6a4b-49e7-8e79-7ffa69495793-kube-api-access-9xvnt\") pod \"whisker-7bf64cfb9c-pbtft\" (UID: \"a8243cc5-6a4b-49e7-8e79-7ffa69495793\") " pod="calico-system/whisker-7bf64cfb9c-pbtft" Jul 12 00:08:10.786480 kubelet[2586]: I0712 00:08:10.786423 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8243cc5-6a4b-49e7-8e79-7ffa69495793-whisker-ca-bundle\") pod \"whisker-7bf64cfb9c-pbtft\" (UID: \"a8243cc5-6a4b-49e7-8e79-7ffa69495793\") " pod="calico-system/whisker-7bf64cfb9c-pbtft" Jul 12 00:08:10.786480 kubelet[2586]: I0712 00:08:10.786449 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a8243cc5-6a4b-49e7-8e79-7ffa69495793-whisker-backend-key-pair\") pod \"whisker-7bf64cfb9c-pbtft\" (UID: \"a8243cc5-6a4b-49e7-8e79-7ffa69495793\") " pod="calico-system/whisker-7bf64cfb9c-pbtft" Jul 12 00:08:11.034882 containerd[1476]: time="2025-07-12T00:08:11.034724182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bf64cfb9c-pbtft,Uid:a8243cc5-6a4b-49e7-8e79-7ffa69495793,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:11.182336 systemd-networkd[1379]: cali124d1b2c8c0: Link UP Jul 12 00:08:11.183557 systemd-networkd[1379]: cali124d1b2c8c0: Gained carrier Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.073 [INFO][3824] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.092 [INFO][3824] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0 whisker-7bf64cfb9c- calico-system a8243cc5-6a4b-49e7-8e79-7ffa69495793 922 0 2025-07-12 00:08:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bf64cfb9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-n-f6981960e0 whisker-7bf64cfb9c-pbtft eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali124d1b2c8c0 [] [] }} ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Namespace="calico-system" Pod="whisker-7bf64cfb9c-pbtft" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.092 [INFO][3824] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Namespace="calico-system" Pod="whisker-7bf64cfb9c-pbtft" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.119 [INFO][3837] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" HandleID="k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.119 [INFO][3837] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" HandleID="k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-f6981960e0", "pod":"whisker-7bf64cfb9c-pbtft", "timestamp":"2025-07-12 00:08:11.119595637 +0000 UTC"}, Hostname:"ci-4081-3-4-n-f6981960e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.119 [INFO][3837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.119 [INFO][3837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.119 [INFO][3837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-f6981960e0' Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.132 [INFO][3837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.139 [INFO][3837] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.147 [INFO][3837] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.149 [INFO][3837] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.153 [INFO][3837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.153 [INFO][3837] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.155 [INFO][3837] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0 Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.161 [INFO][3837] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.169 [INFO][3837] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.65/26] block=192.168.59.64/26 handle="k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.169 [INFO][3837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.65/26] handle="k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:11.206452 containerd[1476]: 
2025-07-12 00:08:11.169 [INFO][3837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:11.206452 containerd[1476]: 2025-07-12 00:08:11.169 [INFO][3837] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.65/26] IPv6=[] ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" HandleID="k8s-pod-network.f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" Jul 12 00:08:11.207173 containerd[1476]: 2025-07-12 00:08:11.173 [INFO][3824] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Namespace="calico-system" Pod="whisker-7bf64cfb9c-pbtft" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0", GenerateName:"whisker-7bf64cfb9c-", Namespace:"calico-system", SelfLink:"", UID:"a8243cc5-6a4b-49e7-8e79-7ffa69495793", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bf64cfb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"", Pod:"whisker-7bf64cfb9c-pbtft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali124d1b2c8c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:11.207173 containerd[1476]: 2025-07-12 00:08:11.173 [INFO][3824] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.65/32] ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Namespace="calico-system" Pod="whisker-7bf64cfb9c-pbtft" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" Jul 12 00:08:11.207173 containerd[1476]: 2025-07-12 00:08:11.173 [INFO][3824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali124d1b2c8c0 ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Namespace="calico-system" Pod="whisker-7bf64cfb9c-pbtft" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" Jul 12 00:08:11.207173 containerd[1476]: 2025-07-12 00:08:11.183 [INFO][3824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Namespace="calico-system" Pod="whisker-7bf64cfb9c-pbtft" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" Jul 12 00:08:11.207173 containerd[1476]: 2025-07-12 00:08:11.184 [INFO][3824] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Namespace="calico-system" Pod="whisker-7bf64cfb9c-pbtft" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0", GenerateName:"whisker-7bf64cfb9c-", Namespace:"calico-system", SelfLink:"", UID:"a8243cc5-6a4b-49e7-8e79-7ffa69495793", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bf64cfb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0", Pod:"whisker-7bf64cfb9c-pbtft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali124d1b2c8c0", MAC:"7a:e9:8b:f4:58:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:11.207173 containerd[1476]: 2025-07-12 00:08:11.204 [INFO][3824] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0" Namespace="calico-system" Pod="whisker-7bf64cfb9c-pbtft" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--7bf64cfb9c--pbtft-eth0" Jul 12 00:08:11.229462 containerd[1476]: time="2025-07-12T00:08:11.229337897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:11.229675 containerd[1476]: time="2025-07-12T00:08:11.229464377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:11.229675 containerd[1476]: time="2025-07-12T00:08:11.229507977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:11.229675 containerd[1476]: time="2025-07-12T00:08:11.229667297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:11.250472 systemd[1]: Started cri-containerd-f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0.scope - libcontainer container f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0.
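The ADD trace above is Calico's block-affine IPAM at work: the node holds an affinity for 192.168.59.64/26, loads the block, and claims the first free address, 192.168.59.65, under a newly created handle for the whisker pod. An illustrative first-free walk over that block using Go's net/netip, deliberately omitting the handle bookkeeping and datastore writes Calico actually performs:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks the host-affine block and returns the first address
// not yet handed out, skipping the block's own network address.
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.59.64/26") // affine block from the trace
	allocated := map[netip.Addr]bool{}                 // nothing claimed on this node yet

	addr, ok := firstFree(block, allocated)
	if !ok {
		panic("block exhausted")
	}
	allocated[addr] = true
	fmt.Println("assigned", addr) // 192.168.59.65, matching the claim in the trace
}
```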
Jul 12 00:08:11.284425 containerd[1476]: time="2025-07-12T00:08:11.284378907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bf64cfb9c-pbtft,Uid:a8243cc5-6a4b-49e7-8e79-7ffa69495793,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0\"" Jul 12 00:08:11.289536 containerd[1476]: time="2025-07-12T00:08:11.289453387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:08:11.648095 systemd[1]: run-containerd-runc-k8s.io-24d10bf637c0d2a469876f829b200d1518c8c194abfbc75ae069036014e0ea3c-runc.Hd7UWM.mount: Deactivated successfully. Jul 12 00:08:11.993630 kernel: bpftool[4030]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:08:12.171869 systemd-networkd[1379]: vxlan.calico: Link UP Jul 12 00:08:12.171877 systemd-networkd[1379]: vxlan.calico: Gained carrier Jul 12 00:08:12.401789 kubelet[2586]: I0712 00:08:12.401633 2586 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5057915-c048-4fff-b278-81f27d624590" path="/var/lib/kubelet/pods/e5057915-c048-4fff-b278-81f27d624590/volumes" Jul 12 00:08:12.648589 systemd-networkd[1379]: cali124d1b2c8c0: Gained IPv6LL Jul 12 00:08:12.846646 containerd[1476]: time="2025-07-12T00:08:12.846464541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:12.847783 containerd[1476]: time="2025-07-12T00:08:12.847721222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:08:12.848610 containerd[1476]: time="2025-07-12T00:08:12.848507382Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:12.851733 containerd[1476]: time="2025-07-12T00:08:12.851668782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:12.853199 containerd[1476]: time="2025-07-12T00:08:12.853068302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.563569755s" Jul 12 00:08:12.853199 containerd[1476]: time="2025-07-12T00:08:12.853106982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:08:12.857346 containerd[1476]: time="2025-07-12T00:08:12.857182503Z" level=info msg="CreateContainer within sandbox \"f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:08:12.871799 containerd[1476]: time="2025-07-12T00:08:12.871641626Z" level=info msg="CreateContainer within sandbox \"f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0b2a1118857865877953dd28c448a3b0f47880635031298221bf658f2c2f6610\"" Jul 12 00:08:12.873348 containerd[1476]: time="2025-07-12T00:08:12.873301666Z" 
level=info msg="StartContainer for \"0b2a1118857865877953dd28c448a3b0f47880635031298221bf658f2c2f6610\"" Jul 12 00:08:12.913552 systemd[1]: Started cri-containerd-0b2a1118857865877953dd28c448a3b0f47880635031298221bf658f2c2f6610.scope - libcontainer container 0b2a1118857865877953dd28c448a3b0f47880635031298221bf658f2c2f6610. Jul 12 00:08:12.967077 containerd[1476]: time="2025-07-12T00:08:12.966653522Z" level=info msg="StartContainer for \"0b2a1118857865877953dd28c448a3b0f47880635031298221bf658f2c2f6610\" returns successfully" Jul 12 00:08:12.970905 containerd[1476]: time="2025-07-12T00:08:12.970184163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:08:13.352913 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Jul 12 00:08:13.375179 containerd[1476]: time="2025-07-12T00:08:13.374912913Z" level=info msg="StopPodSandbox for \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\"" Jul 12 00:08:13.376305 containerd[1476]: time="2025-07-12T00:08:13.375975153Z" level=info msg="StopPodSandbox for \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\"" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.443 [INFO][4159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.443 [INFO][4159] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" iface="eth0" netns="/var/run/netns/cni-1fd83352-7258-3ec6-4079-b2338d7b8528" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.445 [INFO][4159] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" iface="eth0" netns="/var/run/netns/cni-1fd83352-7258-3ec6-4079-b2338d7b8528" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.446 [INFO][4159] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" iface="eth0" netns="/var/run/netns/cni-1fd83352-7258-3ec6-4079-b2338d7b8528" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.446 [INFO][4159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.446 [INFO][4159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.473 [INFO][4173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.473 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.473 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.486 [WARNING][4173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.486 [INFO][4173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.491 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:13.498160 containerd[1476]: 2025-07-12 00:08:13.494 [INFO][4159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:13.499559 containerd[1476]: time="2025-07-12T00:08:13.498802535Z" level=info msg="TearDown network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\" successfully" Jul 12 00:08:13.501467 containerd[1476]: time="2025-07-12T00:08:13.501271655Z" level=info msg="StopPodSandbox for \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\" returns successfully" Jul 12 00:08:13.502480 systemd[1]: run-netns-cni\x2d1fd83352\x2d7258\x2d3ec6\x2d4079\x2db2338d7b8528.mount: Deactivated successfully. Jul 12 00:08:13.504438 containerd[1476]: time="2025-07-12T00:08:13.503062095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694bf789d4-jsvl4,Uid:2a54327b-26b7-4764-93f1-2d3f8be7ff94,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.470 [INFO][4164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.470 [INFO][4164] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" iface="eth0" netns="/var/run/netns/cni-1c5c6669-b354-7c10-9685-e7dbcba8c9bc" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.470 [INFO][4164] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" iface="eth0" netns="/var/run/netns/cni-1c5c6669-b354-7c10-9685-e7dbcba8c9bc" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.470 [INFO][4164] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" iface="eth0" netns="/var/run/netns/cni-1c5c6669-b354-7c10-9685-e7dbcba8c9bc" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.470 [INFO][4164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.470 [INFO][4164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.508 [INFO][4179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.508 [INFO][4179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.508 [INFO][4179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.525 [WARNING][4179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.526 [INFO][4179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.529 [INFO][4179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:13.536954 containerd[1476]: 2025-07-12 00:08:13.535 [INFO][4164] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:13.537836 containerd[1476]: time="2025-07-12T00:08:13.537665701Z" level=info msg="TearDown network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\" successfully" Jul 12 00:08:13.537836 containerd[1476]: time="2025-07-12T00:08:13.537711141Z" level=info msg="StopPodSandbox for \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\" returns successfully" Jul 12 00:08:13.539099 containerd[1476]: time="2025-07-12T00:08:13.539060182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694bf789d4-flhzn,Uid:e5b5b2b4-7d84-4c99-896a-91d48632272f,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:08:13.709017 systemd-networkd[1379]: cali39a6f1d11b5: Link UP Jul 12 00:08:13.711164 systemd-networkd[1379]: cali39a6f1d11b5: Gained carrier Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.583 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0 calico-apiserver-694bf789d4- calico-apiserver 2a54327b-26b7-4764-93f1-2d3f8be7ff94 941 0 2025-07-12 00:07:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:694bf789d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-n-f6981960e0 calico-apiserver-694bf789d4-jsvl4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali39a6f1d11b5 [] [] }} ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-jsvl4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.583 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-jsvl4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.635 [INFO][4216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" HandleID="k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.635 [INFO][4216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" HandleID="k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-n-f6981960e0", "pod":"calico-apiserver-694bf789d4-jsvl4", "timestamp":"2025-07-12 00:08:13.635626718 +0000 UTC"}, Hostname:"ci-4081-3-4-n-f6981960e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.635 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.636 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.636 [INFO][4216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-f6981960e0' Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.648 [INFO][4216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.656 [INFO][4216] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.662 [INFO][4216] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.665 [INFO][4216] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.668 [INFO][4216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.668 [INFO][4216] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.670 [INFO][4216] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5 Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.682 [INFO][4216] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.689 [INFO][4216] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.66/26] block=192.168.59.64/26 handle="k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.689 [INFO][4216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.66/26] handle="k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.689 [INFO][4216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
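[The records above trace one Calico IPAM assignment end to end: acquire the host-wide lock, look up the host's affinities, confirm the affinity for block 192.168.59.64/26, load the block, claim a free ordinal (here 192.168.59.66), write the block back, release the lock. A minimal sketch of that block model follows, assuming a plain lowest-free-ordinal bitmap; the type and field names are invented for illustration and are not Calico's source.]

package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// block models a /26 IPAM block: 64 ordinals, one slot per address.
type block struct {
	cidr  netip.Prefix // e.g. 192.168.59.64/26
	inUse [64]bool     // allocation map: ordinal -> claimed
}

// assign claims the lowest free ordinal and returns its address.
// Calico performs the equivalent step under the host-wide IPAM
// lock seen in the log records above.
func (b *block) assign() (netip.Addr, error) {
	for ord := range b.inUse {
		if b.inUse[ord] {
			continue
		}
		b.inUse[ord] = true
		a := b.cidr.Addr()
		for i := 0; i < ord; i++ {
			a = a.Next()
		}
		return a, nil
	}
	return netip.Addr{}, errors.New("block full")
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.59.64/26")}
	b.inUse[0], b.inUse[1] = true, true // .64 and .65 already taken
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.59.66, matching the claim logged above
}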
Jul 12 00:08:13.733736 containerd[1476]: 2025-07-12 00:08:13.689 [INFO][4216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.66/26] IPv6=[] ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" HandleID="k8s-pod-network.8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.734623 containerd[1476]: 2025-07-12 00:08:13.693 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-jsvl4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0", GenerateName:"calico-apiserver-694bf789d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2a54327b-26b7-4764-93f1-2d3f8be7ff94", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694bf789d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"", Pod:"calico-apiserver-694bf789d4-jsvl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39a6f1d11b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:13.734623 containerd[1476]: 2025-07-12 00:08:13.693 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.66/32] ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-jsvl4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.734623 containerd[1476]: 2025-07-12 00:08:13.693 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39a6f1d11b5 ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-jsvl4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.734623 containerd[1476]: 2025-07-12 00:08:13.711 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-jsvl4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.734623 containerd[1476]: 2025-07-12 00:08:13.712 
[INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-jsvl4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0", GenerateName:"calico-apiserver-694bf789d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2a54327b-26b7-4764-93f1-2d3f8be7ff94", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694bf789d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5", Pod:"calico-apiserver-694bf789d4-jsvl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39a6f1d11b5", MAC:"d6:0c:97:4f:d3:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:13.734623 containerd[1476]: 2025-07-12 00:08:13.726 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-jsvl4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:13.768292 containerd[1476]: time="2025-07-12T00:08:13.766742661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:13.768292 containerd[1476]: time="2025-07-12T00:08:13.766890701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:13.768292 containerd[1476]: time="2025-07-12T00:08:13.767100701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:13.768292 containerd[1476]: time="2025-07-12T00:08:13.767354341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:13.789419 systemd[1]: Started cri-containerd-8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5.scope - libcontainer container 8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5. 
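[The "Started cri-containerd-<id>.scope" line is systemd starting the transient scope for the container's shim once the CRI plugin creates and starts the task. A rough client-side equivalent, sketched with containerd's public Go API: the container ID is copied from the log, but driving this by hand outside the CRI plugin is illustrative only.]

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Sandbox ID from the log; LoadContainer works only while it exists.
	c, err := client.LoadContainer(ctx,
		"8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5")
	if err != nil {
		log.Fatal(err)
	}

	// NewTask spawns the shim; with the systemd cgroup driver that is
	// the cri-containerd-<id>.scope unit recorded above.
	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}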
Jul 12 00:08:13.825864 systemd-networkd[1379]: cali1aaf8d26815: Link UP Jul 12 00:08:13.828013 systemd-networkd[1379]: cali1aaf8d26815: Gained carrier Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.609 [INFO][4202] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0 calico-apiserver-694bf789d4- calico-apiserver e5b5b2b4-7d84-4c99-896a-91d48632272f 942 0 2025-07-12 00:07:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:694bf789d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-n-f6981960e0 calico-apiserver-694bf789d4-flhzn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1aaf8d26815 [] [] }} ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-flhzn" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.609 [INFO][4202] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-flhzn" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.652 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" HandleID="k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.652 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" HandleID="k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-n-f6981960e0", "pod":"calico-apiserver-694bf789d4-flhzn", "timestamp":"2025-07-12 00:08:13.652164681 +0000 UTC"}, Hostname:"ci-4081-3-4-n-f6981960e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.652 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.689 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.689 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-f6981960e0' Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.751 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.760 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.770 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.773 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.781 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.781 [INFO][4221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.790 [INFO][4221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2 Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.801 [INFO][4221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.815 [INFO][4221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.67/26] block=192.168.59.64/26 handle="k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.815 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.67/26] handle="k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.815 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
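[Note the interleaving in these records: plugin [4221] logged "About to acquire host-wide IPAM lock" at 13.652 but "Acquired" only at 13.689, the same instant [4216] released it, so concurrent CNI ADDs on one host are strictly serialized. Modeled as a mutex below; the ordinal bookkeeping is invented for illustration, and which pod gets which address depends on lock acquisition order.]

package main

import (
	"fmt"
	"sync"
)

var (
	hostLock sync.Mutex // stand-in for Calico's host-wide IPAM lock
	nextOrd  = 2        // .66 was just claimed; next free ordinal gives .67
)

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	hostLock.Lock() // blocks until the previous CNI invocation releases
	defer hostLock.Unlock()
	// Whichever invocation wins the lock gets the lower ordinal.
	fmt.Printf("%s -> 192.168.59.%d/26\n", pod, 64+nextOrd)
	nextOrd++
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go assign("calico-apiserver-694bf789d4-flhzn", &wg)
	go assign("goldmane-768f4c5c69-pqdm4", &wg)
	wg.Wait()
}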
Jul 12 00:08:13.849007 containerd[1476]: 2025-07-12 00:08:13.815 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.67/26] IPv6=[] ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" HandleID="k8s-pod-network.0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.849962 containerd[1476]: 2025-07-12 00:08:13.818 [INFO][4202] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-flhzn" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0", GenerateName:"calico-apiserver-694bf789d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5b5b2b4-7d84-4c99-896a-91d48632272f", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694bf789d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"", Pod:"calico-apiserver-694bf789d4-flhzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aaf8d26815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:13.849962 containerd[1476]: 2025-07-12 00:08:13.818 [INFO][4202] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.67/32] ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-flhzn" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.849962 containerd[1476]: 2025-07-12 00:08:13.818 [INFO][4202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1aaf8d26815 ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-flhzn" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.849962 containerd[1476]: 2025-07-12 00:08:13.829 [INFO][4202] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-flhzn" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.849962 containerd[1476]: 2025-07-12 00:08:13.829 
[INFO][4202] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-flhzn" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0", GenerateName:"calico-apiserver-694bf789d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5b5b2b4-7d84-4c99-896a-91d48632272f", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694bf789d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2", Pod:"calico-apiserver-694bf789d4-flhzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aaf8d26815", MAC:"06:56:e5:1c:11:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:13.849962 containerd[1476]: 2025-07-12 00:08:13.846 [INFO][4202] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2" Namespace="calico-apiserver" Pod="calico-apiserver-694bf789d4-flhzn" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:13.871883 systemd[1]: run-netns-cni\x2d1c5c6669\x2db354\x2d7c10\x2d9685\x2de7dbcba8c9bc.mount: Deactivated successfully. Jul 12 00:08:13.891787 containerd[1476]: time="2025-07-12T00:08:13.891669323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694bf789d4-jsvl4,Uid:2a54327b-26b7-4764-93f1-2d3f8be7ff94,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5\"" Jul 12 00:08:13.905466 containerd[1476]: time="2025-07-12T00:08:13.905352045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:13.905679 containerd[1476]: time="2025-07-12T00:08:13.905503685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:13.905679 containerd[1476]: time="2025-07-12T00:08:13.905547445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:13.905792 containerd[1476]: time="2025-07-12T00:08:13.905717405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:13.939427 systemd[1]: Started cri-containerd-0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2.scope - libcontainer container 0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2. Jul 12 00:08:13.972016 containerd[1476]: time="2025-07-12T00:08:13.971859617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694bf789d4-flhzn,Uid:e5b5b2b4-7d84-4c99-896a-91d48632272f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2\"" Jul 12 00:08:14.374670 containerd[1476]: time="2025-07-12T00:08:14.374587206Z" level=info msg="StopPodSandbox for \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\"" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.432 [INFO][4339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.432 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" iface="eth0" netns="/var/run/netns/cni-dc8cb3f5-4400-6e9b-8cfe-170e825d3670" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.434 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" iface="eth0" netns="/var/run/netns/cni-dc8cb3f5-4400-6e9b-8cfe-170e825d3670" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.438 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" iface="eth0" netns="/var/run/netns/cni-dc8cb3f5-4400-6e9b-8cfe-170e825d3670" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.438 [INFO][4339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.438 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.463 [INFO][4346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.463 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.464 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.474 [WARNING][4346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.474 [INFO][4346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.477 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:14.481673 containerd[1476]: 2025-07-12 00:08:14.478 [INFO][4339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:14.486182 containerd[1476]: time="2025-07-12T00:08:14.484326225Z" level=info msg="TearDown network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\" successfully" Jul 12 00:08:14.486182 containerd[1476]: time="2025-07-12T00:08:14.484385505Z" level=info msg="StopPodSandbox for \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\" returns successfully" Jul 12 00:08:14.486650 systemd[1]: run-netns-cni\x2ddc8cb3f5\x2d4400\x2d6e9b\x2d8cfe\x2d170e825d3670.mount: Deactivated successfully. Jul 12 00:08:14.488067 containerd[1476]: time="2025-07-12T00:08:14.487066785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-pqdm4,Uid:e0f120ef-124f-4f0f-8ada-007ac6b4610e,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:14.646820 systemd-networkd[1379]: cali123987b7434: Link UP Jul 12 00:08:14.647897 systemd-networkd[1379]: cali123987b7434: Gained carrier Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.557 [INFO][4352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0 goldmane-768f4c5c69- calico-system e0f120ef-124f-4f0f-8ada-007ac6b4610e 953 0 2025-07-12 00:07:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-n-f6981960e0 goldmane-768f4c5c69-pqdm4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali123987b7434 [] [] }} ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Namespace="calico-system" Pod="goldmane-768f4c5c69-pqdm4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.557 [INFO][4352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Namespace="calico-system" Pod="goldmane-768f4c5c69-pqdm4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.585 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" HandleID="k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" 
Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.585 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" HandleID="k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-f6981960e0", "pod":"goldmane-768f4c5c69-pqdm4", "timestamp":"2025-07-12 00:08:14.585523002 +0000 UTC"}, Hostname:"ci-4081-3-4-n-f6981960e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.585 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.585 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.585 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-f6981960e0' Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.596 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.604 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.612 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.616 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.622 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.622 [INFO][4365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.624 [INFO][4365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64 Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.629 [INFO][4365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.638 [INFO][4365] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.68/26] block=192.168.59.64/26 handle="k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.639 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.68/26] 
handle="k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.639 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:14.666359 containerd[1476]: 2025-07-12 00:08:14.639 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.68/26] IPv6=[] ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" HandleID="k8s-pod-network.dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.667453 containerd[1476]: 2025-07-12 00:08:14.642 [INFO][4352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Namespace="calico-system" Pod="goldmane-768f4c5c69-pqdm4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e0f120ef-124f-4f0f-8ada-007ac6b4610e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"", Pod:"goldmane-768f4c5c69-pqdm4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali123987b7434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:14.667453 containerd[1476]: 2025-07-12 00:08:14.642 [INFO][4352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.68/32] ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Namespace="calico-system" Pod="goldmane-768f4c5c69-pqdm4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.667453 containerd[1476]: 2025-07-12 00:08:14.642 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali123987b7434 ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Namespace="calico-system" Pod="goldmane-768f4c5c69-pqdm4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.667453 containerd[1476]: 2025-07-12 00:08:14.649 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Namespace="calico-system" Pod="goldmane-768f4c5c69-pqdm4" 
WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.667453 containerd[1476]: 2025-07-12 00:08:14.649 [INFO][4352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Namespace="calico-system" Pod="goldmane-768f4c5c69-pqdm4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e0f120ef-124f-4f0f-8ada-007ac6b4610e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64", Pod:"goldmane-768f4c5c69-pqdm4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali123987b7434", MAC:"aa:71:a4:2b:e6:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:14.667453 containerd[1476]: 2025-07-12 00:08:14.661 [INFO][4352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64" Namespace="calico-system" Pod="goldmane-768f4c5c69-pqdm4" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:14.695062 containerd[1476]: time="2025-07-12T00:08:14.694482101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:14.695062 containerd[1476]: time="2025-07-12T00:08:14.694550781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:14.695062 containerd[1476]: time="2025-07-12T00:08:14.694562781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:14.695062 containerd[1476]: time="2025-07-12T00:08:14.694954901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:14.718568 systemd[1]: Started cri-containerd-dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64.scope - libcontainer container dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64. 
Jul 12 00:08:14.765291 containerd[1476]: time="2025-07-12T00:08:14.765249153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-pqdm4,Uid:e0f120ef-124f-4f0f-8ada-007ac6b4610e,Namespace:calico-system,Attempt:1,} returns sandbox id \"dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64\"" Jul 12 00:08:15.016584 systemd-networkd[1379]: cali39a6f1d11b5: Gained IPv6LL Jul 12 00:08:15.374920 containerd[1476]: time="2025-07-12T00:08:15.374558697Z" level=info msg="StopPodSandbox for \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\"" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.465 [INFO][4440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.466 [INFO][4440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" iface="eth0" netns="/var/run/netns/cni-99cf3ab7-cc51-4888-179b-dc69b33afdb4" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.466 [INFO][4440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" iface="eth0" netns="/var/run/netns/cni-99cf3ab7-cc51-4888-179b-dc69b33afdb4" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.466 [INFO][4440] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" iface="eth0" netns="/var/run/netns/cni-99cf3ab7-cc51-4888-179b-dc69b33afdb4" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.466 [INFO][4440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.466 [INFO][4440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.517 [INFO][4448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.517 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.517 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.538 [WARNING][4448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.538 [INFO][4448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.541 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:15.549523 containerd[1476]: 2025-07-12 00:08:15.544 [INFO][4440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:15.549874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421575863.mount: Deactivated successfully. Jul 12 00:08:15.551681 containerd[1476]: time="2025-07-12T00:08:15.551424487Z" level=info msg="TearDown network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\" successfully" Jul 12 00:08:15.551681 containerd[1476]: time="2025-07-12T00:08:15.551457687Z" level=info msg="StopPodSandbox for \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\" returns successfully" Jul 12 00:08:15.553878 containerd[1476]: time="2025-07-12T00:08:15.553666327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccd79f89c-9zkg8,Uid:38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:15.555164 systemd[1]: run-netns-cni\x2d99cf3ab7\x2dcc51\x2d4888\x2d179b\x2ddc69b33afdb4.mount: Deactivated successfully. 
Jul 12 00:08:15.586411 containerd[1476]: time="2025-07-12T00:08:15.586358493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.587573 containerd[1476]: time="2025-07-12T00:08:15.587522973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:08:15.588456 containerd[1476]: time="2025-07-12T00:08:15.588403973Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.596032 containerd[1476]: time="2025-07-12T00:08:15.595195054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.596844 containerd[1476]: time="2025-07-12T00:08:15.595986615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.625697972s" Jul 12 00:08:15.596844 containerd[1476]: time="2025-07-12T00:08:15.596422535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:08:15.598172 containerd[1476]: time="2025-07-12T00:08:15.598136775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:08:15.600616 containerd[1476]: time="2025-07-12T00:08:15.600538015Z" level=info msg="CreateContainer within sandbox \"f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:08:15.619783 containerd[1476]: time="2025-07-12T00:08:15.619547899Z" level=info msg="CreateContainer within sandbox \"f9a73424a476b85fe00b4f4ddb525313485620c231d2c8e896fc44ea7bdb5ca0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"27dfc18a46d7a62a3e00ced14c5a3586948aaac32b7d7b46b2b203e8cabb90f4\"" Jul 12 00:08:15.622393 containerd[1476]: time="2025-07-12T00:08:15.621822579Z" level=info msg="StartContainer for \"27dfc18a46d7a62a3e00ced14c5a3586948aaac32b7d7b46b2b203e8cabb90f4\"" Jul 12 00:08:15.682406 systemd[1]: Started cri-containerd-27dfc18a46d7a62a3e00ced14c5a3586948aaac32b7d7b46b2b203e8cabb90f4.scope - libcontainer container 27dfc18a46d7a62a3e00ced14c5a3586948aaac32b7d7b46b2b203e8cabb90f4. 
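[The ImageCreate/PullImage records above, including the repo digest and the 2.625697972s pull duration, correspond to a pull-and-unpack through containerd. Reproduced below with the public Go client as a sketch; in this log the CRI plugin drives the pull internally.]

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack also unpacks layers into the snapshotter, at which
	// point the image name and digest seen in the log are recorded.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker-backend:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest) // tag and repo digest, as logged
}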
Jul 12 00:08:15.733613 systemd-networkd[1379]: calif74458574fa: Link UP Jul 12 00:08:15.738969 systemd-networkd[1379]: calif74458574fa: Gained carrier Jul 12 00:08:15.754103 containerd[1476]: time="2025-07-12T00:08:15.754009881Z" level=info msg="StartContainer for \"27dfc18a46d7a62a3e00ced14c5a3586948aaac32b7d7b46b2b203e8cabb90f4\" returns successfully" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.627 [INFO][4457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0 calico-kube-controllers-6ccd79f89c- calico-system 38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7 962 0 2025-07-12 00:07:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6ccd79f89c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-n-f6981960e0 calico-kube-controllers-6ccd79f89c-9zkg8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif74458574fa [] [] }} ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Namespace="calico-system" Pod="calico-kube-controllers-6ccd79f89c-9zkg8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.628 [INFO][4457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Namespace="calico-system" Pod="calico-kube-controllers-6ccd79f89c-9zkg8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.667 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" HandleID="k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.667 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" HandleID="k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3530), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-f6981960e0", "pod":"calico-kube-controllers-6ccd79f89c-9zkg8", "timestamp":"2025-07-12 00:08:15.667183027 +0000 UTC"}, Hostname:"ci-4081-3-4-n-f6981960e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.667 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.667 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.667 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-f6981960e0' Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.684 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.691 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.698 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.701 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.704 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.704 [INFO][4476] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.707 [INFO][4476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5 Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.713 [INFO][4476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.725 [INFO][4476] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.69/26] block=192.168.59.64/26 handle="k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.726 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.69/26] handle="k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.726 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
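[Every address assigned in this section (.66 through .69) falls inside the single affine block 192.168.59.64/26, which spans 192.168.59.64-127. A quick containment check with the Go standard library:]

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.59.64/26")
	for _, a := range []string{
		"192.168.59.66", "192.168.59.67", "192.168.59.68", "192.168.59.69",
	} {
		fmt.Println(a, block.Contains(netip.MustParseAddr(a))) // all true
	}
}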
Jul 12 00:08:15.762280 containerd[1476]: 2025-07-12 00:08:15.726 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.69/26] IPv6=[] ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" HandleID="k8s-pod-network.fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.764408 containerd[1476]: 2025-07-12 00:08:15.728 [INFO][4457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Namespace="calico-system" Pod="calico-kube-controllers-6ccd79f89c-9zkg8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0", GenerateName:"calico-kube-controllers-6ccd79f89c-", Namespace:"calico-system", SelfLink:"", UID:"38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccd79f89c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"", Pod:"calico-kube-controllers-6ccd79f89c-9zkg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif74458574fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:15.764408 containerd[1476]: 2025-07-12 00:08:15.728 [INFO][4457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.69/32] ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Namespace="calico-system" Pod="calico-kube-controllers-6ccd79f89c-9zkg8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.764408 containerd[1476]: 2025-07-12 00:08:15.728 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif74458574fa ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Namespace="calico-system" Pod="calico-kube-controllers-6ccd79f89c-9zkg8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.764408 containerd[1476]: 2025-07-12 00:08:15.738 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Namespace="calico-system" Pod="calico-kube-controllers-6ccd79f89c-9zkg8" 
WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.764408 containerd[1476]: 2025-07-12 00:08:15.739 [INFO][4457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Namespace="calico-system" Pod="calico-kube-controllers-6ccd79f89c-9zkg8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0", GenerateName:"calico-kube-controllers-6ccd79f89c-", Namespace:"calico-system", SelfLink:"", UID:"38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccd79f89c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5", Pod:"calico-kube-controllers-6ccd79f89c-9zkg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif74458574fa", MAC:"46:61:a7:41:66:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:15.764408 containerd[1476]: 2025-07-12 00:08:15.753 [INFO][4457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5" Namespace="calico-system" Pod="calico-kube-controllers-6ccd79f89c-9zkg8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:15.784463 systemd-networkd[1379]: cali1aaf8d26815: Gained IPv6LL Jul 12 00:08:15.791843 containerd[1476]: time="2025-07-12T00:08:15.791749888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:15.792035 containerd[1476]: time="2025-07-12T00:08:15.791858008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:15.792035 containerd[1476]: time="2025-07-12T00:08:15.791884808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:15.793127 containerd[1476]: time="2025-07-12T00:08:15.792060368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:15.822418 systemd[1]: Started cri-containerd-fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5.scope - libcontainer container fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5. Jul 12 00:08:15.869835 containerd[1476]: time="2025-07-12T00:08:15.869795981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccd79f89c-9zkg8,Uid:38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5\"" Jul 12 00:08:16.105113 systemd-networkd[1379]: cali123987b7434: Gained IPv6LL Jul 12 00:08:16.377412 containerd[1476]: time="2025-07-12T00:08:16.376144467Z" level=info msg="StopPodSandbox for \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\"" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.433 [INFO][4574] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.433 [INFO][4574] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" iface="eth0" netns="/var/run/netns/cni-7e5e691d-57f1-9fb1-d7bf-a66905b90031" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.435 [INFO][4574] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" iface="eth0" netns="/var/run/netns/cni-7e5e691d-57f1-9fb1-d7bf-a66905b90031" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.435 [INFO][4574] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" iface="eth0" netns="/var/run/netns/cni-7e5e691d-57f1-9fb1-d7bf-a66905b90031" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.435 [INFO][4574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.435 [INFO][4574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.460 [INFO][4581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.460 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.461 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.472 [WARNING][4581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.472 [INFO][4581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.475 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:16.479987 containerd[1476]: 2025-07-12 00:08:16.477 [INFO][4574] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:16.482042 containerd[1476]: time="2025-07-12T00:08:16.481800085Z" level=info msg="TearDown network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\" successfully" Jul 12 00:08:16.482042 containerd[1476]: time="2025-07-12T00:08:16.481846685Z" level=info msg="StopPodSandbox for \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\" returns successfully" Jul 12 00:08:16.482976 containerd[1476]: time="2025-07-12T00:08:16.482684765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mp89,Uid:b6e2f19f-7174-405d-8aeb-93e33315aa19,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:16.486046 systemd[1]: run-netns-cni\x2d7e5e691d\x2d57f1\x2d9fb1\x2dd7bf\x2da66905b90031.mount: Deactivated successfully. Jul 12 00:08:16.649437 systemd-networkd[1379]: calie0537441883: Link UP Jul 12 00:08:16.649669 systemd-networkd[1379]: calie0537441883: Gained carrier Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.555 [INFO][4587] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0 csi-node-driver- calico-system b6e2f19f-7174-405d-8aeb-93e33315aa19 973 0 2025-07-12 00:07:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-n-f6981960e0 csi-node-driver-4mp89 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie0537441883 [] [] }} ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Namespace="calico-system" Pod="csi-node-driver-4mp89" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.556 [INFO][4587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Namespace="calico-system" Pod="csi-node-driver-4mp89" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.591 [INFO][4600] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" 
HandleID="k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.591 [INFO][4600] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" HandleID="k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-f6981960e0", "pod":"csi-node-driver-4mp89", "timestamp":"2025-07-12 00:08:16.591338343 +0000 UTC"}, Hostname:"ci-4081-3-4-n-f6981960e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.591 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.591 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.591 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-f6981960e0' Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.603 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.610 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.615 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.619 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.622 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.622 [INFO][4600] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.626 [INFO][4600] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9 Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.633 [INFO][4600] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.643 [INFO][4600] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.70/26] block=192.168.59.64/26 handle="k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.643 
[INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.70/26] handle="k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.643 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:16.670069 containerd[1476]: 2025-07-12 00:08:16.643 [INFO][4600] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.70/26] IPv6=[] ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" HandleID="k8s-pod-network.7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.670812 containerd[1476]: 2025-07-12 00:08:16.646 [INFO][4587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Namespace="calico-system" Pod="csi-node-driver-4mp89" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6e2f19f-7174-405d-8aeb-93e33315aa19", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"", Pod:"csi-node-driver-4mp89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0537441883", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:16.670812 containerd[1476]: 2025-07-12 00:08:16.646 [INFO][4587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.70/32] ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Namespace="calico-system" Pod="csi-node-driver-4mp89" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.670812 containerd[1476]: 2025-07-12 00:08:16.646 [INFO][4587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0537441883 ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Namespace="calico-system" Pod="csi-node-driver-4mp89" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.670812 containerd[1476]: 2025-07-12 00:08:16.649 [INFO][4587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Namespace="calico-system" Pod="csi-node-driver-4mp89" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.670812 containerd[1476]: 2025-07-12 00:08:16.651 [INFO][4587] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Namespace="calico-system" Pod="csi-node-driver-4mp89" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6e2f19f-7174-405d-8aeb-93e33315aa19", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9", Pod:"csi-node-driver-4mp89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0537441883", MAC:"3e:71:37:d8:0f:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:16.670812 containerd[1476]: 2025-07-12 00:08:16.667 [INFO][4587] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9" Namespace="calico-system" Pod="csi-node-driver-4mp89" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:16.694922 containerd[1476]: time="2025-07-12T00:08:16.694627360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:16.694922 containerd[1476]: time="2025-07-12T00:08:16.694679680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:16.696840 containerd[1476]: time="2025-07-12T00:08:16.696542401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:16.697549 containerd[1476]: time="2025-07-12T00:08:16.697457161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:16.738072 systemd[1]: Started cri-containerd-7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9.scope - libcontainer container 7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9. Jul 12 00:08:16.778653 containerd[1476]: time="2025-07-12T00:08:16.778598335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mp89,Uid:b6e2f19f-7174-405d-8aeb-93e33315aa19,Namespace:calico-system,Attempt:1,} returns sandbox id \"7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9\"" Jul 12 00:08:17.768485 systemd-networkd[1379]: calif74458574fa: Gained IPv6LL Jul 12 00:08:17.961070 systemd-networkd[1379]: calie0537441883: Gained IPv6LL Jul 12 00:08:18.376052 containerd[1476]: time="2025-07-12T00:08:18.375175357Z" level=info msg="StopPodSandbox for \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\"" Jul 12 00:08:18.454422 kubelet[2586]: I0712 00:08:18.453470 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bf64cfb9c-pbtft" podStartSLOduration=4.142350553 podStartE2EDuration="8.453452261s" podCreationTimestamp="2025-07-12 00:08:10 +0000 UTC" firstStartedPulling="2025-07-12 00:08:11.286843067 +0000 UTC m=+45.056695441" lastFinishedPulling="2025-07-12 00:08:15.597944775 +0000 UTC m=+49.367797149" observedRunningTime="2025-07-12 00:08:16.726053086 +0000 UTC m=+50.495905460" watchObservedRunningTime="2025-07-12 00:08:18.453452261 +0000 UTC m=+52.223304635" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.456 [INFO][4675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.456 [INFO][4675] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" iface="eth0" netns="/var/run/netns/cni-8fa48a03-7060-e4d5-e0ec-eb73c4e9bef9" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.456 [INFO][4675] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" iface="eth0" netns="/var/run/netns/cni-8fa48a03-7060-e4d5-e0ec-eb73c4e9bef9" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.456 [INFO][4675] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" iface="eth0" netns="/var/run/netns/cni-8fa48a03-7060-e4d5-e0ec-eb73c4e9bef9" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.456 [INFO][4675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.456 [INFO][4675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.483 [INFO][4683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.483 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.483 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.493 [WARNING][4683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.493 [INFO][4683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.496 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:18.503200 containerd[1476]: 2025-07-12 00:08:18.499 [INFO][4675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:18.503200 containerd[1476]: time="2025-07-12T00:08:18.502469562Z" level=info msg="TearDown network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\" successfully" Jul 12 00:08:18.503200 containerd[1476]: time="2025-07-12T00:08:18.503119327Z" level=info msg="StopPodSandbox for \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\" returns successfully" Jul 12 00:08:18.506769 containerd[1476]: time="2025-07-12T00:08:18.506623831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tkc8,Uid:061bde8d-e1c8-4411-baca-b18c385b32b1,Namespace:kube-system,Attempt:1,}" Jul 12 00:08:18.507444 systemd[1]: run-netns-cni\x2d8fa48a03\x2d7060\x2de4d5\x2de0ec\x2deb73c4e9bef9.mount: Deactivated successfully. 
Jul 12 00:08:18.660396 containerd[1476]: time="2025-07-12T00:08:18.659553174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:18.662547 containerd[1476]: time="2025-07-12T00:08:18.662502435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:08:18.665067 containerd[1476]: time="2025-07-12T00:08:18.664945332Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:18.671978 containerd[1476]: time="2025-07-12T00:08:18.671903020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:18.673004 containerd[1476]: time="2025-07-12T00:08:18.672783306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.074607411s" Jul 12 00:08:18.673004 containerd[1476]: time="2025-07-12T00:08:18.672817667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:08:18.677470 containerd[1476]: time="2025-07-12T00:08:18.677414339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:08:18.679701 containerd[1476]: time="2025-07-12T00:08:18.679671354Z" level=info msg="CreateContainer within sandbox \"8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:08:18.700969 containerd[1476]: time="2025-07-12T00:08:18.700352658Z" level=info msg="CreateContainer within sandbox \"8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"646414db7b486549a059e99cd57303b8692167d9775b69daa1d89a4e93f59308\"" Jul 12 00:08:18.700586 systemd-networkd[1379]: calicc9f8c933c6: Link UP Jul 12 00:08:18.701761 systemd-networkd[1379]: calicc9f8c933c6: Gained carrier Jul 12 00:08:18.705671 containerd[1476]: time="2025-07-12T00:08:18.702454273Z" level=info msg="StartContainer for \"646414db7b486549a059e99cd57303b8692167d9775b69daa1d89a4e93f59308\"" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.582 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0 coredns-668d6bf9bc- kube-system 061bde8d-e1c8-4411-baca-b18c385b32b1 988 0 2025-07-12 00:07:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-n-f6981960e0 coredns-668d6bf9bc-6tkc8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc9f8c933c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkc8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.583 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkc8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.628 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" HandleID="k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.629 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" HandleID="k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-n-f6981960e0", "pod":"coredns-668d6bf9bc-6tkc8", "timestamp":"2025-07-12 00:08:18.628807481 +0000 UTC"}, Hostname:"ci-4081-3-4-n-f6981960e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.630 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.630 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.630 [INFO][4703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-f6981960e0' Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.643 [INFO][4703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.650 [INFO][4703] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.657 [INFO][4703] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.661 [INFO][4703] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.664 [INFO][4703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.664 [INFO][4703] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.668 [INFO][4703] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770 Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.674 [INFO][4703] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.687 [INFO][4703] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.71/26] block=192.168.59.64/26 handle="k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.687 [INFO][4703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.71/26] handle="k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.687 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:18.740556 containerd[1476]: 2025-07-12 00:08:18.687 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.71/26] IPv6=[] ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" HandleID="k8s-pod-network.c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.743518 containerd[1476]: 2025-07-12 00:08:18.696 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkc8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"061bde8d-e1c8-4411-baca-b18c385b32b1", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"", Pod:"coredns-668d6bf9bc-6tkc8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc9f8c933c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.743518 containerd[1476]: 2025-07-12 00:08:18.696 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.71/32] ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkc8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.743518 containerd[1476]: 2025-07-12 00:08:18.696 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc9f8c933c6 ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkc8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.743518 containerd[1476]: 2025-07-12 00:08:18.699 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-6tkc8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.743518 containerd[1476]: 2025-07-12 00:08:18.710 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkc8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"061bde8d-e1c8-4411-baca-b18c385b32b1", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770", Pod:"coredns-668d6bf9bc-6tkc8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc9f8c933c6", MAC:"7e:d8:51:65:9f:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.743518 containerd[1476]: 2025-07-12 00:08:18.729 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkc8" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:18.774960 systemd[1]: Started cri-containerd-646414db7b486549a059e99cd57303b8692167d9775b69daa1d89a4e93f59308.scope - libcontainer container 646414db7b486549a059e99cd57303b8692167d9775b69daa1d89a4e93f59308. Jul 12 00:08:18.804261 containerd[1476]: time="2025-07-12T00:08:18.803353414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:18.804478 containerd[1476]: time="2025-07-12T00:08:18.804253820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:18.804478 containerd[1476]: time="2025-07-12T00:08:18.804297381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:18.804780 containerd[1476]: time="2025-07-12T00:08:18.804447382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:18.829385 systemd[1]: Started cri-containerd-c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770.scope - libcontainer container c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770. Jul 12 00:08:18.842553 containerd[1476]: time="2025-07-12T00:08:18.842505126Z" level=info msg="StartContainer for \"646414db7b486549a059e99cd57303b8692167d9775b69daa1d89a4e93f59308\" returns successfully" Jul 12 00:08:18.876281 containerd[1476]: time="2025-07-12T00:08:18.875267074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tkc8,Uid:061bde8d-e1c8-4411-baca-b18c385b32b1,Namespace:kube-system,Attempt:1,} returns sandbox id \"c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770\"" Jul 12 00:08:18.881362 containerd[1476]: time="2025-07-12T00:08:18.881320636Z" level=info msg="CreateContainer within sandbox \"c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:08:18.911595 containerd[1476]: time="2025-07-12T00:08:18.910527799Z" level=info msg="CreateContainer within sandbox \"c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e2c12bdb3015dc7988a3b8a69400335b7a2ff9b206a4472fdabed1175235127\"" Jul 12 00:08:18.912551 containerd[1476]: time="2025-07-12T00:08:18.912507333Z" level=info msg="StartContainer for \"8e2c12bdb3015dc7988a3b8a69400335b7a2ff9b206a4472fdabed1175235127\"" Jul 12 00:08:18.952286 systemd[1]: Started cri-containerd-8e2c12bdb3015dc7988a3b8a69400335b7a2ff9b206a4472fdabed1175235127.scope - libcontainer container 8e2c12bdb3015dc7988a3b8a69400335b7a2ff9b206a4472fdabed1175235127. 
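Each burst of "loading plugin io.containerd.ttrpc.v1.*" lines is a fresh runc v2 shim coming up for one container, and the transient unit systemd then reports started is named by wrapping the container ID, as the "Started cri-containerd-<id>.scope" entries show. A one-line helper stating that observed mapping (an observation from this log, not a published containerd API):

    // scopeUnit gives the transient systemd unit that containerd's
    // systemd cgroup driver creates for a container, as observed in
    // the "Started cri-containerd-<id>.scope" entries above.
    func scopeUnit(containerID string) string {
    	return "cri-containerd-" + containerID + ".scope"
    }

The same 64-hex ID then reappears as the sandbox ID in the "RunPodSandbox ... returns sandbox id" entry, which is how a scope here can be matched to its pod.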
Jul 12 00:08:18.995599 containerd[1476]: time="2025-07-12T00:08:18.995545551Z" level=info msg="StartContainer for \"8e2c12bdb3015dc7988a3b8a69400335b7a2ff9b206a4472fdabed1175235127\" returns successfully" Jul 12 00:08:19.056569 containerd[1476]: time="2025-07-12T00:08:19.056496084Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:19.057699 containerd[1476]: time="2025-07-12T00:08:19.057647052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:08:19.061653 containerd[1476]: time="2025-07-12T00:08:19.061500198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 384.032419ms" Jul 12 00:08:19.061653 containerd[1476]: time="2025-07-12T00:08:19.061550238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:08:19.064600 containerd[1476]: time="2025-07-12T00:08:19.064523698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:08:19.067541 containerd[1476]: time="2025-07-12T00:08:19.067489758Z" level=info msg="CreateContainer within sandbox \"0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:08:19.098424 containerd[1476]: time="2025-07-12T00:08:19.098298007Z" level=info msg="CreateContainer within sandbox \"0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ece1542a1b66f4574cee93e58c4c3113f9a2adc8d9f91f07c8d56e9d7ad1c073\"" Jul 12 00:08:19.099231 containerd[1476]: time="2025-07-12T00:08:19.098993491Z" level=info msg="StartContainer for \"ece1542a1b66f4574cee93e58c4c3113f9a2adc8d9f91f07c8d56e9d7ad1c073\"" Jul 12 00:08:19.134409 systemd[1]: Started cri-containerd-ece1542a1b66f4574cee93e58c4c3113f9a2adc8d9f91f07c8d56e9d7ad1c073.scope - libcontainer container ece1542a1b66f4574cee93e58c4c3113f9a2adc8d9f91f07c8d56e9d7ad1c073. Jul 12 00:08:19.192411 containerd[1476]: time="2025-07-12T00:08:19.192198442Z" level=info msg="StartContainer for \"ece1542a1b66f4574cee93e58c4c3113f9a2adc8d9f91f07c8d56e9d7ad1c073\" returns successfully" Jul 12 00:08:19.373799 containerd[1476]: time="2025-07-12T00:08:19.373744670Z" level=info msg="StopPodSandbox for \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\"" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.440 [INFO][4891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.440 [INFO][4891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" iface="eth0" netns="/var/run/netns/cni-662a7407-bac8-ef9a-2fec-20a2c69065c4" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.440 [INFO][4891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" iface="eth0" netns="/var/run/netns/cni-662a7407-bac8-ef9a-2fec-20a2c69065c4" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.441 [INFO][4891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" iface="eth0" netns="/var/run/netns/cni-662a7407-bac8-ef9a-2fec-20a2c69065c4" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.441 [INFO][4891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.441 [INFO][4891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.474 [INFO][4898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.475 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.475 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.492 [WARNING][4898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.492 [INFO][4898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.494 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:19.499537 containerd[1476]: 2025-07-12 00:08:19.496 [INFO][4891] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:19.502325 containerd[1476]: time="2025-07-12T00:08:19.501611015Z" level=info msg="TearDown network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\" successfully" Jul 12 00:08:19.502325 containerd[1476]: time="2025-07-12T00:08:19.501643415Z" level=info msg="StopPodSandbox for \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\" returns successfully" Jul 12 00:08:19.503332 containerd[1476]: time="2025-07-12T00:08:19.502469701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmlsv,Uid:0b42f234-5a87-408f-b8c4-2ac2eed39fd7,Namespace:kube-system,Attempt:1,}" Jul 12 00:08:19.512298 systemd[1]: run-netns-cni\x2d662a7407\x2dbac8\x2def9a\x2d2fec\x2d20a2c69065c4.mount: Deactivated successfully. 
Jul 12 00:08:19.707558 systemd-networkd[1379]: calicb4d9d88fbc: Link UP Jul 12 00:08:19.708478 systemd-networkd[1379]: calicb4d9d88fbc: Gained carrier Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.583 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0 coredns-668d6bf9bc- kube-system 0b42f234-5a87-408f-b8c4-2ac2eed39fd7 1004 0 2025-07-12 00:07:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-n-f6981960e0 coredns-668d6bf9bc-fmlsv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb4d9d88fbc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmlsv" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.583 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmlsv" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.654 [INFO][4917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" HandleID="k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.654 [INFO][4917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" HandleID="k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-n-f6981960e0", "pod":"coredns-668d6bf9bc-fmlsv", "timestamp":"2025-07-12 00:08:19.654562009 +0000 UTC"}, Hostname:"ci-4081-3-4-n-f6981960e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.654 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.654 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.654 [INFO][4917] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-f6981960e0' Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.664 [INFO][4917] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.670 [INFO][4917] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.676 [INFO][4917] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.678 [INFO][4917] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.681 [INFO][4917] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.681 [INFO][4917] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.683 [INFO][4917] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3 Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.688 [INFO][4917] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.697 [INFO][4917] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.72/26] block=192.168.59.64/26 handle="k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.697 [INFO][4917] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.72/26] handle="k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" host="ci-4081-3-4-n-f6981960e0" Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.697 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
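That is the fourth allocation from the same block, and the arithmetic checks out: 192.168.59.64/26 holds 2^(32-26) = 64 addresses, .64 through .127, and the pods in this log received .69, .70, .71 and .72 in strict sequence, so every workload on this node is drawing from one affine block. A quick stdlib verification of those bounds:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.59.64/26")

    	// Walk from the block's first address to its last: 2^(32-26)-1 steps.
    	first := block.Addr()
    	last := first
    	for i := 0; i < (1<<(32-block.Bits()))-1; i++ {
    		last = last.Next()
    	}
    	fmt.Println(first, last) // 192.168.59.64 192.168.59.127

    	// The addresses handed out in this log all fall inside the block.
    	for _, s := range []string{"192.168.59.69", "192.168.59.70", "192.168.59.71", "192.168.59.72"} {
    		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
    	}
    }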
Jul 12 00:08:19.726979 containerd[1476]: 2025-07-12 00:08:19.697 [INFO][4917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.72/26] IPv6=[] ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" HandleID="k8s-pod-network.9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.728406 containerd[1476]: 2025-07-12 00:08:19.702 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmlsv" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b42f234-5a87-408f-b8c4-2ac2eed39fd7", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"", Pod:"coredns-668d6bf9bc-fmlsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb4d9d88fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:19.728406 containerd[1476]: 2025-07-12 00:08:19.703 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.72/32] ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmlsv" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.728406 containerd[1476]: 2025-07-12 00:08:19.703 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb4d9d88fbc ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmlsv" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.728406 containerd[1476]: 2025-07-12 00:08:19.710 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-fmlsv" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.728406 containerd[1476]: 2025-07-12 00:08:19.710 [INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmlsv" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b42f234-5a87-408f-b8c4-2ac2eed39fd7", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3", Pod:"coredns-668d6bf9bc-fmlsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb4d9d88fbc", MAC:"32:8b:42:af:66:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:19.728406 containerd[1476]: 2025-07-12 00:08:19.722 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmlsv" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:19.767961 containerd[1476]: time="2025-07-12T00:08:19.767409853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:19.767961 containerd[1476]: time="2025-07-12T00:08:19.767474853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:19.767961 containerd[1476]: time="2025-07-12T00:08:19.767501773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:19.767961 containerd[1476]: time="2025-07-12T00:08:19.767597854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:19.798495 kubelet[2586]: I0712 00:08:19.797440 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6tkc8" podStartSLOduration=47.797420856 podStartE2EDuration="47.797420856s" podCreationTimestamp="2025-07-12 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:19.761858495 +0000 UTC m=+53.531710869" watchObservedRunningTime="2025-07-12 00:08:19.797420856 +0000 UTC m=+53.567273190" Jul 12 00:08:19.798495 kubelet[2586]: I0712 00:08:19.797537 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-694bf789d4-flhzn" podStartSLOduration=28.709776027 podStartE2EDuration="33.797533056s" podCreationTimestamp="2025-07-12 00:07:46 +0000 UTC" firstStartedPulling="2025-07-12 00:08:13.974897617 +0000 UTC m=+47.744749991" lastFinishedPulling="2025-07-12 00:08:19.062654646 +0000 UTC m=+52.832507020" observedRunningTime="2025-07-12 00:08:19.794810158 +0000 UTC m=+53.564662532" watchObservedRunningTime="2025-07-12 00:08:19.797533056 +0000 UTC m=+53.567385470" Jul 12 00:08:19.825648 systemd[1]: Started cri-containerd-9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3.scope - libcontainer container 9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3. Jul 12 00:08:19.899351 containerd[1476]: time="2025-07-12T00:08:19.898129417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmlsv,Uid:0b42f234-5a87-408f-b8c4-2ac2eed39fd7,Namespace:kube-system,Attempt:1,} returns sandbox id \"9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3\"" Jul 12 00:08:19.904774 containerd[1476]: time="2025-07-12T00:08:19.904629541Z" level=info msg="CreateContainer within sandbox \"9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:08:19.928356 containerd[1476]: time="2025-07-12T00:08:19.928181940Z" level=info msg="CreateContainer within sandbox \"9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13542ffbc426aa0d8891f921e942f24a98c6f7b88c7191554fcdd016c4074b0c\"" Jul 12 00:08:19.930131 containerd[1476]: time="2025-07-12T00:08:19.929039826Z" level=info msg="StartContainer for \"13542ffbc426aa0d8891f921e942f24a98c6f7b88c7191554fcdd016c4074b0c\"" Jul 12 00:08:19.944357 systemd-networkd[1379]: calicc9f8c933c6: Gained IPv6LL Jul 12 00:08:19.976417 systemd[1]: Started cri-containerd-13542ffbc426aa0d8891f921e942f24a98c6f7b88c7191554fcdd016c4074b0c.scope - libcontainer container 13542ffbc426aa0d8891f921e942f24a98c6f7b88c7191554fcdd016c4074b0c. 
Jul 12 00:08:20.011674 containerd[1476]: time="2025-07-12T00:08:20.010815817Z" level=info msg="StartContainer for \"13542ffbc426aa0d8891f921e942f24a98c6f7b88c7191554fcdd016c4074b0c\" returns successfully" Jul 12 00:08:20.762965 kubelet[2586]: I0712 00:08:20.760226 2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:20.762965 kubelet[2586]: I0712 00:08:20.761027 2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:20.784146 kubelet[2586]: I0712 00:08:20.784085 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-694bf789d4-jsvl4" podStartSLOduration=30.003183383 podStartE2EDuration="34.784069066s" podCreationTimestamp="2025-07-12 00:07:46 +0000 UTC" firstStartedPulling="2025-07-12 00:08:13.894727443 +0000 UTC m=+47.664579817" lastFinishedPulling="2025-07-12 00:08:18.675613126 +0000 UTC m=+52.445465500" observedRunningTime="2025-07-12 00:08:19.857998305 +0000 UTC m=+53.627850679" watchObservedRunningTime="2025-07-12 00:08:20.784069066 +0000 UTC m=+54.553921400" Jul 12 00:08:20.798106 kubelet[2586]: I0712 00:08:20.798049 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fmlsv" podStartSLOduration=48.798032998 podStartE2EDuration="48.798032998s" podCreationTimestamp="2025-07-12 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:20.78460115 +0000 UTC m=+54.554453524" watchObservedRunningTime="2025-07-12 00:08:20.798032998 +0000 UTC m=+54.567885372" Jul 12 00:08:21.096407 systemd-networkd[1379]: calicb4d9d88fbc: Gained IPv6LL Jul 12 00:08:21.866088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62485449.mount: Deactivated successfully. 
Jul 12 00:08:22.583872 containerd[1476]: time="2025-07-12T00:08:22.582973924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:22.586353 containerd[1476]: time="2025-07-12T00:08:22.586314545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:08:22.587402 containerd[1476]: time="2025-07-12T00:08:22.587354391Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:22.592347 containerd[1476]: time="2025-07-12T00:08:22.591192335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:22.593590 containerd[1476]: time="2025-07-12T00:08:22.592146461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.527552922s" Jul 12 00:08:22.594232 containerd[1476]: time="2025-07-12T00:08:22.593724391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:08:22.597759 containerd[1476]: time="2025-07-12T00:08:22.597725736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:08:22.599270 containerd[1476]: time="2025-07-12T00:08:22.599238145Z" level=info msg="CreateContainer within sandbox \"dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:08:22.623619 containerd[1476]: time="2025-07-12T00:08:22.623575337Z" level=info msg="CreateContainer within sandbox \"dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7e7e426f1728df69d8d0ed8e116250c7986224f1cfb000100df973e77880ff68\"" Jul 12 00:08:22.624772 containerd[1476]: time="2025-07-12T00:08:22.624570023Z" level=info msg="StartContainer for \"7e7e426f1728df69d8d0ed8e116250c7986224f1cfb000100df973e77880ff68\"" Jul 12 00:08:22.725866 systemd[1]: Started cri-containerd-7e7e426f1728df69d8d0ed8e116250c7986224f1cfb000100df973e77880ff68.scope - libcontainer container 7e7e426f1728df69d8d0ed8e116250c7986224f1cfb000100df973e77880ff68. Jul 12 00:08:22.812446 containerd[1476]: time="2025-07-12T00:08:22.812397234Z" level=info msg="StartContainer for \"7e7e426f1728df69d8d0ed8e116250c7986224f1cfb000100df973e77880ff68\" returns successfully" Jul 12 00:08:23.819078 systemd[1]: run-containerd-runc-k8s.io-7e7e426f1728df69d8d0ed8e116250c7986224f1cfb000100df973e77880ff68-runc.EZloX2.mount: Deactivated successfully. 
Jul 12 00:08:25.761006 containerd[1476]: time="2025-07-12T00:08:25.760920826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:25.762771 containerd[1476]: time="2025-07-12T00:08:25.762632876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:08:25.763514 containerd[1476]: time="2025-07-12T00:08:25.763343960Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:25.765910 containerd[1476]: time="2025-07-12T00:08:25.765850334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:25.766817 containerd[1476]: time="2025-07-12T00:08:25.766666499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.168243119s" Jul 12 00:08:25.766817 containerd[1476]: time="2025-07-12T00:08:25.766711739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:08:25.781168 containerd[1476]: time="2025-07-12T00:08:25.780448738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:08:25.788219 containerd[1476]: time="2025-07-12T00:08:25.788117142Z" level=info msg="CreateContainer within sandbox \"fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:08:25.811252 containerd[1476]: time="2025-07-12T00:08:25.810457871Z" level=info msg="CreateContainer within sandbox \"fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5db9f737554e95d0b7f5809091ed85d89e746674464e3c47473f0702eab6e7a3\"" Jul 12 00:08:25.811975 containerd[1476]: time="2025-07-12T00:08:25.811880239Z" level=info msg="StartContainer for \"5db9f737554e95d0b7f5809091ed85d89e746674464e3c47473f0702eab6e7a3\"" Jul 12 00:08:25.848430 systemd[1]: Started cri-containerd-5db9f737554e95d0b7f5809091ed85d89e746674464e3c47473f0702eab6e7a3.scope - libcontainer container 5db9f737554e95d0b7f5809091ed85d89e746674464e3c47473f0702eab6e7a3. Jul 12 00:08:25.892781 containerd[1476]: time="2025-07-12T00:08:25.892543742Z" level=info msg="StartContainer for \"5db9f737554e95d0b7f5809091ed85d89e746674464e3c47473f0702eab6e7a3\" returns successfully" Jul 12 00:08:26.383955 containerd[1476]: time="2025-07-12T00:08:26.383885388Z" level=info msg="StopPodSandbox for \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\"" Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.428 [WARNING][5185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6e2f19f-7174-405d-8aeb-93e33315aa19", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9", Pod:"csi-node-driver-4mp89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0537441883", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.429 [INFO][5185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.429 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" iface="eth0" netns="" Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.429 [INFO][5185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.429 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.461 [INFO][5192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.461 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.462 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.471 [WARNING][5192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.471 [INFO][5192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.473 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:26.477275 containerd[1476]: 2025-07-12 00:08:26.475 [INFO][5185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:26.478288 containerd[1476]: time="2025-07-12T00:08:26.477506952Z" level=info msg="TearDown network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\" successfully" Jul 12 00:08:26.478288 containerd[1476]: time="2025-07-12T00:08:26.477546472Z" level=info msg="StopPodSandbox for \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\" returns successfully" Jul 12 00:08:26.480039 containerd[1476]: time="2025-07-12T00:08:26.479557084Z" level=info msg="RemovePodSandbox for \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\"" Jul 12 00:08:26.480039 containerd[1476]: time="2025-07-12T00:08:26.479608044Z" level=info msg="Forcibly stopping sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\"" Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.522 [WARNING][5206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6e2f19f-7174-405d-8aeb-93e33315aa19", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9", Pod:"csi-node-driver-4mp89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0537441883", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.523 [INFO][5206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.523 [INFO][5206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" iface="eth0" netns="" Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.523 [INFO][5206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.523 [INFO][5206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.547 [INFO][5213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.547 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.547 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.560 [WARNING][5213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.560 [INFO][5213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" HandleID="k8s-pod-network.dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Workload="ci--4081--3--4--n--f6981960e0-k8s-csi--node--driver--4mp89-eth0" Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.562 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:26.565184 containerd[1476]: 2025-07-12 00:08:26.563 [INFO][5206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49" Jul 12 00:08:26.565777 containerd[1476]: time="2025-07-12T00:08:26.565237323Z" level=info msg="TearDown network for sandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\" successfully" Jul 12 00:08:26.570351 containerd[1476]: time="2025-07-12T00:08:26.570297672Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:26.570471 containerd[1476]: time="2025-07-12T00:08:26.570395952Z" level=info msg="RemovePodSandbox \"dfb11f025a4fe378fe4ca5bf556aafeec694dfaebafc673d0a0b058f92c57f49\" returns successfully" Jul 12 00:08:26.571677 containerd[1476]: time="2025-07-12T00:08:26.571330277Z" level=info msg="StopPodSandbox for \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\"" Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.623 [WARNING][5227] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0", GenerateName:"calico-apiserver-694bf789d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2a54327b-26b7-4764-93f1-2d3f8be7ff94", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694bf789d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5", Pod:"calico-apiserver-694bf789d4-jsvl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39a6f1d11b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.624 [INFO][5227] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.624 [INFO][5227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" iface="eth0" netns="" Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.624 [INFO][5227] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.624 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.665 [INFO][5236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.666 [INFO][5236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.666 [INFO][5236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.675 [WARNING][5236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.675 [INFO][5236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.677 [INFO][5236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:26.682068 containerd[1476]: 2025-07-12 00:08:26.679 [INFO][5227] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:26.682068 containerd[1476]: time="2025-07-12T00:08:26.682043657Z" level=info msg="TearDown network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\" successfully" Jul 12 00:08:26.683786 containerd[1476]: time="2025-07-12T00:08:26.682085977Z" level=info msg="StopPodSandbox for \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\" returns successfully" Jul 12 00:08:26.684623 containerd[1476]: time="2025-07-12T00:08:26.684017388Z" level=info msg="RemovePodSandbox for \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\"" Jul 12 00:08:26.684623 containerd[1476]: time="2025-07-12T00:08:26.684528791Z" level=info msg="Forcibly stopping sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\"" Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.724 [WARNING][5252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0", GenerateName:"calico-apiserver-694bf789d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2a54327b-26b7-4764-93f1-2d3f8be7ff94", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694bf789d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"8c4bbfe07e06f65b4a50d12b02600ae50a5dbde1094ed5dd89303b87d235b8f5", Pod:"calico-apiserver-694bf789d4-jsvl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39a6f1d11b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.724 [INFO][5252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.724 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" iface="eth0" netns="" Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.724 [INFO][5252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.724 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.746 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.746 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.747 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.756 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.756 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" HandleID="k8s-pod-network.ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--jsvl4-eth0" Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.758 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:26.762402 containerd[1476]: 2025-07-12 00:08:26.760 [INFO][5252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc" Jul 12 00:08:26.763426 containerd[1476]: time="2025-07-12T00:08:26.762452387Z" level=info msg="TearDown network for sandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\" successfully" Jul 12 00:08:26.766248 containerd[1476]: time="2025-07-12T00:08:26.766180568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:26.766506 containerd[1476]: time="2025-07-12T00:08:26.766271968Z" level=info msg="RemovePodSandbox \"ce26802e152e971c80e08181f83520eb382bb5d7d7ac3b69465dd45b9fbccadc\" returns successfully" Jul 12 00:08:26.768039 containerd[1476]: time="2025-07-12T00:08:26.767325694Z" level=info msg="StopPodSandbox for \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\"" Jul 12 00:08:26.846240 kubelet[2586]: I0712 00:08:26.843854 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-pqdm4" podStartSLOduration=27.014792709 podStartE2EDuration="34.843819762s" podCreationTimestamp="2025-07-12 00:07:52 +0000 UTC" firstStartedPulling="2025-07-12 00:08:14.767137433 +0000 UTC m=+48.536989807" lastFinishedPulling="2025-07-12 00:08:22.596164486 +0000 UTC m=+56.366016860" observedRunningTime="2025-07-12 00:08:23.799589693 +0000 UTC m=+57.569442067" watchObservedRunningTime="2025-07-12 00:08:26.843819762 +0000 UTC m=+60.613672136" Jul 12 00:08:26.846240 kubelet[2586]: I0712 00:08:26.844156 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6ccd79f89c-9zkg8" podStartSLOduration=24.953688119 podStartE2EDuration="34.844150404s" podCreationTimestamp="2025-07-12 00:07:52 +0000 UTC" firstStartedPulling="2025-07-12 00:08:15.877664462 +0000 UTC m=+49.647516796" lastFinishedPulling="2025-07-12 00:08:25.768126707 +0000 UTC m=+59.537979081" observedRunningTime="2025-07-12 00:08:26.842163593 +0000 UTC m=+60.612015927" watchObservedRunningTime="2025-07-12 00:08:26.844150404 +0000 UTC m=+60.614002738" Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.828 [WARNING][5273] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 
00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.828 [INFO][5273] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.828 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" iface="eth0" netns="" Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.828 [INFO][5273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.828 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.877 [INFO][5290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.877 [INFO][5290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.877 [INFO][5290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.891 [WARNING][5290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.891 [INFO][5290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.894 [INFO][5290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:26.899599 containerd[1476]: 2025-07-12 00:08:26.896 [INFO][5273] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:26.901129 containerd[1476]: time="2025-07-12T00:08:26.900586920Z" level=info msg="TearDown network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\" successfully" Jul 12 00:08:26.901129 containerd[1476]: time="2025-07-12T00:08:26.900776001Z" level=info msg="StopPodSandbox for \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\" returns successfully" Jul 12 00:08:26.903149 containerd[1476]: time="2025-07-12T00:08:26.903087174Z" level=info msg="RemovePodSandbox for \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\"" Jul 12 00:08:26.903149 containerd[1476]: time="2025-07-12T00:08:26.903135214Z" level=info msg="Forcibly stopping sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\"" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.956 [WARNING][5314] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" WorkloadEndpoint="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.956 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.956 [INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" iface="eth0" netns="" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.956 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.956 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.980 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.980 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.980 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.992 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.992 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" HandleID="k8s-pod-network.8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Workload="ci--4081--3--4--n--f6981960e0-k8s-whisker--74957578d7--gb4g4-eth0" Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.994 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:26.998168 containerd[1476]: 2025-07-12 00:08:26.996 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5" Jul 12 00:08:26.998168 containerd[1476]: time="2025-07-12T00:08:26.998118786Z" level=info msg="TearDown network for sandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\" successfully" Jul 12 00:08:27.002352 containerd[1476]: time="2025-07-12T00:08:27.002255529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:27.002352 containerd[1476]: time="2025-07-12T00:08:27.002402570Z" level=info msg="RemovePodSandbox \"8974938e7da3714d90a216188a2be8fea95b7a46f44521d586098b43736913f5\" returns successfully" Jul 12 00:08:27.004024 containerd[1476]: time="2025-07-12T00:08:27.003461535Z" level=info msg="StopPodSandbox for \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\"" Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.043 [WARNING][5335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b42f234-5a87-408f-b8c4-2ac2eed39fd7", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3", Pod:"coredns-668d6bf9bc-fmlsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb4d9d88fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.043 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.043 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" iface="eth0" netns="" Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.043 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.043 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.065 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.065 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.065 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.075 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.075 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.077 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:27.080504 containerd[1476]: 2025-07-12 00:08:27.079 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:27.081112 containerd[1476]: time="2025-07-12T00:08:27.081069878Z" level=info msg="TearDown network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\" successfully" Jul 12 00:08:27.081163 containerd[1476]: time="2025-07-12T00:08:27.081113078Z" level=info msg="StopPodSandbox for \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\" returns successfully" Jul 12 00:08:27.082264 containerd[1476]: time="2025-07-12T00:08:27.081740682Z" level=info msg="RemovePodSandbox for \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\"" Jul 12 00:08:27.082264 containerd[1476]: time="2025-07-12T00:08:27.081885243Z" level=info msg="Forcibly stopping sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\"" Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.126 [WARNING][5356] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b42f234-5a87-408f-b8c4-2ac2eed39fd7", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"9cc12992014623b00d9ecad0db7318fce2574d5916a96ab692acfdb84b8c79d3", Pod:"coredns-668d6bf9bc-fmlsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb4d9d88fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.126 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.126 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" iface="eth0" netns="" Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.126 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.126 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.152 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.152 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.153 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.165 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.165 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" HandleID="k8s-pod-network.2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--fmlsv-eth0" Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.168 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:27.171683 containerd[1476]: 2025-07-12 00:08:27.170 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3" Jul 12 00:08:27.173778 containerd[1476]: time="2025-07-12T00:08:27.172296335Z" level=info msg="TearDown network for sandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\" successfully" Jul 12 00:08:27.177379 containerd[1476]: time="2025-07-12T00:08:27.176911681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:27.177379 containerd[1476]: time="2025-07-12T00:08:27.177040521Z" level=info msg="RemovePodSandbox \"2f88498ac61eea5b2478fc2e605c91f4d59c435e0afe1dc62377d1e9141c86f3\" returns successfully" Jul 12 00:08:27.177657 containerd[1476]: time="2025-07-12T00:08:27.177631604Z" level=info msg="StopPodSandbox for \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\"" Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.237 [WARNING][5377] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e0f120ef-124f-4f0f-8ada-007ac6b4610e", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64", Pod:"goldmane-768f4c5c69-pqdm4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali123987b7434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.237 [INFO][5377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.237 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" iface="eth0" netns="" Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.237 [INFO][5377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.237 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.285 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.285 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.285 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.303 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.303 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.306 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:27.313385 containerd[1476]: 2025-07-12 00:08:27.310 [INFO][5377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:27.313385 containerd[1476]: time="2025-07-12T00:08:27.313203743Z" level=info msg="TearDown network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\" successfully" Jul 12 00:08:27.313385 containerd[1476]: time="2025-07-12T00:08:27.313257384Z" level=info msg="StopPodSandbox for \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\" returns successfully" Jul 12 00:08:27.314480 containerd[1476]: time="2025-07-12T00:08:27.313786666Z" level=info msg="RemovePodSandbox for \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\"" Jul 12 00:08:27.314480 containerd[1476]: time="2025-07-12T00:08:27.313818067Z" level=info msg="Forcibly stopping sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\"" Jul 12 00:08:27.432510 containerd[1476]: time="2025-07-12T00:08:27.432320312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:27.435077 containerd[1476]: time="2025-07-12T00:08:27.434358603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:08:27.437631 containerd[1476]: time="2025-07-12T00:08:27.437585421Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:27.443755 containerd[1476]: time="2025-07-12T00:08:27.442439528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.661940109s" Jul 12 00:08:27.443971 containerd[1476]: time="2025-07-12T00:08:27.443939136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:08:27.444131 containerd[1476]: time="2025-07-12T00:08:27.442733089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.372 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e0f120ef-124f-4f0f-8ada-007ac6b4610e", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"dd4746ebf21357a55eaf8de5210e3dab958bb17173571e427997a90c831e8f64", Pod:"goldmane-768f4c5c69-pqdm4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali123987b7434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.373 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.373 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" iface="eth0" netns="" Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.373 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.373 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.417 [INFO][5410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.418 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.418 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.433 [WARNING][5410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.433 [INFO][5410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" HandleID="k8s-pod-network.052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Workload="ci--4081--3--4--n--f6981960e0-k8s-goldmane--768f4c5c69--pqdm4-eth0" Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.435 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:27.449455 containerd[1476]: 2025-07-12 00:08:27.442 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f" Jul 12 00:08:27.449455 containerd[1476]: time="2025-07-12T00:08:27.447817437Z" level=info msg="TearDown network for sandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\" successfully" Jul 12 00:08:27.457033 containerd[1476]: time="2025-07-12T00:08:27.456873406Z" level=info msg="CreateContainer within sandbox \"7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:08:27.459548 containerd[1476]: time="2025-07-12T00:08:27.459496940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:27.459650 containerd[1476]: time="2025-07-12T00:08:27.459569461Z" level=info msg="RemovePodSandbox \"052d37f2cf1ebb5aa1e2ca5d415c4032512d9f4c46ee90f50cb782a6f634192f\" returns successfully" Jul 12 00:08:27.460641 containerd[1476]: time="2025-07-12T00:08:27.460595746Z" level=info msg="StopPodSandbox for \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\"" Jul 12 00:08:27.482889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212850002.mount: Deactivated successfully. Jul 12 00:08:27.489549 containerd[1476]: time="2025-07-12T00:08:27.488951021Z" level=info msg="CreateContainer within sandbox \"7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6380b207f84ec12ad1a0943681ba1a6763fee45435163e59950437f346c5bcdb\"" Jul 12 00:08:27.491437 containerd[1476]: time="2025-07-12T00:08:27.491366674Z" level=info msg="StartContainer for \"6380b207f84ec12ad1a0943681ba1a6763fee45435163e59950437f346c5bcdb\"" Jul 12 00:08:27.540396 systemd[1]: Started cri-containerd-6380b207f84ec12ad1a0943681ba1a6763fee45435163e59950437f346c5bcdb.scope - libcontainer container 6380b207f84ec12ad1a0943681ba1a6763fee45435163e59950437f346c5bcdb. 
Jul 12 00:08:27.599918 containerd[1476]: time="2025-07-12T00:08:27.599688744Z" level=info msg="StartContainer for \"6380b207f84ec12ad1a0943681ba1a6763fee45435163e59950437f346c5bcdb\" returns successfully" Jul 12 00:08:27.605450 containerd[1476]: time="2025-07-12T00:08:27.605411776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.555 [WARNING][5445] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0", GenerateName:"calico-kube-controllers-6ccd79f89c-", Namespace:"calico-system", SelfLink:"", UID:"38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccd79f89c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5", Pod:"calico-kube-controllers-6ccd79f89c-9zkg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif74458574fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.555 [INFO][5445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.556 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" iface="eth0" netns="" Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.556 [INFO][5445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.556 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.594 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.596 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.596 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.609 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.609 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.612 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:27.618336 containerd[1476]: 2025-07-12 00:08:27.615 [INFO][5445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:27.618759 containerd[1476]: time="2025-07-12T00:08:27.618391886Z" level=info msg="TearDown network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\" successfully" Jul 12 00:08:27.618759 containerd[1476]: time="2025-07-12T00:08:27.618416567Z" level=info msg="StopPodSandbox for \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\" returns successfully" Jul 12 00:08:27.619135 containerd[1476]: time="2025-07-12T00:08:27.618983050Z" level=info msg="RemovePodSandbox for \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\"" Jul 12 00:08:27.619269 containerd[1476]: time="2025-07-12T00:08:27.619241651Z" level=info msg="Forcibly stopping sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\"" Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.668 [WARNING][5501] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0", GenerateName:"calico-kube-controllers-6ccd79f89c-", Namespace:"calico-system", SelfLink:"", UID:"38a85e2e-1a0b-4fb0-b2a5-5c3bb45039e7", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccd79f89c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"fa4e53246feea475034aa56b3e1bcf8ba87fd317210cddc5f586fe9f3222c2b5", Pod:"calico-kube-controllers-6ccd79f89c-9zkg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif74458574fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.668 [INFO][5501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.668 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" iface="eth0" netns="" Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.668 [INFO][5501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.668 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.690 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.690 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.690 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.701 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.701 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" HandleID="k8s-pod-network.9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--kube--controllers--6ccd79f89c--9zkg8-eth0" Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.703 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:27.706240 containerd[1476]: 2025-07-12 00:08:27.704 [INFO][5501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c" Jul 12 00:08:27.706664 containerd[1476]: time="2025-07-12T00:08:27.706283205Z" level=info msg="TearDown network for sandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\" successfully" Jul 12 00:08:27.710910 containerd[1476]: time="2025-07-12T00:08:27.710776950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:27.711024 containerd[1476]: time="2025-07-12T00:08:27.710947791Z" level=info msg="RemovePodSandbox \"9df73bfef9de647a300e05ba079706468df39e9261ee5401adaf5fd3aafd652c\" returns successfully" Jul 12 00:08:27.711502 containerd[1476]: time="2025-07-12T00:08:27.711461514Z" level=info msg="StopPodSandbox for \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\"" Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.756 [WARNING][5522] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"061bde8d-e1c8-4411-baca-b18c385b32b1", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770", Pod:"coredns-668d6bf9bc-6tkc8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc9f8c933c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.756 [INFO][5522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.756 [INFO][5522] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" iface="eth0" netns="" Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.756 [INFO][5522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.756 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.783 [INFO][5529] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.783 [INFO][5529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.783 [INFO][5529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.794 [WARNING][5529] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.794 [INFO][5529] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.797 [INFO][5529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:27.801756 containerd[1476]: 2025-07-12 00:08:27.799 [INFO][5522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:27.801756 containerd[1476]: time="2025-07-12T00:08:27.801795086Z" level=info msg="TearDown network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\" successfully" Jul 12 00:08:27.801756 containerd[1476]: time="2025-07-12T00:08:27.801839846Z" level=info msg="StopPodSandbox for \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\" returns successfully" Jul 12 00:08:27.802870 containerd[1476]: time="2025-07-12T00:08:27.802558850Z" level=info msg="RemovePodSandbox for \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\"" Jul 12 00:08:27.802870 containerd[1476]: time="2025-07-12T00:08:27.802667651Z" level=info msg="Forcibly stopping sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\"" Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.855 [WARNING][5543] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"061bde8d-e1c8-4411-baca-b18c385b32b1", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"c3dc8a562b776eb5bc46312057e3045a6bde4f2759238dcc4b5e78570e27d770", Pod:"coredns-668d6bf9bc-6tkc8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc9f8c933c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.856 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.856 [INFO][5543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" iface="eth0" netns="" Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.856 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.856 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.886 [INFO][5550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.886 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.887 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.904 [WARNING][5550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.904 [INFO][5550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" HandleID="k8s-pod-network.f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Workload="ci--4081--3--4--n--f6981960e0-k8s-coredns--668d6bf9bc--6tkc8-eth0" Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.907 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:27.912314 containerd[1476]: 2025-07-12 00:08:27.908 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008" Jul 12 00:08:27.912314 containerd[1476]: time="2025-07-12T00:08:27.911703565Z" level=info msg="TearDown network for sandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\" successfully" Jul 12 00:08:27.918511 containerd[1476]: time="2025-07-12T00:08:27.917272275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:27.918511 containerd[1476]: time="2025-07-12T00:08:27.917355996Z" level=info msg="RemovePodSandbox \"f8f6a27f6ade4dc0d004d973501bf4189b322089e6565d8a7a1df5958c9ea008\" returns successfully" Jul 12 00:08:27.919466 containerd[1476]: time="2025-07-12T00:08:27.918973764Z" level=info msg="StopPodSandbox for \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\"" Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:27.968 [WARNING][5564] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0", GenerateName:"calico-apiserver-694bf789d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5b5b2b4-7d84-4c99-896a-91d48632272f", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694bf789d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2", Pod:"calico-apiserver-694bf789d4-flhzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aaf8d26815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:27.969 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:27.969 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" iface="eth0" netns="" Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:27.969 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:27.969 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:28.006 [INFO][5571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:28.007 [INFO][5571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:28.007 [INFO][5571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:28.017 [WARNING][5571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:28.017 [INFO][5571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:28.020 [INFO][5571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:28.024939 containerd[1476]: 2025-07-12 00:08:28.023 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:28.026309 containerd[1476]: time="2025-07-12T00:08:28.024979339Z" level=info msg="TearDown network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\" successfully" Jul 12 00:08:28.026309 containerd[1476]: time="2025-07-12T00:08:28.025008459Z" level=info msg="StopPodSandbox for \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\" returns successfully" Jul 12 00:08:28.026309 containerd[1476]: time="2025-07-12T00:08:28.025737583Z" level=info msg="RemovePodSandbox for \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\"" Jul 12 00:08:28.026309 containerd[1476]: time="2025-07-12T00:08:28.025773863Z" level=info msg="Forcibly stopping sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\"" Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.072 [WARNING][5585] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0", GenerateName:"calico-apiserver-694bf789d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5b5b2b4-7d84-4c99-896a-91d48632272f", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694bf789d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-f6981960e0", ContainerID:"0ebb3e5362eed38ae55f3c510adfc101f5a7606e559f7e010fa5bf64daa198a2", Pod:"calico-apiserver-694bf789d4-flhzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aaf8d26815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.072 [INFO][5585] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.072 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" iface="eth0" netns="" Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.072 [INFO][5585] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.072 [INFO][5585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.101 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.101 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.101 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.112 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.112 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" HandleID="k8s-pod-network.57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Workload="ci--4081--3--4--n--f6981960e0-k8s-calico--apiserver--694bf789d4--flhzn-eth0" Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.114 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:28.117721 containerd[1476]: 2025-07-12 00:08:28.116 [INFO][5585] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e" Jul 12 00:08:28.117721 containerd[1476]: time="2025-07-12T00:08:28.117677391Z" level=info msg="TearDown network for sandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\" successfully" Jul 12 00:08:28.122371 containerd[1476]: time="2025-07-12T00:08:28.122327695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:28.122486 containerd[1476]: time="2025-07-12T00:08:28.122415696Z" level=info msg="RemovePodSandbox \"57c3d17ae3e26b35ea0253cde19561f89e2dcf55a8e7ea56ac63881363ba322e\" returns successfully" Jul 12 00:08:30.028859 containerd[1476]: time="2025-07-12T00:08:30.028775229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:30.030936 containerd[1476]: time="2025-07-12T00:08:30.030743039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:08:30.035244 containerd[1476]: time="2025-07-12T00:08:30.033975655Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:30.039463 containerd[1476]: time="2025-07-12T00:08:30.039421762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:30.040067 containerd[1476]: time="2025-07-12T00:08:30.040029405Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.434418908s" Jul 12 00:08:30.040910 containerd[1476]: time="2025-07-12T00:08:30.040687689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:08:30.044347 containerd[1476]: 
time="2025-07-12T00:08:30.044309147Z" level=info msg="CreateContainer within sandbox \"7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:08:30.062652 containerd[1476]: time="2025-07-12T00:08:30.062599199Z" level=info msg="CreateContainer within sandbox \"7969c9147ee6357aa27a4bc2549209e11d366f5a0df0c4d99f7a5ce8e02ca6d9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2a69a55934c8a9193e58a7a7c091b8a3a5f70eb6849330de0114e70121da5e9f\"" Jul 12 00:08:30.063519 containerd[1476]: time="2025-07-12T00:08:30.063439483Z" level=info msg="StartContainer for \"2a69a55934c8a9193e58a7a7c091b8a3a5f70eb6849330de0114e70121da5e9f\"" Jul 12 00:08:30.099402 systemd[1]: Started cri-containerd-2a69a55934c8a9193e58a7a7c091b8a3a5f70eb6849330de0114e70121da5e9f.scope - libcontainer container 2a69a55934c8a9193e58a7a7c091b8a3a5f70eb6849330de0114e70121da5e9f. Jul 12 00:08:30.133647 containerd[1476]: time="2025-07-12T00:08:30.133562076Z" level=info msg="StartContainer for \"2a69a55934c8a9193e58a7a7c091b8a3a5f70eb6849330de0114e70121da5e9f\" returns successfully" Jul 12 00:08:30.502733 kubelet[2586]: I0712 00:08:30.502665 2586 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:08:30.502733 kubelet[2586]: I0712 00:08:30.502733 2586 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:08:30.865951 kubelet[2586]: I0712 00:08:30.865855 2586 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4mp89" podStartSLOduration=25.603711042 podStartE2EDuration="38.865828125s" podCreationTimestamp="2025-07-12 00:07:52 +0000 UTC" firstStartedPulling="2025-07-12 00:08:16.780336975 +0000 UTC m=+50.550189349" lastFinishedPulling="2025-07-12 00:08:30.042454058 +0000 UTC m=+63.812306432" observedRunningTime="2025-07-12 00:08:30.865105641 +0000 UTC m=+64.634958055" watchObservedRunningTime="2025-07-12 00:08:30.865828125 +0000 UTC m=+64.635680499" Jul 12 00:08:33.224927 kubelet[2586]: I0712 00:08:33.224855 2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:38.881399 update_engine[1460]: I20250712 00:08:38.880367 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 12 00:08:38.881399 update_engine[1460]: I20250712 00:08:38.880420 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 12 00:08:38.881399 update_engine[1460]: I20250712 00:08:38.880697 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 12 00:08:38.883290 update_engine[1460]: I20250712 00:08:38.882386 1460 omaha_request_params.cc:62] Current group set to lts Jul 12 00:08:38.883290 update_engine[1460]: I20250712 00:08:38.882647 1460 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 12 00:08:38.883290 update_engine[1460]: I20250712 00:08:38.882668 1460 update_attempter.cc:643] Scheduling an action processor start. 
Jul 12 00:08:38.883290 update_engine[1460]: I20250712 00:08:38.882688 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 12 00:08:38.886197 update_engine[1460]: I20250712 00:08:38.886132 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 12 00:08:38.886474 update_engine[1460]: I20250712 00:08:38.886451 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 12 00:08:38.886541 update_engine[1460]: I20250712 00:08:38.886524 1460 omaha_request_action.cc:272] Request: Jul 12 00:08:38.886541 update_engine[1460]: Jul 12 00:08:38.886541 update_engine[1460]: Jul 12 00:08:38.886541 update_engine[1460]: Jul 12 00:08:38.886541 update_engine[1460]: Jul 12 00:08:38.886541 update_engine[1460]: Jul 12 00:08:38.886541 update_engine[1460]: Jul 12 00:08:38.886541 update_engine[1460]: Jul 12 00:08:38.886541 update_engine[1460]: Jul 12 00:08:38.887227 update_engine[1460]: I20250712 00:08:38.886826 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:08:38.891280 locksmithd[1482]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 12 00:08:38.892261 update_engine[1460]: I20250712 00:08:38.891937 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:08:38.892405 update_engine[1460]: I20250712 00:08:38.892347 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:08:38.895459 update_engine[1460]: E20250712 00:08:38.895395 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:08:38.895563 update_engine[1460]: I20250712 00:08:38.895502 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 12 00:08:40.265631 kubelet[2586]: I0712 00:08:40.265425 2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:48.882371 update_engine[1460]: I20250712 00:08:48.882294 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:08:48.882909 update_engine[1460]: I20250712 00:08:48.882539 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:08:48.882909 update_engine[1460]: I20250712 00:08:48.882784 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:08:48.885743 update_engine[1460]: E20250712 00:08:48.885687 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:08:48.885872 update_engine[1460]: I20250712 00:08:48.885763 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 12 00:08:54.805155 systemd[1]: run-containerd-runc-k8s.io-7e7e426f1728df69d8d0ed8e116250c7986224f1cfb000100df973e77880ff68-runc.bJFnnC.mount: Deactivated successfully. Jul 12 00:08:58.882324 update_engine[1460]: I20250712 00:08:58.882246 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:08:58.882682 update_engine[1460]: I20250712 00:08:58.882480 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:08:58.882744 update_engine[1460]: I20250712 00:08:58.882704 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 12 00:08:58.884655 update_engine[1460]: E20250712 00:08:58.884606 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:08:58.884772 update_engine[1460]: I20250712 00:08:58.884670 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 12 00:09:08.882530 update_engine[1460]: I20250712 00:09:08.882391 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:09:08.882978 update_engine[1460]: I20250712 00:09:08.882933 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:09:08.883352 update_engine[1460]: I20250712 00:09:08.883311 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:09:08.883988 update_engine[1460]: E20250712 00:09:08.883947 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:09:08.884058 update_engine[1460]: I20250712 00:09:08.884017 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 12 00:09:08.884058 update_engine[1460]: I20250712 00:09:08.884029 1460 omaha_request_action.cc:617] Omaha request response: Jul 12 00:09:08.884135 update_engine[1460]: E20250712 00:09:08.884115 1460 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 12 00:09:08.884166 update_engine[1460]: I20250712 00:09:08.884140 1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 12 00:09:08.884166 update_engine[1460]: I20250712 00:09:08.884146 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:09:08.884166 update_engine[1460]: I20250712 00:09:08.884151 1460 update_attempter.cc:306] Processing Done. Jul 12 00:09:08.884249 update_engine[1460]: E20250712 00:09:08.884164 1460 update_attempter.cc:619] Update failed. Jul 12 00:09:08.884249 update_engine[1460]: I20250712 00:09:08.884171 1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 12 00:09:08.884249 update_engine[1460]: I20250712 00:09:08.884176 1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 12 00:09:08.884249 update_engine[1460]: I20250712 00:09:08.884182 1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 12 00:09:08.884396 update_engine[1460]: I20250712 00:09:08.884270 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 12 00:09:08.884396 update_engine[1460]: I20250712 00:09:08.884295 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 12 00:09:08.884396 update_engine[1460]: I20250712 00:09:08.884301 1460 omaha_request_action.cc:272] Request: Jul 12 00:09:08.884396 update_engine[1460]: [Omaha request XML body omitted: angle-bracket markup stripped during log capture] Jul 12 00:09:08.884396 update_engine[1460]: I20250712 00:09:08.884307 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:09:08.884657 update_engine[1460]: I20250712 00:09:08.884455 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:09:08.884720 update_engine[1460]: I20250712 00:09:08.884686 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 12 00:09:08.885656 locksmithd[1482]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 12 00:09:08.885948 update_engine[1460]: E20250712 00:09:08.885372 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:09:08.885948 update_engine[1460]: I20250712 00:09:08.885430 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 12 00:09:08.885948 update_engine[1460]: I20250712 00:09:08.885439 1460 omaha_request_action.cc:617] Omaha request response: Jul 12 00:09:08.885948 update_engine[1460]: I20250712 00:09:08.885447 1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:09:08.885948 update_engine[1460]: I20250712 00:09:08.885452 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:09:08.885948 update_engine[1460]: I20250712 00:09:08.885455 1460 update_attempter.cc:306] Processing Done. Jul 12 00:09:08.885948 update_engine[1460]: I20250712 00:09:08.885461 1460 update_attempter.cc:310] Error event sent. Jul 12 00:09:08.885948 update_engine[1460]: I20250712 00:09:08.885470 1460 update_check_scheduler.cc:74] Next update check in 48m2s Jul 12 00:09:08.886118 locksmithd[1482]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 12 00:09:56.838989 systemd[1]: run-containerd-runc-k8s.io-5db9f737554e95d0b7f5809091ed85d89e746674464e3c47473f0702eab6e7a3-runc.uCkBAT.mount: Deactivated successfully. Jul 12 00:10:09.142732 systemd[1]: Started sshd@7-91.99.219.165:22-139.178.68.195:39402.service - OpenSSH per-connection server daemon (139.178.68.195:39402). Jul 12 00:10:10.125150 sshd[5938]: Accepted publickey for core from 139.178.68.195 port 39402 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:10.127555 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:10.137840 systemd-logind[1459]: New session 8 of user core. Jul 12 00:10:10.141276 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:10:10.913127 sshd[5938]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:10.920628 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:10:10.921792 systemd[1]: sshd@7-91.99.219.165:22-139.178.68.195:39402.service: Deactivated successfully. Jul 12 00:10:10.927594 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:10:10.930550 systemd-logind[1459]: Removed session 8. Jul 12 00:10:16.101700 systemd[1]: Started sshd@8-91.99.219.165:22-139.178.68.195:39408.service - OpenSSH per-connection server daemon (139.178.68.195:39408). Jul 12 00:10:17.095377 sshd[5974]: Accepted publickey for core from 139.178.68.195 port 39408 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:17.097403 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:17.102950 systemd-logind[1459]: New session 9 of user core. Jul 12 00:10:17.108537 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:10:17.865049 sshd[5974]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:17.871470 systemd[1]: sshd@8-91.99.219.165:22-139.178.68.195:39408.service: Deactivated successfully. Jul 12 00:10:17.874147 systemd[1]: session-9.scope: Deactivated successfully. 
Jul 12 00:10:17.875488 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:10:17.876831 systemd-logind[1459]: Removed session 9. Jul 12 00:10:23.035750 systemd[1]: Started sshd@9-91.99.219.165:22-139.178.68.195:53276.service - OpenSSH per-connection server daemon (139.178.68.195:53276). Jul 12 00:10:24.019044 sshd[6007]: Accepted publickey for core from 139.178.68.195 port 53276 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:24.021583 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:24.027171 systemd-logind[1459]: New session 10 of user core. Jul 12 00:10:24.031502 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:10:24.771607 sshd[6007]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:24.777458 systemd[1]: sshd@9-91.99.219.165:22-139.178.68.195:53276.service: Deactivated successfully. Jul 12 00:10:24.781266 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:10:24.787521 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:10:24.789470 systemd-logind[1459]: Removed session 10. Jul 12 00:10:24.947673 systemd[1]: Started sshd@10-91.99.219.165:22-139.178.68.195:53288.service - OpenSSH per-connection server daemon (139.178.68.195:53288). Jul 12 00:10:25.933866 sshd[6040]: Accepted publickey for core from 139.178.68.195 port 53288 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:25.935792 sshd[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:25.940545 systemd-logind[1459]: New session 11 of user core. Jul 12 00:10:25.947492 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:10:26.738570 sshd[6040]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:26.743675 systemd[1]: sshd@10-91.99.219.165:22-139.178.68.195:53288.service: Deactivated successfully. Jul 12 00:10:26.748253 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:10:26.749529 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:10:26.750731 systemd-logind[1459]: Removed session 11. Jul 12 00:10:26.919498 systemd[1]: Started sshd@11-91.99.219.165:22-139.178.68.195:53292.service - OpenSSH per-connection server daemon (139.178.68.195:53292). Jul 12 00:10:27.911086 sshd[6072]: Accepted publickey for core from 139.178.68.195 port 53292 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:27.913233 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:27.919366 systemd-logind[1459]: New session 12 of user core. Jul 12 00:10:27.926402 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:10:28.676772 sshd[6072]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:28.681253 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:10:28.682075 systemd[1]: sshd@11-91.99.219.165:22-139.178.68.195:53292.service: Deactivated successfully. Jul 12 00:10:28.684846 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:10:28.686184 systemd-logind[1459]: Removed session 12. Jul 12 00:10:33.850890 systemd[1]: Started sshd@12-91.99.219.165:22-139.178.68.195:58434.service - OpenSSH per-connection server daemon (139.178.68.195:58434). 
Jul 12 00:10:34.833974 sshd[6111]: Accepted publickey for core from 139.178.68.195 port 58434 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:34.836172 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:34.841927 systemd-logind[1459]: New session 13 of user core. Jul 12 00:10:34.849470 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:10:35.593440 sshd[6111]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:35.599038 systemd[1]: sshd@12-91.99.219.165:22-139.178.68.195:58434.service: Deactivated successfully. Jul 12 00:10:35.602150 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:10:35.603785 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:10:35.604862 systemd-logind[1459]: Removed session 13. Jul 12 00:10:40.773548 systemd[1]: Started sshd@13-91.99.219.165:22-139.178.68.195:54256.service - OpenSSH per-connection server daemon (139.178.68.195:54256). Jul 12 00:10:41.769264 sshd[6124]: Accepted publickey for core from 139.178.68.195 port 54256 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:41.770958 sshd[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:41.778702 systemd-logind[1459]: New session 14 of user core. Jul 12 00:10:41.785391 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:10:42.573686 sshd[6124]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:42.579196 systemd[1]: sshd@13-91.99.219.165:22-139.178.68.195:54256.service: Deactivated successfully. Jul 12 00:10:42.584704 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:10:42.587569 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:10:42.589133 systemd-logind[1459]: Removed session 14. Jul 12 00:10:47.752743 systemd[1]: Started sshd@14-91.99.219.165:22-139.178.68.195:54268.service - OpenSSH per-connection server daemon (139.178.68.195:54268). Jul 12 00:10:48.722472 sshd[6160]: Accepted publickey for core from 139.178.68.195 port 54268 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:48.724343 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:48.729268 systemd-logind[1459]: New session 15 of user core. Jul 12 00:10:48.738566 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:10:49.485567 sshd[6160]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:49.489800 systemd[1]: sshd@14-91.99.219.165:22-139.178.68.195:54268.service: Deactivated successfully. Jul 12 00:10:49.493841 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:10:49.495083 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:10:49.496572 systemd-logind[1459]: Removed session 15. Jul 12 00:10:49.662944 systemd[1]: Started sshd@15-91.99.219.165:22-139.178.68.195:58930.service - OpenSSH per-connection server daemon (139.178.68.195:58930). Jul 12 00:10:50.635299 sshd[6173]: Accepted publickey for core from 139.178.68.195 port 58930 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:50.638291 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:50.645100 systemd-logind[1459]: New session 16 of user core. Jul 12 00:10:50.651465 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 12 00:10:51.541043 sshd[6173]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:51.548365 systemd[1]: sshd@15-91.99.219.165:22-139.178.68.195:58930.service: Deactivated successfully. Jul 12 00:10:51.552370 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:10:51.553797 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:10:51.555023 systemd-logind[1459]: Removed session 16. Jul 12 00:10:51.723056 systemd[1]: Started sshd@16-91.99.219.165:22-139.178.68.195:58942.service - OpenSSH per-connection server daemon (139.178.68.195:58942). Jul 12 00:10:52.720706 sshd[6184]: Accepted publickey for core from 139.178.68.195 port 58942 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:52.723382 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:52.729318 systemd-logind[1459]: New session 17 of user core. Jul 12 00:10:52.738571 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:10:54.471999 sshd[6184]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:54.477032 systemd[1]: sshd@16-91.99.219.165:22-139.178.68.195:58942.service: Deactivated successfully. Jul 12 00:10:54.480106 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:10:54.483232 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:10:54.484747 systemd-logind[1459]: Removed session 17. Jul 12 00:10:54.647504 systemd[1]: Started sshd@17-91.99.219.165:22-139.178.68.195:58956.service - OpenSSH per-connection server daemon (139.178.68.195:58956). Jul 12 00:10:55.651351 sshd[6211]: Accepted publickey for core from 139.178.68.195 port 58956 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:55.654028 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:55.658724 systemd-logind[1459]: New session 18 of user core. Jul 12 00:10:55.666602 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:10:56.546654 sshd[6211]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:56.550684 systemd[1]: sshd@17-91.99.219.165:22-139.178.68.195:58956.service: Deactivated successfully. Jul 12 00:10:56.552614 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:10:56.554514 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:10:56.556773 systemd-logind[1459]: Removed session 18. Jul 12 00:10:56.719493 systemd[1]: Started sshd@18-91.99.219.165:22-139.178.68.195:58970.service - OpenSSH per-connection server daemon (139.178.68.195:58970). Jul 12 00:10:57.728024 sshd[6242]: Accepted publickey for core from 139.178.68.195 port 58970 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:10:57.730074 sshd[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:57.735139 systemd-logind[1459]: New session 19 of user core. Jul 12 00:10:57.740547 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:10:58.487890 sshd[6242]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:58.493204 systemd[1]: sshd@18-91.99.219.165:22-139.178.68.195:58970.service: Deactivated successfully. Jul 12 00:10:58.496754 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:10:58.497916 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:10:58.498995 systemd-logind[1459]: Removed session 19. 
Jul 12 00:11:03.660939 systemd[1]: Started sshd@19-91.99.219.165:22-139.178.68.195:48930.service - OpenSSH per-connection server daemon (139.178.68.195:48930). Jul 12 00:11:04.658545 sshd[6282]: Accepted publickey for core from 139.178.68.195 port 48930 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:11:04.660990 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:04.668383 systemd-logind[1459]: New session 20 of user core. Jul 12 00:11:04.674524 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:11:05.433884 sshd[6282]: pam_unix(sshd:session): session closed for user core Jul 12 00:11:05.442829 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:11:05.443627 systemd[1]: sshd@19-91.99.219.165:22-139.178.68.195:48930.service: Deactivated successfully. Jul 12 00:11:05.449144 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:11:05.453612 systemd-logind[1459]: Removed session 20. Jul 12 00:11:10.612644 systemd[1]: Started sshd@20-91.99.219.165:22-139.178.68.195:50040.service - OpenSSH per-connection server daemon (139.178.68.195:50040). Jul 12 00:11:11.589164 sshd[6296]: Accepted publickey for core from 139.178.68.195 port 50040 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:11:11.591777 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:11.597164 systemd-logind[1459]: New session 21 of user core. Jul 12 00:11:11.603433 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 00:11:12.336786 sshd[6296]: pam_unix(sshd:session): session closed for user core Jul 12 00:11:12.342999 systemd[1]: sshd@20-91.99.219.165:22-139.178.68.195:50040.service: Deactivated successfully. Jul 12 00:11:12.346157 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:11:12.348975 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:11:12.350636 systemd-logind[1459]: Removed session 21. Jul 12 00:11:26.965939 systemd[1]: cri-containerd-ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4.scope: Deactivated successfully. Jul 12 00:11:26.966645 systemd[1]: cri-containerd-ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4.scope: Consumed 25.790s CPU time. Jul 12 00:11:26.990165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4-rootfs.mount: Deactivated successfully. 
Jul 12 00:11:26.990410 containerd[1476]: time="2025-07-12T00:11:26.990343239Z" level=info msg="shim disconnected" id=ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4 namespace=k8s.io Jul 12 00:11:26.990410 containerd[1476]: time="2025-07-12T00:11:26.990397239Z" level=warning msg="cleaning up after shim disconnected" id=ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4 namespace=k8s.io Jul 12 00:11:26.990410 containerd[1476]: time="2025-07-12T00:11:26.990405919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:11:27.383887 kubelet[2586]: I0712 00:11:27.383424 2586 scope.go:117] "RemoveContainer" containerID="ab4fb9411743bb06730c77bc3eee8e3927c714499c24ba38bb7233cb37a7bcb4" Jul 12 00:11:27.387175 containerd[1476]: time="2025-07-12T00:11:27.387132204Z" level=info msg="CreateContainer within sandbox \"1ee7e0147964535d75b6d23e436cf684091586f9ca7732d02702fd1f4f7964cd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 12 00:11:27.400755 containerd[1476]: time="2025-07-12T00:11:27.400290536Z" level=info msg="CreateContainer within sandbox \"1ee7e0147964535d75b6d23e436cf684091586f9ca7732d02702fd1f4f7964cd\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d8a1c3b8e31010a93fe1735489c7b8b913f2510c87b7330af9540222fe265466\"" Jul 12 00:11:27.408694 kubelet[2586]: E0712 00:11:27.408469 2586 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48376->10.0.0.2:2379: read: connection timed out" Jul 12 00:11:27.415813 containerd[1476]: time="2025-07-12T00:11:27.415769990Z" level=info msg="StartContainer for \"d8a1c3b8e31010a93fe1735489c7b8b913f2510c87b7330af9540222fe265466\"" Jul 12 00:11:27.452490 systemd[1]: Started cri-containerd-d8a1c3b8e31010a93fe1735489c7b8b913f2510c87b7330af9540222fe265466.scope - libcontainer container d8a1c3b8e31010a93fe1735489c7b8b913f2510c87b7330af9540222fe265466. Jul 12 00:11:27.487723 containerd[1476]: time="2025-07-12T00:11:27.487312616Z" level=info msg="StartContainer for \"d8a1c3b8e31010a93fe1735489c7b8b913f2510c87b7330af9540222fe265466\" returns successfully" Jul 12 00:11:27.649660 systemd[1]: cri-containerd-65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2.scope: Deactivated successfully. Jul 12 00:11:27.650964 systemd[1]: cri-containerd-65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2.scope: Consumed 4.952s CPU time, 17.6M memory peak, 0B memory swap peak. Jul 12 00:11:27.671660 containerd[1476]: time="2025-07-12T00:11:27.671603665Z" level=info msg="shim disconnected" id=65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2 namespace=k8s.io Jul 12 00:11:27.671660 containerd[1476]: time="2025-07-12T00:11:27.671655945Z" level=warning msg="cleaning up after shim disconnected" id=65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2 namespace=k8s.io Jul 12 00:11:27.671660 containerd[1476]: time="2025-07-12T00:11:27.671665225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:11:27.824009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2-rootfs.mount: Deactivated successfully. 
Jul 12 00:11:28.390714 kubelet[2586]: I0712 00:11:28.390685 2586 scope.go:117] "RemoveContainer" containerID="65dfa0bb4a2519202cf4c95f3ed7c0c8ba2c61657424b029de3587fd549083c2" Jul 12 00:11:28.393225 containerd[1476]: time="2025-07-12T00:11:28.393166887Z" level=info msg="CreateContainer within sandbox \"8c622f2bc8d3af2caa95fab028ac675182cf1afaa4ca1a25f64286f729188284\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 12 00:11:28.409973 containerd[1476]: time="2025-07-12T00:11:28.409699382Z" level=info msg="CreateContainer within sandbox \"8c622f2bc8d3af2caa95fab028ac675182cf1afaa4ca1a25f64286f729188284\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"002a517deef0b00c803e95f546a2509e739784a910948774558d32162a745679\"" Jul 12 00:11:28.410425 containerd[1476]: time="2025-07-12T00:11:28.410289823Z" level=info msg="StartContainer for \"002a517deef0b00c803e95f546a2509e739784a910948774558d32162a745679\"" Jul 12 00:11:28.453821 systemd[1]: Started cri-containerd-002a517deef0b00c803e95f546a2509e739784a910948774558d32162a745679.scope - libcontainer container 002a517deef0b00c803e95f546a2509e739784a910948774558d32162a745679. Jul 12 00:11:28.492948 containerd[1476]: time="2025-07-12T00:11:28.492562538Z" level=info msg="StartContainer for \"002a517deef0b00c803e95f546a2509e739784a910948774558d32162a745679\" returns successfully" Jul 12 00:11:32.395000 kubelet[2586]: E0712 00:11:32.394664 2586 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48184->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-n-f6981960e0.1851588a8194816d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-n-f6981960e0,UID:4b34889502034e70050ac8568ea3f507,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-n-f6981960e0,},FirstTimestamp:2025-07-12 00:11:21.926558061 +0000 UTC m=+235.696410475,LastTimestamp:2025-07-12 00:11:21.926558061 +0000 UTC m=+235.696410475,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-n-f6981960e0,}" Jul 12 00:11:32.729828 systemd[1]: cri-containerd-11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43.scope: Deactivated successfully. Jul 12 00:11:32.730541 systemd[1]: cri-containerd-11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43.scope: Consumed 4.685s CPU time, 15.6M memory peak, 0B memory swap peak. 
Jul 12 00:11:32.753747 containerd[1476]: time="2025-07-12T00:11:32.753512126Z" level=info msg="shim disconnected" id=11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43 namespace=k8s.io Jul 12 00:11:32.753747 containerd[1476]: time="2025-07-12T00:11:32.753580926Z" level=warning msg="cleaning up after shim disconnected" id=11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43 namespace=k8s.io Jul 12 00:11:32.753747 containerd[1476]: time="2025-07-12T00:11:32.753591686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:11:32.755271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11d2ca7083c92ff4416a4c9dc0215d8d7eb76a094efbbedf7eda7603a0f62f43-rootfs.mount: Deactivated successfully.