May 10 00:03:50.882577 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 10 00:03:50.882599 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 10 00:03:50.882609 kernel: KASLR enabled
May 10 00:03:50.882614 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 10 00:03:50.882620 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
May 10 00:03:50.882637 kernel: random: crng init done
May 10 00:03:50.882645 kernel: ACPI: Early table checksum verification disabled
May 10 00:03:50.882651 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 10 00:03:50.882657 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 10 00:03:50.882665 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.882671 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.882677 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.882683 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.882690 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.882766 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.882775 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.882782 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.882788 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:50.883164 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 10 00:03:50.883171 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 10 00:03:50.883178 kernel: NUMA: Failed to initialise from firmware
May 10 00:03:50.883184 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 10 00:03:50.883191 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
May 10 00:03:50.883197 kernel: Zone ranges:
May 10 00:03:50.883203 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 10 00:03:50.883213 kernel: DMA32 empty
May 10 00:03:50.883219 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
May 10 00:03:50.883225 kernel: Movable zone start for each node
May 10 00:03:50.883231 kernel: Early memory node ranges
May 10 00:03:50.883237 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
May 10 00:03:50.883244 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 10 00:03:50.883250 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 10 00:03:50.883256 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 10 00:03:50.883263 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 10 00:03:50.883269 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 10 00:03:50.883275 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 10 00:03:50.883281 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 10 00:03:50.883289 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 10 00:03:50.883295 kernel: psci: probing for conduit method from ACPI.
May 10 00:03:50.883302 kernel: psci: PSCIv1.1 detected in firmware.
May 10 00:03:50.883310 kernel: psci: Using standard PSCI v0.2 function IDs
May 10 00:03:50.883317 kernel: psci: Trusted OS migration not required
May 10 00:03:50.883324 kernel: psci: SMC Calling Convention v1.1
May 10 00:03:50.883332 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 10 00:03:50.883339 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 10 00:03:50.883346 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 10 00:03:50.883353 kernel: pcpu-alloc: [0] 0 [0] 1
May 10 00:03:50.883359 kernel: Detected PIPT I-cache on CPU0
May 10 00:03:50.883366 kernel: CPU features: detected: GIC system register CPU interface
May 10 00:03:50.883373 kernel: CPU features: detected: Hardware dirty bit management
May 10 00:03:50.883380 kernel: CPU features: detected: Spectre-v4
May 10 00:03:50.883386 kernel: CPU features: detected: Spectre-BHB
May 10 00:03:50.883393 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 10 00:03:50.883401 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 10 00:03:50.883408 kernel: CPU features: detected: ARM erratum 1418040
May 10 00:03:50.883414 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 10 00:03:50.883421 kernel: alternatives: applying boot alternatives
May 10 00:03:50.883429 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 10 00:03:50.883436 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 00:03:50.883443 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 10 00:03:50.883450 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 10 00:03:50.883456 kernel: Fallback order for Node 0: 0
May 10 00:03:50.883463 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 10 00:03:50.883469 kernel: Policy zone: Normal
May 10 00:03:50.883478 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 00:03:50.883484 kernel: software IO TLB: area num 2.
May 10 00:03:50.883491 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 10 00:03:50.883498 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
May 10 00:03:50.883505 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 10 00:03:50.883511 kernel: rcu: Preemptible hierarchical RCU implementation.
May 10 00:03:50.883519 kernel: rcu: RCU event tracing is enabled.
May 10 00:03:50.883526 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 10 00:03:50.883532 kernel: Trampoline variant of Tasks RCU enabled.
May 10 00:03:50.883539 kernel: Tracing variant of Tasks RCU enabled.
May 10 00:03:50.883546 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 00:03:50.883554 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 10 00:03:50.883561 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 10 00:03:50.883567 kernel: GICv3: 256 SPIs implemented
May 10 00:03:50.883574 kernel: GICv3: 0 Extended SPIs implemented
May 10 00:03:50.883581 kernel: Root IRQ handler: gic_handle_irq
May 10 00:03:50.883587 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 10 00:03:50.883594 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 10 00:03:50.883600 kernel: ITS [mem 0x08080000-0x0809ffff]
May 10 00:03:50.883607 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 10 00:03:50.883614 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 10 00:03:50.883621 kernel: GICv3: using LPI property table @0x00000001000e0000
May 10 00:03:50.883644 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 10 00:03:50.883654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 10 00:03:50.883661 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 10 00:03:50.883667 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 10 00:03:50.883674 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 10 00:03:50.883681 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 10 00:03:50.883687 kernel: Console: colour dummy device 80x25
May 10 00:03:50.883725 kernel: ACPI: Core revision 20230628
May 10 00:03:50.883733 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 10 00:03:50.883740 kernel: pid_max: default: 32768 minimum: 301
May 10 00:03:50.883747 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 10 00:03:50.883756 kernel: landlock: Up and running.
May 10 00:03:50.883763 kernel: SELinux: Initializing.
May 10 00:03:50.883770 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 00:03:50.883777 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 00:03:50.883783 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 10 00:03:50.883791 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 10 00:03:50.883798 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 10 00:03:50.883805 kernel: rcu: Hierarchical SRCU implementation.
May 10 00:03:50.883812 kernel: rcu: Max phase no-delay instances is 400.
May 10 00:03:50.883820 kernel: Platform MSI: ITS@0x8080000 domain created
May 10 00:03:50.883827 kernel: PCI/MSI: ITS@0x8080000 domain created
May 10 00:03:50.883834 kernel: Remapping and enabling EFI services.
May 10 00:03:50.883840 kernel: smp: Bringing up secondary CPUs ...
May 10 00:03:50.883848 kernel: Detected PIPT I-cache on CPU1
May 10 00:03:50.883855 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 10 00:03:50.883862 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 10 00:03:50.883869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 10 00:03:50.883875 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 10 00:03:50.883883 kernel: smp: Brought up 1 node, 2 CPUs
May 10 00:03:50.883891 kernel: SMP: Total of 2 processors activated.
May 10 00:03:50.883898 kernel: CPU features: detected: 32-bit EL0 Support
May 10 00:03:50.883910 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 10 00:03:50.883918 kernel: CPU features: detected: Common not Private translations
May 10 00:03:50.883925 kernel: CPU features: detected: CRC32 instructions
May 10 00:03:50.883932 kernel: CPU features: detected: Enhanced Virtualization Traps
May 10 00:03:50.883939 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 10 00:03:50.883947 kernel: CPU features: detected: LSE atomic instructions
May 10 00:03:50.883954 kernel: CPU features: detected: Privileged Access Never
May 10 00:03:50.883961 kernel: CPU features: detected: RAS Extension Support
May 10 00:03:50.883970 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 10 00:03:50.883977 kernel: CPU: All CPU(s) started at EL1
May 10 00:03:50.883985 kernel: alternatives: applying system-wide alternatives
May 10 00:03:50.883992 kernel: devtmpfs: initialized
May 10 00:03:50.884000 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 00:03:50.884007 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 10 00:03:50.884016 kernel: pinctrl core: initialized pinctrl subsystem
May 10 00:03:50.884023 kernel: SMBIOS 3.0.0 present.
May 10 00:03:50.884030 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 10 00:03:50.884038 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 00:03:50.884045 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 10 00:03:50.884052 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 10 00:03:50.884059 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 10 00:03:50.884155 kernel: audit: initializing netlink subsys (disabled)
May 10 00:03:50.884169 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
May 10 00:03:50.884179 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 00:03:50.884186 kernel: cpuidle: using governor menu
May 10 00:03:50.884193 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 10 00:03:50.884201 kernel: ASID allocator initialised with 32768 entries
May 10 00:03:50.884208 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 00:03:50.884215 kernel: Serial: AMBA PL011 UART driver
May 10 00:03:50.884222 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 10 00:03:50.884230 kernel: Modules: 0 pages in range for non-PLT usage
May 10 00:03:50.884237 kernel: Modules: 509008 pages in range for PLT usage
May 10 00:03:50.884245 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 10 00:03:50.884253 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 10 00:03:50.884260 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 10 00:03:50.884267 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 10 00:03:50.884274 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 10 00:03:50.884281 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 10 00:03:50.884288 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 10 00:03:50.884296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 10 00:03:50.884303 kernel: ACPI: Added _OSI(Module Device)
May 10 00:03:50.884312 kernel: ACPI: Added _OSI(Processor Device)
May 10 00:03:50.884319 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 00:03:50.884326 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 00:03:50.884333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 10 00:03:50.884340 kernel: ACPI: Interpreter enabled
May 10 00:03:50.884347 kernel: ACPI: Using GIC for interrupt routing
May 10 00:03:50.884355 kernel: ACPI: MCFG table detected, 1 entries
May 10 00:03:50.884362 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 10 00:03:50.884369 kernel: printk: console [ttyAMA0] enabled
May 10 00:03:50.884378 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 00:03:50.884520 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 10 00:03:50.884595 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 10 00:03:50.884743 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 10 00:03:50.884820 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 10 00:03:50.884886 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 10 00:03:50.884896 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 10 00:03:50.884909 kernel: PCI host bridge to bus 0000:00
May 10 00:03:50.884982 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 10 00:03:50.885043 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 10 00:03:50.885102 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 10 00:03:50.885160 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 00:03:50.885248 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 10 00:03:50.885325 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 10 00:03:50.885396 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 10 00:03:50.885464 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 10 00:03:50.885537 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.885604 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 10 00:03:50.885715 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.885850 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 10 00:03:50.885932 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.885999 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 10 00:03:50.886070 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.886138 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 10 00:03:50.886209 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.886279 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 10 00:03:50.886372 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.886445 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 10 00:03:50.886518 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.886586 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 10 00:03:50.886674 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.887367 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 10 00:03:50.887461 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 10 00:03:50.887528 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 10 00:03:50.887604 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 10 00:03:50.887741 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 10 00:03:50.888179 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 10 00:03:50.888261 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 10 00:03:50.888330 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 10 00:03:50.888442 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 10 00:03:50.888523 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 10 00:03:50.888592 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 10 00:03:50.888689 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 10 00:03:50.890887 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 10 00:03:50.890960 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 10 00:03:50.891046 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 10 00:03:50.891115 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 10 00:03:50.891200 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 10 00:03:50.891268 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 10 00:03:50.891336 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 10 00:03:50.891411 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 10 00:03:50.891480 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 10 00:03:50.891551 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 10 00:03:50.891639 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 10 00:03:50.892897 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 10 00:03:50.892983 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 10 00:03:50.893054 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 10 00:03:50.893125 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 10 00:03:50.893198 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 10 00:03:50.893263 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 10 00:03:50.893334 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 10 00:03:50.893401 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 10 00:03:50.893467 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 10 00:03:50.893537 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 10 00:03:50.893603 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 10 00:03:50.894581 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 10 00:03:50.894779 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 10 00:03:50.894868 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 10 00:03:50.894944 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 10 00:03:50.895016 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 10 00:03:50.895083 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 10 00:03:50.895149 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 10 00:03:50.895220 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 10 00:03:50.895292 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 10 00:03:50.895359 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 10 00:03:50.895429 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 10 00:03:50.895495 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 10 00:03:50.895562 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 10 00:03:50.895644 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 10 00:03:50.897772 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 10 00:03:50.897863 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 10 00:03:50.897937 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 10 00:03:50.898003 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 10 00:03:50.898068 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 10 00:03:50.898135 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 10 00:03:50.898203 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:03:50.898270 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 10 00:03:50.898340 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:03:50.898413 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 10 00:03:50.898480 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:03:50.898549 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 10 00:03:50.898615 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:03:50.898730 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 10 00:03:50.898805 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:03:50.898879 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 10 00:03:50.898947 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:03:50.899027 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 10 00:03:50.899094 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:03:50.899161 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 10 00:03:50.899229 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:03:50.899296 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 10 00:03:50.899364 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:03:50.899435 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 10 00:03:50.899502 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 10 00:03:50.899568 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 10 00:03:50.899673 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 10 00:03:50.900800 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 10 00:03:50.900878 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 10 00:03:50.900953 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 10 00:03:50.901019 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 10 00:03:50.901086 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 10 00:03:50.901151 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 10 00:03:50.901217 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 10 00:03:50.901282 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 10 00:03:50.901368 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 10 00:03:50.901437 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 10 00:03:50.901504 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 10 00:03:50.901575 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 10 00:03:50.901654 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 10 00:03:50.902133 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 10 00:03:50.902211 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 10 00:03:50.902284 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 10 00:03:50.902364 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 10 00:03:50.902448 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 10 00:03:50.902527 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 10 00:03:50.902610 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 10 00:03:50.903578 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 10 00:03:50.903704 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 10 00:03:50.903787 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 10 00:03:50.903861 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:03:50.903944 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 10 00:03:50.904017 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 10 00:03:50.904083 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 10 00:03:50.904148 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 10 00:03:50.904212 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:03:50.904285 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 10 00:03:50.904352 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 10 00:03:50.904421 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 10 00:03:50.904484 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 10 00:03:50.904548 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 10 00:03:50.904612 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:03:50.904752 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 10 00:03:50.904833 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 10 00:03:50.904899 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 10 00:03:50.904964 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 10 00:03:50.905033 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:03:50.905106 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 10 00:03:50.905174 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 10 00:03:50.905240 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 10 00:03:50.905306 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 10 00:03:50.905371 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 10 00:03:50.905436 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:03:50.905509 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 10 00:03:50.905581 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 10 00:03:50.905727 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 10 00:03:50.905812 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 10 00:03:50.905879 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 10 00:03:50.905943 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:03:50.906015 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 10 00:03:50.906084 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 10 00:03:50.906158 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 10 00:03:50.906228 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 10 00:03:50.906294 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 10 00:03:50.906359 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 10 00:03:50.906424 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:03:50.906490 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 10 00:03:50.906556 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 10 00:03:50.906621 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 10 00:03:50.907875 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:03:50.907969 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 10 00:03:50.908040 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 10 00:03:50.908112 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 10 00:03:50.908809 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:03:50.908893 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 10 00:03:50.908953 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 10 00:03:50.909012 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 10 00:03:50.909088 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 10 00:03:50.909150 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 10 00:03:50.909210 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:03:50.909279 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 10 00:03:50.909340 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 10 00:03:50.909398 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:03:50.909469 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 10 00:03:50.909535 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 10 00:03:50.909608 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:03:50.910754 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 10 00:03:50.910866 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 10 00:03:50.910928 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:03:50.910996 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 10 00:03:50.911064 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 10 00:03:50.911125 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:03:50.911202 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 10 00:03:50.911263 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 10 00:03:50.911328 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:03:50.911397 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 10 00:03:50.911459 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 10 00:03:50.911519 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:03:50.911587 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
May 10 00:03:50.911669 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
May 10 00:03:50.912833 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:03:50.912943 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
May 10 00:03:50.913027 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
May 10 00:03:50.913103 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:03:50.913113 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 10 00:03:50.913122 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 10 00:03:50.913129 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 10 00:03:50.913137 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 10 00:03:50.913145 kernel: iommu: Default domain type: Translated
May 10 00:03:50.913156 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 10 00:03:50.913164 kernel: efivars: Registered efivars operations
May 10 00:03:50.913172 kernel: vgaarb: loaded
May 10 00:03:50.913180 kernel: clocksource: Switched to clocksource arch_sys_counter
May 10 00:03:50.913188 kernel: VFS: Disk quotas dquot_6.6.0
May 10 00:03:50.913196 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 00:03:50.913203 kernel: pnp: PnP ACPI init
May 10 00:03:50.913438 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 10 00:03:50.913457 kernel: pnp: PnP ACPI: found 1 devices
May 10 00:03:50.913466 kernel: NET: Registered PF_INET protocol family
May 10 00:03:50.913473 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 10 00:03:50.913481 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 10 00:03:50.913489 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 00:03:50.913497 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 10 00:03:50.913505 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 10 00:03:50.913512 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 10 00:03:50.913520 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 00:03:50.913530 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 00:03:50.913537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 00:03:50.913620 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
May 10 00:03:50.913646 kernel: PCI: CLS 0 bytes, default 64
May 10 00:03:50.913655 kernel: kvm [1]: HYP mode not available
May 10 00:03:50.913662 kernel: Initialise system trusted keyrings
May 10 00:03:50.913670 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 10 00:03:50.913678 kernel: Key type asymmetric registered
May 10 00:03:50.913686 kernel: Asymmetric key parser 'x509' registered
May 10 00:03:50.913708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 10 00:03:50.913728 kernel: io scheduler mq-deadline registered
May 10 00:03:50.913736 kernel: io scheduler kyber registered
May 10 00:03:50.913744 kernel: io scheduler bfq registered
May 10 00:03:50.913752 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 10 00:03:50.913838 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
May 10 00:03:50.913911 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
May 10 00:03:50.913981 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.914061 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
May 10 00:03:50.914132 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
May 10 00:03:50.914204 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.914278 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
May 10 00:03:50.914349 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
May 10 00:03:50.914419 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.914496 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
May 10 00:03:50.914567 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
May 10 00:03:50.914664 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.914797 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
May 10 00:03:50.914922 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
May 10 00:03:50.914993 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.915067 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
May 10 00:03:50.915133 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
May 10 00:03:50.915199 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.915267 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
May 10 00:03:50.915333 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
May 10 00:03:50.915397 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.915467 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
May 10 00:03:50.915532 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
May 10 00:03:50.915597 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.915608 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
May 10 00:03:50.916871 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
May 10 00:03:50.916955 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
May 10 00:03:50.917030 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:50.917041 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 10 00:03:50.917049 kernel: ACPI: button: Power Button [PWRB]
May 10 00:03:50.917058 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 10 00:03:50.917129 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
May 10 00:03:50.917202 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
May 10 00:03:50.917213 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 10 00:03:50.917222 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 10 00:03:50.917294 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
May 10 00:03:50.917305 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
May 10 00:03:50.917313 kernel: thunder_xcv, ver 1.0
May 10 00:03:50.917321 kernel: thunder_bgx, ver 1.0
May 10 00:03:50.917328 kernel: nicpf, ver 1.0
May 10 00:03:50.917336 kernel: nicvf, ver 1.0
May 10 00:03:50.917413 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 10 00:03:50.917477 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-10T00:03:50 UTC (1746835430)
May 10 00:03:50.917489 kernel: hid: raw HID events driver (C) Jiri Kosina
May 10 00:03:50.917497 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 10 00:03:50.917505 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 10 00:03:50.917513 kernel: watchdog: Hard watchdog permanently disabled
May 10 00:03:50.917521 kernel: NET: Registered PF_INET6 protocol family
May 10 00:03:50.917528 kernel: Segment Routing with IPv6
May 10 00:03:50.917536 kernel: In-situ OAM (IOAM) with IPv6
May 10 00:03:50.917544 kernel: NET: Registered PF_PACKET protocol family
May 10 00:03:50.917551 kernel: Key type dns_resolver registered
May 10 00:03:50.917560 kernel: registered taskstats version 1
May 10 00:03:50.917568 kernel: Loading compiled-in X.509 certificates
May 10 00:03:50.917576 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 10 00:03:50.917584 kernel: Key type .fscrypt registered
May 10 00:03:50.917591 kernel: Key type fscrypt-provisioning registered
May 10 00:03:50.917599 kernel: ima: No TPM chip found, activating TPM-bypass!
May 10 00:03:50.917606 kernel: ima: Allocated hash algorithm: sha1
May 10 00:03:50.917614 kernel: ima: No architecture policies found
May 10 00:03:50.917622 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 10 00:03:50.917671 kernel: clk: Disabling unused clocks
May 10 00:03:50.917680 kernel: Freeing unused kernel memory: 39424K
May 10 00:03:50.917687 kernel: Run /init as init process
May 10 00:03:50.917738 kernel: with arguments:
May 10 00:03:50.917747 kernel: /init
May 10 00:03:50.917754 kernel: with environment:
May 10 00:03:50.917761 kernel: HOME=/
May 10 00:03:50.917769 kernel: TERM=linux
May 10 00:03:50.917776 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 10 00:03:50.917790 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 10 00:03:50.917800 systemd[1]: Detected virtualization kvm.
May 10 00:03:50.917808 systemd[1]: Detected architecture arm64.
May 10 00:03:50.917816 systemd[1]: Running in initrd.
May 10 00:03:50.917824 systemd[1]: No hostname configured, using default hostname.
May 10 00:03:50.917832 systemd[1]: Hostname set to .
May 10 00:03:50.917840 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:03:50.917849 systemd[1]: Queued start job for default target initrd.target.
May 10 00:03:50.917858 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 00:03:50.917866 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 00:03:50.917875 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 10 00:03:50.917883 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 00:03:50.917891 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 10 00:03:50.917899 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 10 00:03:50.917911 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 10 00:03:50.917920 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 10 00:03:50.917928 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 00:03:50.917936 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 10 00:03:50.917945 systemd[1]: Reached target paths.target - Path Units.
May 10 00:03:50.917953 systemd[1]: Reached target slices.target - Slice Units.
May 10 00:03:50.917961 systemd[1]: Reached target swap.target - Swaps.
May 10 00:03:50.917969 systemd[1]: Reached target timers.target - Timer Units.
May 10 00:03:50.917978 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 10 00:03:50.917987 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 10 00:03:50.917996 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 10 00:03:50.918004 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 10 00:03:50.918013 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 10 00:03:50.918021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 10 00:03:50.918029 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 00:03:50.918037 systemd[1]: Reached target sockets.target - Socket Units.
May 10 00:03:50.918045 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 10 00:03:50.918055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 10 00:03:50.918064 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 10 00:03:50.918072 systemd[1]: Starting systemd-fsck-usr.service...
May 10 00:03:50.918080 systemd[1]: Starting systemd-journald.service - Journal Service...
May 10 00:03:50.918088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 10 00:03:50.918096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:03:50.918129 systemd-journald[235]: Collecting audit messages is disabled.
May 10 00:03:50.918156 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 10 00:03:50.918167 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 00:03:50.918179 systemd[1]: Finished systemd-fsck-usr.service.
May 10 00:03:50.918194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 10 00:03:50.918204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:03:50.918213 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 10 00:03:50.918222 systemd-journald[235]: Journal started
May 10 00:03:50.918242 systemd-journald[235]: Runtime Journal (/run/log/journal/c8aff469bbe440419053cab629560486) is 8.0M, max 76.6M, 68.6M free.
May 10 00:03:50.922335 kernel: Bridge firewalling registered
May 10 00:03:50.922388 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:03:50.894869 systemd-modules-load[236]: Inserted module 'overlay'
May 10 00:03:50.924006 systemd[1]: Started systemd-journald.service - Journal Service.
May 10 00:03:50.918560 systemd-modules-load[236]: Inserted module 'br_netfilter'
May 10 00:03:50.925756 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 10 00:03:50.927031 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 10 00:03:50.938935 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 00:03:50.942076 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 10 00:03:50.945052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 10 00:03:50.950776 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:03:50.958891 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 10 00:03:50.961750 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 00:03:50.972839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 00:03:50.975359 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 00:03:50.981151 dracut-cmdline[267]: dracut-dracut-053
May 10 00:03:50.982964 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 10 00:03:50.987072 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 10 00:03:51.016479 systemd-resolved[276]: Positive Trust Anchors:
May 10 00:03:51.016495 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:03:51.016527 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 10 00:03:51.026401 systemd-resolved[276]: Defaulting to hostname 'linux'.
May 10 00:03:51.027976 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 10 00:03:51.028590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 10 00:03:51.080756 kernel: SCSI subsystem initialized
May 10 00:03:51.084733 kernel: Loading iSCSI transport class v2.0-870.
May 10 00:03:51.092746 kernel: iscsi: registered transport (tcp)
May 10 00:03:51.106742 kernel: iscsi: registered transport (qla4xxx)
May 10 00:03:51.106818 kernel: QLogic iSCSI HBA Driver
May 10 00:03:51.153768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 10 00:03:51.160001 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 10 00:03:51.176754 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 10 00:03:51.176845 kernel: device-mapper: uevent: version 1.0.3
May 10 00:03:51.178276 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 10 00:03:51.225781 kernel: raid6: neonx8 gen() 15619 MB/s
May 10 00:03:51.242766 kernel: raid6: neonx4 gen() 15555 MB/s
May 10 00:03:51.259746 kernel: raid6: neonx2 gen() 13139 MB/s
May 10 00:03:51.276768 kernel: raid6: neonx1 gen() 10366 MB/s
May 10 00:03:51.293760 kernel: raid6: int64x8 gen() 6922 MB/s
May 10 00:03:51.310775 kernel: raid6: int64x4 gen() 7305 MB/s
May 10 00:03:51.327765 kernel: raid6: int64x2 gen() 6065 MB/s
May 10 00:03:51.344758 kernel: raid6: int64x1 gen() 5027 MB/s
May 10 00:03:51.344841 kernel: raid6: using algorithm neonx8 gen() 15619 MB/s
May 10 00:03:51.361807 kernel: raid6: .... xor() 11851 MB/s, rmw enabled
May 10 00:03:51.361886 kernel: raid6: using neon recovery algorithm
May 10 00:03:51.366772 kernel: xor: measuring software checksum speed
May 10 00:03:51.366868 kernel: 8regs : 19721 MB/sec
May 10 00:03:51.366886 kernel: 32regs : 19655 MB/sec
May 10 00:03:51.366909 kernel: arm64_neon : 26972 MB/sec
May 10 00:03:51.367752 kernel: xor: using function: arm64_neon (26972 MB/sec)
May 10 00:03:51.417748 kernel: Btrfs loaded, zoned=no, fsverity=no
May 10 00:03:51.433111 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 10 00:03:51.441048 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 00:03:51.454603 systemd-udevd[456]: Using default interface naming scheme 'v255'.
May 10 00:03:51.458132 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 00:03:51.469056 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 10 00:03:51.484374 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
May 10 00:03:51.525359 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 10 00:03:51.532016 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 10 00:03:51.582727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 00:03:51.590375 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 10 00:03:51.613151 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 10 00:03:51.613935 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 10 00:03:51.615834 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 00:03:51.617677 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 10 00:03:51.627223 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 10 00:03:51.648137 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 10 00:03:51.680456 kernel: scsi host0: Virtio SCSI HBA
May 10 00:03:51.689856 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
May 10 00:03:51.689934 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 10 00:03:51.726840 kernel: ACPI: bus type USB registered
May 10 00:03:51.726901 kernel: usbcore: registered new interface driver usbfs
May 10 00:03:51.726598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 00:03:51.729786 kernel: usbcore: registered new interface driver hub
May 10 00:03:51.729812 kernel: usbcore: registered new device driver usb
May 10 00:03:51.726751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:03:51.728161 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:03:51.729785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:03:51.729949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:03:51.732465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:03:51.751360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:03:51.757717 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 10 00:03:51.757980 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 10 00:03:51.759424 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 10 00:03:51.761676 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 10 00:03:51.761900 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 10 00:03:51.762007 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 10 00:03:51.763724 kernel: hub 1-0:1.0: USB hub found
May 10 00:03:51.763955 kernel: hub 1-0:1.0: 4 ports detected
May 10 00:03:51.764805 kernel: sr 0:0:0:0: Power-on or device reset occurred
May 10 00:03:51.766712 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
May 10 00:03:51.766906 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 10 00:03:51.766951 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 10 00:03:51.767718 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
May 10 00:03:51.772052 kernel: hub 2-0:1.0: USB hub found
May 10 00:03:51.772267 kernel: sd 0:0:0:1: Power-on or device reset occurred
May 10 00:03:51.772387 kernel: hub 2-0:1.0: 4 ports detected
May 10 00:03:51.772478 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 10 00:03:51.772813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:03:51.775636 kernel: sd 0:0:0:1: [sda] Write Protect is off
May 10 00:03:51.776118 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
May 10 00:03:51.776344 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 10 00:03:51.781334 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 10 00:03:51.781407 kernel: GPT:17805311 != 80003071
May 10 00:03:51.781418 kernel: GPT:Alternate GPT header not at the end of the disk.
May 10 00:03:51.781427 kernel: GPT:17805311 != 80003071
May 10 00:03:51.781436 kernel: GPT: Use GNU Parted to correct GPT errors.
May 10 00:03:51.781444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:03:51.781953 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
May 10 00:03:51.783399 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:03:51.819186 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:03:51.836725 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (523)
May 10 00:03:51.839685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 10 00:03:51.843731 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (522)
May 10 00:03:51.854232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 10 00:03:51.863117 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 10 00:03:51.870272 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 10 00:03:51.871037 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 10 00:03:51.879937 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 10 00:03:51.888646 disk-uuid[580]: Primary Header is updated.
May 10 00:03:51.888646 disk-uuid[580]: Secondary Entries is updated.
May 10 00:03:51.888646 disk-uuid[580]: Secondary Header is updated.
May 10 00:03:51.899746 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:03:52.004786 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
May 10 00:03:52.140956 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
May 10 00:03:52.141020 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
May 10 00:03:52.141248 kernel: usbcore: registered new interface driver usbhid
May 10 00:03:52.141266 kernel: usbhid: USB HID core driver
May 10 00:03:52.250836 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
May 10 00:03:52.381768 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
May 10 00:03:52.435788 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
May 10 00:03:52.904919 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:03:52.905297 disk-uuid[581]: The operation has completed successfully.
May 10 00:03:52.966302 systemd[1]: disk-uuid.service: Deactivated successfully.
May 10 00:03:52.966437 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 10 00:03:52.995068 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 10 00:03:53.002579 sh[592]: Success
May 10 00:03:53.016986 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 10 00:03:53.063178 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 10 00:03:53.081837 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 10 00:03:53.087167 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 10 00:03:53.100997 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354
May 10 00:03:53.101062 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 10 00:03:53.101080 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 10 00:03:53.101108 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 10 00:03:53.101740 kernel: BTRFS info (device dm-0): using free space tree
May 10 00:03:53.108731 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 10 00:03:53.110808 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 10 00:03:53.112122 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 10 00:03:53.117949 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 10 00:03:53.122008 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 10 00:03:53.133926 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 10 00:03:53.134076 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 10 00:03:53.134135 kernel: BTRFS info (device sda6): using free space tree
May 10 00:03:53.139812 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 10 00:03:53.139893 kernel: BTRFS info (device sda6): auto enabling async discard
May 10 00:03:53.151168 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 10 00:03:53.152301 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 10 00:03:53.159185 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 10 00:03:53.165998 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 10 00:03:53.246264 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 10 00:03:53.256011 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 10 00:03:53.274299 ignition[678]: Ignition 2.19.0
May 10 00:03:53.274999 ignition[678]: Stage: fetch-offline
May 10 00:03:53.276884 ignition[678]: no configs at "/usr/lib/ignition/base.d"
May 10 00:03:53.277388 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:53.277584 ignition[678]: parsed url from cmdline: ""
May 10 00:03:53.277587 ignition[678]: no config URL provided
May 10 00:03:53.277592 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:03:53.278936 systemd-networkd[778]: lo: Link UP
May 10 00:03:53.277601 ignition[678]: no config at "/usr/lib/ignition/user.ign"
May 10 00:03:53.278940 systemd-networkd[778]: lo: Gained carrier
May 10 00:03:53.277607 ignition[678]: failed to fetch config: resource requires networking
May 10 00:03:53.281483 systemd-networkd[778]: Enumeration completed
May 10 00:03:53.278562 ignition[678]: Ignition finished successfully
May 10 00:03:53.281667 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 10 00:03:53.282556 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 10 00:03:53.283470 systemd[1]: Reached target network.target - Network.
May 10 00:03:53.284157 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:03:53.284161 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:03:53.286812 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:03:53.286817 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:03:53.287383 systemd-networkd[778]: eth0: Link UP
May 10 00:03:53.287386 systemd-networkd[778]: eth0: Gained carrier
May 10 00:03:53.287393 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:03:53.288934 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 10 00:03:53.293865 systemd-networkd[778]: eth1: Link UP
May 10 00:03:53.293869 systemd-networkd[778]: eth1: Gained carrier
May 10 00:03:53.293879 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:03:53.303994 ignition[782]: Ignition 2.19.0
May 10 00:03:53.304008 ignition[782]: Stage: fetch
May 10 00:03:53.304229 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 10 00:03:53.304240 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:53.304345 ignition[782]: parsed url from cmdline: ""
May 10 00:03:53.304348 ignition[782]: no config URL provided
May 10 00:03:53.304352 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:03:53.304360 ignition[782]: no config at "/usr/lib/ignition/user.ign"
May 10 00:03:53.304379 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 10 00:03:53.305286 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 10 00:03:53.326811 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 10 00:03:53.350833 systemd-networkd[778]: eth0: DHCPv4 address 23.88.119.94/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 10 00:03:53.505367 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 10 00:03:53.514815 ignition[782]: GET result: OK
May 10 00:03:53.515061 ignition[782]: parsing config with SHA512: 65b1a33f738d320147eb0d5ae8fc5cc98767c392ddd7df3458ad130ffa124cdab8347f8047719e1942ef558a70ff14c11b799349b5dedcd91d7ead4551ffe58d
May 10 00:03:53.521121 unknown[782]: fetched base config from "system"
May 10 00:03:53.521129 unknown[782]: fetched base config from "system"
May 10 00:03:53.521525 ignition[782]: fetch: fetch complete
May 10 00:03:53.521134 unknown[782]: fetched user config from "hetzner"
May 10 00:03:53.521530 ignition[782]: fetch: fetch passed
May 10 00:03:53.524561 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 10 00:03:53.521571 ignition[782]: Ignition finished successfully
May 10 00:03:53.529934 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 10 00:03:53.543929 ignition[789]: Ignition 2.19.0
May 10 00:03:53.543940 ignition[789]: Stage: kargs
May 10 00:03:53.544118 ignition[789]: no configs at "/usr/lib/ignition/base.d"
May 10 00:03:53.544128 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:53.545105 ignition[789]: kargs: kargs passed
May 10 00:03:53.545162 ignition[789]: Ignition finished successfully
May 10 00:03:53.548707 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 10 00:03:53.553951 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 10 00:03:53.566307 ignition[795]: Ignition 2.19.0
May 10 00:03:53.566318 ignition[795]: Stage: disks
May 10 00:03:53.566503 ignition[795]: no configs at "/usr/lib/ignition/base.d"
May 10 00:03:53.566513 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:53.569155 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 10 00:03:53.567466 ignition[795]: disks: disks passed
May 10 00:03:53.567512 ignition[795]: Ignition finished successfully
May 10 00:03:53.570812 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 10 00:03:53.571573 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 10 00:03:53.572789 systemd[1]: Reached target local-fs.target - Local File Systems.
May 10 00:03:53.573816 systemd[1]: Reached target sysinit.target - System Initialization.
May 10 00:03:53.574929 systemd[1]: Reached target basic.target - Basic System.
May 10 00:03:53.582938 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 10 00:03:53.598542 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 10 00:03:53.604426 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 10 00:03:53.614848 systemd[1]: Mounting sysroot.mount - /sysroot...
May 10 00:03:53.665720 kernel: EXT4-fs (sda9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 10 00:03:53.666805 systemd[1]: Mounted sysroot.mount - /sysroot.
May 10 00:03:53.667828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 10 00:03:53.676881 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 10 00:03:53.680567 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 10 00:03:53.682184 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 10 00:03:53.684005 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 10 00:03:53.684036 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 10 00:03:53.692140 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (811)
May 10 00:03:53.694325 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 10 00:03:53.696560 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 10 00:03:53.696586 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 10 00:03:53.696596 kernel: BTRFS info (device sda6): using free space tree
May 10 00:03:53.698242 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 10 00:03:53.698294 kernel: BTRFS info (device sda6): auto enabling async discard
May 10 00:03:53.712116 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 10 00:03:53.718423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 10 00:03:53.750295 coreos-metadata[813]: May 10 00:03:53.750 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
May 10 00:03:53.752065 coreos-metadata[813]: May 10 00:03:53.752 INFO Fetch successful
May 10 00:03:53.753382 coreos-metadata[813]: May 10 00:03:53.753 INFO wrote hostname ci-4081-3-3-n-7b3972f1ed to /sysroot/etc/hostname
May 10 00:03:53.755700 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
May 10 00:03:53.757769 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 10 00:03:53.762898 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
May 10 00:03:53.767333 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
May 10 00:03:53.772391 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
May 10 00:03:53.872293 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 10 00:03:53.877883 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 10 00:03:53.882379 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 10 00:03:53.886723 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 10 00:03:53.925333 ignition[928]: INFO : Ignition 2.19.0
May 10 00:03:53.926682 ignition[928]: INFO : Stage: mount
May 10 00:03:53.926682 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 00:03:53.926682 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:53.929247 ignition[928]: INFO : mount: mount passed
May 10 00:03:53.929247 ignition[928]: INFO : Ignition finished successfully
May 10 00:03:53.928482 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 10 00:03:53.930906 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 10 00:03:53.938919 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 10 00:03:54.101256 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 10 00:03:54.111018 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 10 00:03:54.121128 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940)
May 10 00:03:54.121207 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 10 00:03:54.121238 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 10 00:03:54.121744 kernel: BTRFS info (device sda6): using free space tree
May 10 00:03:54.126747 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 10 00:03:54.126814 kernel: BTRFS info (device sda6): auto enabling async discard
May 10 00:03:54.130571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 10 00:03:54.158345 ignition[956]: INFO : Ignition 2.19.0
May 10 00:03:54.158345 ignition[956]: INFO : Stage: files
May 10 00:03:54.159681 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 00:03:54.159681 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:54.159681 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
May 10 00:03:54.162365 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 10 00:03:54.162365 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 10 00:03:54.165101 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 10 00:03:54.165101 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 10 00:03:54.165101 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 10 00:03:54.164221 unknown[956]: wrote ssh authorized keys file for user: core
May 10 00:03:54.168802 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 10 00:03:54.168802 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 10 00:03:54.465102 systemd-networkd[778]: eth0: Gained IPv6LL
May 10 00:03:54.593088 systemd-networkd[778]: eth1: Gained IPv6LL
May 10 00:03:54.642521 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 10 00:03:56.451158 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 10 00:03:56.453357 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 10 00:03:57.054659 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 10 00:03:57.262675 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 10 00:03:57.262675 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 10 00:03:57.265035 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:03:57.265035 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:03:57.265035 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 10 00:03:57.265035 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 10 00:03:57.265035 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 10 00:03:57.265035 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 10 00:03:57.265035 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 10 00:03:57.265035 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
May 10 00:03:57.275915 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
May 10 00:03:57.275915 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:03:57.275915 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:03:57.275915 ignition[956]: INFO : files: files passed
May 10 00:03:57.275915 ignition[956]: INFO : Ignition finished successfully
May 10 00:03:57.268105 systemd[1]: Finished ignition-files.service - Ignition (files).
May 10 00:03:57.277774 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 10 00:03:57.281055 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 10 00:03:57.286300 systemd[1]: ignition-quench.service: Deactivated successfully.
May 10 00:03:57.286437 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 10 00:03:57.307514 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 10 00:03:57.307514 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 10 00:03:57.310655 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 10 00:03:57.312715 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 10 00:03:57.314050 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 10 00:03:57.321056 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 10 00:03:57.358425 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 10 00:03:57.358667 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 10 00:03:57.361126 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 10 00:03:57.362278 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 10 00:03:57.363306 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 10 00:03:57.364569 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 10 00:03:57.389088 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 10 00:03:57.402002 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 10 00:03:57.418273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 10 00:03:57.419034 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 00:03:57.420246 systemd[1]: Stopped target timers.target - Timer Units.
May 10 00:03:57.421164 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 10 00:03:57.421292 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 10 00:03:57.422538 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 10 00:03:57.423185 systemd[1]: Stopped target basic.target - Basic System.
May 10 00:03:57.424341 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 10 00:03:57.425329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 10 00:03:57.426363 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 10 00:03:57.427484 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 10 00:03:57.428678 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 10 00:03:57.429772 systemd[1]: Stopped target sysinit.target - System Initialization.
May 10 00:03:57.430725 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 10 00:03:57.431811 systemd[1]: Stopped target swap.target - Swaps.
May 10 00:03:57.432637 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 10 00:03:57.432780 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 10 00:03:57.434011 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 10 00:03:57.434653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 00:03:57.435724 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 10 00:03:57.436199 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 00:03:57.436922 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 10 00:03:57.437040 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 10 00:03:57.438446 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 10 00:03:57.438553 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 10 00:03:57.439724 systemd[1]: ignition-files.service: Deactivated successfully.
May 10 00:03:57.439821 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 10 00:03:57.440884 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 10 00:03:57.440980 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 10 00:03:57.451136 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 10 00:03:57.452682 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 10 00:03:57.453024 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 00:03:57.462936 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 10 00:03:57.463412 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 10 00:03:57.463554 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 00:03:57.467511 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 10 00:03:57.467670 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 10 00:03:57.474072 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 10 00:03:57.474175 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 10 00:03:57.479573 ignition[1009]: INFO : Ignition 2.19.0
May 10 00:03:57.479573 ignition[1009]: INFO : Stage: umount
May 10 00:03:57.479573 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 00:03:57.479573 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:57.479573 ignition[1009]: INFO : umount: umount passed
May 10 00:03:57.479573 ignition[1009]: INFO : Ignition finished successfully
May 10 00:03:57.480394 systemd[1]: ignition-mount.service: Deactivated successfully.
May 10 00:03:57.481973 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 10 00:03:57.485883 systemd[1]: ignition-disks.service: Deactivated successfully.
May 10 00:03:57.485995 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 10 00:03:57.486863 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 10 00:03:57.486914 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 10 00:03:57.489001 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 10 00:03:57.489051 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 10 00:03:57.490405 systemd[1]: Stopped target network.target - Network.
May 10 00:03:57.491828 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 10 00:03:57.491898 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 10 00:03:57.492757 systemd[1]: Stopped target paths.target - Path Units.
May 10 00:03:57.493905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 10 00:03:57.497804 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 00:03:57.498652 systemd[1]: Stopped target slices.target - Slice Units.
May 10 00:03:57.499908 systemd[1]: Stopped target sockets.target - Socket Units.
May 10 00:03:57.500885 systemd[1]: iscsid.socket: Deactivated successfully.
May 10 00:03:57.500943 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 10 00:03:57.502105 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 10 00:03:57.502155 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 10 00:03:57.503121 systemd[1]: ignition-setup.service: Deactivated successfully.
May 10 00:03:57.503184 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 10 00:03:57.504166 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 10 00:03:57.504210 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 10 00:03:57.505160 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 10 00:03:57.506159 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 10 00:03:57.507967 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 10 00:03:57.508463 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 10 00:03:57.508556 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 10 00:03:57.510276 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 10 00:03:57.510368 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 10 00:03:57.511095 systemd-networkd[778]: eth1: DHCPv6 lease lost
May 10 00:03:57.511279 systemd-networkd[778]: eth0: DHCPv6 lease lost
May 10 00:03:57.513150 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 10 00:03:57.513296 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 10 00:03:57.514887 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 10 00:03:57.514922 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 10 00:03:57.519895 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 10 00:03:57.520350 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 10 00:03:57.520413 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 10 00:03:57.524760 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 00:03:57.526394 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 10 00:03:57.526838 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 10 00:03:57.537235 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 00:03:57.537309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 10 00:03:57.538416 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 10 00:03:57.538461 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 10 00:03:57.539439 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 10 00:03:57.539479 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 00:03:57.545985 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 10 00:03:57.546215 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 00:03:57.548053 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 10 00:03:57.548118 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 10 00:03:57.549588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 10 00:03:57.549634 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 00:03:57.550871 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 10 00:03:57.550931 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 10 00:03:57.552870 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 10 00:03:57.552962 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 10 00:03:57.555260 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 00:03:57.555353 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:03:57.568654 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 10 00:03:57.569964 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 10 00:03:57.570072 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 00:03:57.571454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:03:57.571530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:03:57.574276 systemd[1]: network-cleanup.service: Deactivated successfully.
May 10 00:03:57.574398 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 10 00:03:57.581455 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 10 00:03:57.581619 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 10 00:03:57.583741 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 10 00:03:57.589955 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 10 00:03:57.600408 systemd[1]: Switching root.
May 10 00:03:57.666750 systemd-journald[235]: Journal stopped
May 10 00:03:58.559152 systemd-journald[235]: Received SIGTERM from PID 1 (systemd).
May 10 00:03:58.559234 kernel: SELinux: policy capability network_peer_controls=1
May 10 00:03:58.559247 kernel: SELinux: policy capability open_perms=1
May 10 00:03:58.559260 kernel: SELinux: policy capability extended_socket_class=1
May 10 00:03:58.559269 kernel: SELinux: policy capability always_check_network=0
May 10 00:03:58.559281 kernel: SELinux: policy capability cgroup_seclabel=1
May 10 00:03:58.559298 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 10 00:03:58.559308 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 10 00:03:58.559322 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 10 00:03:58.559332 kernel: audit: type=1403 audit(1746835437.785:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 10 00:03:58.559342 systemd[1]: Successfully loaded SELinux policy in 35.471ms.
May 10 00:03:58.559360 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.189ms.
May 10 00:03:58.559371 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 10 00:03:58.559382 systemd[1]: Detected virtualization kvm.
May 10 00:03:58.559393 systemd[1]: Detected architecture arm64.
May 10 00:03:58.559403 systemd[1]: Detected first boot.
May 10 00:03:58.559414 systemd[1]: Hostname set to <ci-4081-3-3-n-7b3972f1ed>.
May 10 00:03:58.559424 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:03:58.559435 zram_generator::config[1051]: No configuration found.
May 10 00:03:58.559451 systemd[1]: Populated /etc with preset unit settings.
May 10 00:03:58.559461 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 10 00:03:58.559472 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 10 00:03:58.559482 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 10 00:03:58.559493 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 10 00:03:58.559504 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 10 00:03:58.559514 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 10 00:03:58.559525 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 10 00:03:58.559536 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 10 00:03:58.559549 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 10 00:03:58.559563 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 10 00:03:58.559586 systemd[1]: Created slice user.slice - User and Session Slice.
May 10 00:03:58.559599 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 00:03:58.559610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 00:03:58.559624 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 10 00:03:58.559635 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 10 00:03:58.559645 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 10 00:03:58.559658 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 00:03:58.559669 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 10 00:03:58.559680 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 00:03:58.559690 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 10 00:03:58.559711 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 10 00:03:58.562081 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 10 00:03:58.562113 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 10 00:03:58.562125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 00:03:58.562136 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 10 00:03:58.562147 systemd[1]: Reached target slices.target - Slice Units.
May 10 00:03:58.562157 systemd[1]: Reached target swap.target - Swaps.
May 10 00:03:58.562167 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 10 00:03:58.562178 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 10 00:03:58.562189 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 10 00:03:58.562200 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 10 00:03:58.562210 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 00:03:58.562223 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 10 00:03:58.562235 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 10 00:03:58.562247 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 10 00:03:58.562257 systemd[1]: Mounting media.mount - External Media Directory...
May 10 00:03:58.562269 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 10 00:03:58.562279 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 10 00:03:58.562295 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 10 00:03:58.562309 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 10 00:03:58.562323 systemd[1]: Reached target machines.target - Containers.
May 10 00:03:58.562337 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 10 00:03:58.562348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 00:03:58.562362 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 10 00:03:58.562375 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 10 00:03:58.562386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 00:03:58.562398 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 10 00:03:58.562409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 00:03:58.562420 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 10 00:03:58.562430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 00:03:58.562443 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 10 00:03:58.562456 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 10 00:03:58.562469 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 10 00:03:58.562481 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 10 00:03:58.562496 systemd[1]: Stopped systemd-fsck-usr.service.
May 10 00:03:58.562510 systemd[1]: Starting systemd-journald.service - Journal Service...
May 10 00:03:58.562523 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 10 00:03:58.562536 kernel: fuse: init (API version 7.39)
May 10 00:03:58.562550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 10 00:03:58.562563 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 10 00:03:58.562595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 10 00:03:58.562610 systemd[1]: verity-setup.service: Deactivated successfully.
May 10 00:03:58.562623 systemd[1]: Stopped verity-setup.service.
May 10 00:03:58.562635 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 10 00:03:58.562650 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 10 00:03:58.562663 systemd[1]: Mounted media.mount - External Media Directory.
May 10 00:03:58.562677 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 10 00:03:58.562690 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 10 00:03:58.567819 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 10 00:03:58.567833 kernel: loop: module loaded
May 10 00:03:58.567847 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 00:03:58.567858 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 00:03:58.567869 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 10 00:03:58.567880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:03:58.567890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 00:03:58.567901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:03:58.567915 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 00:03:58.567928 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 10 00:03:58.567942 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 10 00:03:58.567954 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:03:58.567965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 00:03:58.567979 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 10 00:03:58.567992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 10 00:03:58.568006 kernel: ACPI: bus type drm_connector registered
May 10 00:03:58.568056 systemd-journald[1122]: Collecting audit messages is disabled.
May 10 00:03:58.568082 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 10 00:03:58.568094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 10 00:03:58.568105 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 00:03:58.568116 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 10 00:03:58.568127 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:03:58.568144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 10 00:03:58.568157 systemd-journald[1122]: Journal started
May 10 00:03:58.568180 systemd-journald[1122]: Runtime Journal (/run/log/journal/c8aff469bbe440419053cab629560486) is 8.0M, max 76.6M, 68.6M free.
May 10 00:03:58.288172 systemd[1]: Queued start job for default target multi-user.target.
May 10 00:03:58.308923 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 10 00:03:58.309526 systemd[1]: systemd-journald.service: Deactivated successfully.
May 10 00:03:58.575623 systemd[1]: Started systemd-journald.service - Journal Service.
May 10 00:03:58.571790 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 10 00:03:58.572832 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 10 00:03:58.573958 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 10 00:03:58.581268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 10 00:03:58.599257 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 10 00:03:58.600102 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 10 00:03:58.600131 systemd[1]: Reached target local-fs.target - Local File Systems.
May 10 00:03:58.604073 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 10 00:03:58.607023 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 10 00:03:58.610915 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 10 00:03:58.612916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 00:03:58.618186 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 10 00:03:58.623188 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 10 00:03:58.624002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:03:58.626952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 10 00:03:58.642914 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 10 00:03:58.644868 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 10 00:03:58.647113 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 00:03:58.649120 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 10 00:03:58.669317 systemd-journald[1122]: Time spent on flushing to /var/log/journal/c8aff469bbe440419053cab629560486 is 86.564ms for 1127 entries.
May 10 00:03:58.669317 systemd-journald[1122]: System Journal (/var/log/journal/c8aff469bbe440419053cab629560486) is 8.0M, max 584.8M, 576.8M free.
May 10 00:03:58.776915 systemd-journald[1122]: Received client request to flush runtime journal.
May 10 00:03:58.776973 kernel: loop0: detected capacity change from 0 to 114328
May 10 00:03:58.776990 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 10 00:03:58.777005 kernel: loop1: detected capacity change from 0 to 114432
May 10 00:03:58.675002 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 10 00:03:58.675998 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 10 00:03:58.687915 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 10 00:03:58.713021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 00:03:58.727033 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 10 00:03:58.761830 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 10 00:03:58.766228 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 10 00:03:58.767985 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 10 00:03:58.775163 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 10 00:03:58.782896 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 10 00:03:58.785216 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 10 00:03:58.793836 kernel: loop2: detected capacity change from 0 to 189592
May 10 00:03:58.823290 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
May 10 00:03:58.823313 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
May 10 00:03:58.837063 kernel: loop3: detected capacity change from 0 to 8
May 10 00:03:58.841108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 00:03:58.858737 kernel: loop4: detected capacity change from 0 to 114328
May 10 00:03:58.874038 kernel: loop5: detected capacity change from 0 to 114432
May 10 00:03:58.885961 kernel: loop6: detected capacity change from 0 to 189592
May 10 00:03:58.907745 kernel: loop7: detected capacity change from 0 to 8
May 10 00:03:58.910073 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 10 00:03:58.910523 (sd-merge)[1193]: Merged extensions into '/usr'.
May 10 00:03:58.920835 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
May 10 00:03:58.921068 systemd[1]: Reloading...
May 10 00:03:59.052769 zram_generator::config[1222]: No configuration found.
May 10 00:03:59.169670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:03:59.170938 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 10 00:03:59.216243 systemd[1]: Reloading finished in 294 ms.
May 10 00:03:59.258802 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 10 00:03:59.266348 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 10 00:03:59.276491 systemd[1]: Starting ensure-sysext.service...
May 10 00:03:59.279473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 10 00:03:59.298803 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
May 10 00:03:59.298846 systemd[1]: Reloading...
May 10 00:03:59.342325 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 10 00:03:59.342686 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 10 00:03:59.343378 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 10 00:03:59.343615 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 10 00:03:59.343663 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 10 00:03:59.351835 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
May 10 00:03:59.351846 systemd-tmpfiles[1257]: Skipping /boot
May 10 00:03:59.363242 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
May 10 00:03:59.363262 systemd-tmpfiles[1257]: Skipping /boot
May 10 00:03:59.382798 zram_generator::config[1280]: No configuration found.
May 10 00:03:59.523470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:03:59.570743 systemd[1]: Reloading finished in 271 ms.
May 10 00:03:59.593905 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 10 00:03:59.602357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 00:03:59.622205 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 10 00:03:59.627093 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 10 00:03:59.641053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 10 00:03:59.651336 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 10 00:03:59.654935 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 00:03:59.657553 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 10 00:03:59.663181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 00:03:59.668054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 00:03:59.673949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 00:03:59.678031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 00:03:59.679425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 00:03:59.682717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 00:03:59.683215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 00:03:59.686727 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 10 00:03:59.690401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 00:03:59.693981 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 10 00:03:59.694902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 00:03:59.698480 augenrules[1346]: No rules
May 10 00:03:59.699461 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 10 00:03:59.701205 systemd[1]: Finished ensure-sysext.service.
May 10 00:03:59.703500 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 10 00:03:59.706375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:03:59.707001 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 00:03:59.711990 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:03:59.719948 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 10 00:03:59.724786 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 10 00:03:59.735765 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 10 00:03:59.737622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:03:59.737977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 00:03:59.757608 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
May 10 00:03:59.758549 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:03:59.758825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 00:03:59.761674 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:03:59.762215 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:03:59.762540 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:03:59.778082 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 10 00:03:59.782973 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 10 00:03:59.801816 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 10 00:03:59.802947 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:03:59.804520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:03:59.814901 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 00:03:59.883116 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 10 00:03:59.884111 systemd[1]: Reached target time-set.target - System Time Set. May 10 00:03:59.915555 systemd-networkd[1375]: lo: Link UP May 10 00:03:59.915602 systemd-networkd[1375]: lo: Gained carrier May 10 00:03:59.916286 systemd-networkd[1375]: Enumeration completed May 10 00:03:59.916385 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:03:59.924915 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 10 00:03:59.933132 systemd-resolved[1334]: Positive Trust Anchors: May 10 00:03:59.933148 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:03:59.933181 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 00:03:59.942097 systemd-resolved[1334]: Using system hostname 'ci-4081-3-3-n-7b3972f1ed'. May 10 00:03:59.943659 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 00:03:59.944870 systemd[1]: Reached target network.target - Network. May 10 00:03:59.945820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 00:03:59.954834 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 10 00:04:00.039068 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:04:00.039077 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:04:00.040634 systemd-networkd[1375]: eth0: Link UP May 10 00:04:00.040865 systemd-networkd[1375]: eth0: Gained carrier May 10 00:04:00.040949 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 10 00:04:00.049019 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:04:00.049520 systemd-networkd[1375]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:04:00.051254 systemd-networkd[1375]: eth1: Link UP May 10 00:04:00.051262 systemd-networkd[1375]: eth1: Gained carrier May 10 00:04:00.051282 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:04:00.060742 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1387) May 10 00:04:00.079721 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:04:00.089823 systemd-networkd[1375]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:04:00.090917 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. May 10 00:04:00.109352 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 10 00:04:00.109488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:04:00.115054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:04:00.117094 systemd-networkd[1375]: eth0: DHCPv4 address 23.88.119.94/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 10 00:04:00.117589 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. May 10 00:04:00.119153 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. May 10 00:04:00.123964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:04:00.141001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:04:00.141636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:04:00.141672 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:04:00.146599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:04:00.147390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:04:00.149143 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:04:00.151777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:04:00.154097 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 10 00:04:00.154184 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 10 00:04:00.154201 kernel: [drm] features: -context_init May 10 00:04:00.154817 kernel: [drm] number of scanouts: 1 May 10 00:04:00.155782 kernel: [drm] number of cap sets: 0 May 10 00:04:00.156885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:04:00.157715 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 10 00:04:00.163914 kernel: Console: switching to colour frame buffer device 160x50 May 10 00:04:00.169158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 10 00:04:00.170735 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:04:00.183761 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 10 00:04:00.197758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 10 00:04:00.209084 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 10 00:04:00.209731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:04:00.213376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:04:00.232662 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 10 00:04:00.279152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:04:00.353399 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 10 00:04:00.362011 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 10 00:04:00.377786 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:04:00.402554 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 10 00:04:00.404523 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 00:04:00.405294 systemd[1]: Reached target sysinit.target - System Initialization. May 10 00:04:00.406003 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 10 00:04:00.406670 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 10 00:04:00.407751 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 10 00:04:00.408365 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 10 00:04:00.409047 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 10 00:04:00.409669 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:04:00.409726 systemd[1]: Reached target paths.target - Path Units. May 10 00:04:00.410189 systemd[1]: Reached target timers.target - Timer Units. May 10 00:04:00.411952 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 10 00:04:00.414128 systemd[1]: Starting docker.socket - Docker Socket for the API... May 10 00:04:00.426082 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 10 00:04:00.428317 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 10 00:04:00.429839 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 10 00:04:00.430653 systemd[1]: Reached target sockets.target - Socket Units. May 10 00:04:00.431248 systemd[1]: Reached target basic.target - Basic System. May 10 00:04:00.431809 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 10 00:04:00.431840 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 10 00:04:00.434878 systemd[1]: Starting containerd.service - containerd container runtime... 
May 10 00:04:00.438353 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:04:00.439897 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 10 00:04:00.447610 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 10 00:04:00.453883 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 10 00:04:00.456279 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 10 00:04:00.458869 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 10 00:04:00.465011 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 10 00:04:00.467548 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 10 00:04:00.473906 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 10 00:04:00.479918 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 10 00:04:00.485985 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 10 00:04:00.491899 systemd[1]: Starting systemd-logind.service - User Login Management... May 10 00:04:00.493311 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:04:00.495095 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:04:00.498450 systemd[1]: Starting update-engine.service - Update Engine... May 10 00:04:00.503908 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 10 00:04:00.504722 jq[1441]: false May 10 00:04:00.506537 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 10 00:04:00.510677 dbus-daemon[1440]: [system] SELinux support is enabled May 10 00:04:00.510949 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 10 00:04:00.512532 extend-filesystems[1442]: Found loop4 May 10 00:04:00.512532 extend-filesystems[1442]: Found loop5 May 10 00:04:00.512532 extend-filesystems[1442]: Found loop6 May 10 00:04:00.512532 extend-filesystems[1442]: Found loop7 May 10 00:04:00.512532 extend-filesystems[1442]: Found sda May 10 00:04:00.512532 extend-filesystems[1442]: Found sda1 May 10 00:04:00.512532 extend-filesystems[1442]: Found sda2 May 10 00:04:00.512532 extend-filesystems[1442]: Found sda3 May 10 00:04:00.512532 extend-filesystems[1442]: Found usr May 10 00:04:00.512532 extend-filesystems[1442]: Found sda4 May 10 00:04:00.512532 extend-filesystems[1442]: Found sda6 May 10 00:04:00.512532 extend-filesystems[1442]: Found sda7 May 10 00:04:00.512532 extend-filesystems[1442]: Found sda9 May 10 00:04:00.512532 extend-filesystems[1442]: Checking size of /dev/sda9 May 10 00:04:00.529297 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:04:00.532165 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 10 00:04:00.532967 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 10 00:04:00.533004 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 10 00:04:00.534907 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:04:00.534936 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 10 00:04:00.559087 coreos-metadata[1439]: May 10 00:04:00.557 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 10 00:04:00.563842 coreos-metadata[1439]: May 10 00:04:00.562 INFO Fetch successful May 10 00:04:00.563842 coreos-metadata[1439]: May 10 00:04:00.563 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 10 00:04:00.565461 extend-filesystems[1442]: Resized partition /dev/sda9 May 10 00:04:00.568805 coreos-metadata[1439]: May 10 00:04:00.565 INFO Fetch successful May 10 00:04:00.568871 jq[1454]: true May 10 00:04:00.569499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:04:00.569774 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 10 00:04:00.574228 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 10 00:04:00.580937 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) May 10 00:04:00.597811 tar[1457]: linux-arm64/helm May 10 00:04:00.615279 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:04:00.615769 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 10 00:04:00.617717 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 10 00:04:00.638717 jq[1480]: true May 10 00:04:00.646101 update_engine[1452]: I20250510 00:04:00.627526 1452 main.cc:92] Flatcar Update Engine starting May 10 00:04:00.652770 systemd[1]: Started update-engine.service - Update Engine. May 10 00:04:00.658505 update_engine[1452]: I20250510 00:04:00.657652 1452 update_check_scheduler.cc:74] Next update check in 7m4s May 10 00:04:00.661978 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 10 00:04:00.682752 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1382) May 10 00:04:00.730975 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 10 00:04:00.745777 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 10 00:04:00.748989 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 10 00:04:00.751175 extend-filesystems[1478]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 10 00:04:00.751175 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 5 May 10 00:04:00.751175 extend-filesystems[1478]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 10 00:04:00.756474 extend-filesystems[1442]: Resized filesystem in /dev/sda9 May 10 00:04:00.756474 extend-filesystems[1442]: Found sr0 May 10 00:04:00.758069 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:04:00.758251 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 10 00:04:00.771579 systemd-logind[1451]: New seat seat0. 
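[Annotation] The extend-filesystems/resize2fs lines above record an online grow of /dev/sda9 from 1617920 to 9393147 blocks, with the log itself noting these are 4k blocks. A quick arithmetic check of what that means in bytes:

```python
# Sanity-check the resize2fs numbers from the log; block counts are in
# 4 KiB filesystem blocks ("(4k) blocks" per the log line).
BLOCK = 4096
old_blocks, new_blocks = 1_617_920, 9_393_147

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")   # ~6.17 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")   # ~35.83 GiB
```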
May 10 00:04:00.781221 bash[1516]: Updated "/home/core/.ssh/authorized_keys" May 10 00:04:00.781527 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (Power Button) May 10 00:04:00.781603 systemd-logind[1451]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 10 00:04:00.781858 systemd[1]: Started systemd-logind.service - User Login Management. May 10 00:04:00.796770 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 10 00:04:00.802862 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:04:00.812025 systemd[1]: Starting sshkeys.service... May 10 00:04:00.836339 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 10 00:04:00.850079 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 10 00:04:00.891896 coreos-metadata[1523]: May 10 00:04:00.891 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 10 00:04:00.894323 coreos-metadata[1523]: May 10 00:04:00.894 INFO Fetch successful May 10 00:04:00.898439 unknown[1523]: wrote ssh authorized keys file for user: core May 10 00:04:00.921942 containerd[1466]: time="2025-05-10T00:04:00.921858840Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 10 00:04:00.935392 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" May 10 00:04:00.936767 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 10 00:04:00.945435 systemd[1]: Finished sshkeys.service. May 10 00:04:00.972066 containerd[1466]: time="2025-05-10T00:04:00.972012320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:04:00.973603 containerd[1466]: time="2025-05-10T00:04:00.973525680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:04:00.973713 containerd[1466]: time="2025-05-10T00:04:00.973685200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:04:00.973771 containerd[1466]: time="2025-05-10T00:04:00.973758280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:04:00.973988 containerd[1466]: time="2025-05-10T00:04:00.973970560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 10 00:04:00.974051 containerd[1466]: time="2025-05-10T00:04:00.974038120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 10 00:04:00.974170 containerd[1466]: time="2025-05-10T00:04:00.974151560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:04:00.974231 containerd[1466]: time="2025-05-10T00:04:00.974217680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 10 00:04:00.974454 containerd[1466]: time="2025-05-10T00:04:00.974433720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:04:00.974514 containerd[1466]: time="2025-05-10T00:04:00.974500640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:04:00.974589 containerd[1466]: time="2025-05-10T00:04:00.974572560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:04:00.974636 containerd[1466]: time="2025-05-10T00:04:00.974624560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:04:00.975769 containerd[1466]: time="2025-05-10T00:04:00.975744440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:04:00.977200 containerd[1466]: time="2025-05-10T00:04:00.977170120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:04:00.977407 containerd[1466]: time="2025-05-10T00:04:00.977389240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:04:00.977474 containerd[1466]: time="2025-05-10T00:04:00.977461520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:04:00.977663 containerd[1466]: time="2025-05-10T00:04:00.977646160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:04:00.977827 containerd[1466]: time="2025-05-10T00:04:00.977801840Z" level=info msg="metadata content store policy set" policy=shared May 10 00:04:00.982915 containerd[1466]: time="2025-05-10T00:04:00.982742400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:04:00.982915 containerd[1466]: time="2025-05-10T00:04:00.982818360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:04:00.982915 containerd[1466]: time="2025-05-10T00:04:00.982836320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 10 00:04:00.982915 containerd[1466]: time="2025-05-10T00:04:00.982852640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 10 00:04:00.982915 containerd[1466]: time="2025-05-10T00:04:00.982867160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:04:00.983379 containerd[1466]: time="2025-05-10T00:04:00.983244640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:04:00.983716 containerd[1466]: time="2025-05-10T00:04:00.983675960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 10 00:04:00.983862 containerd[1466]: time="2025-05-10T00:04:00.983840040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 10 00:04:00.983890 containerd[1466]: time="2025-05-10T00:04:00.983864200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 10 00:04:00.983890 containerd[1466]: time="2025-05-10T00:04:00.983878640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 10 00:04:00.983925 containerd[1466]: time="2025-05-10T00:04:00.983893280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:04:00.983925 containerd[1466]: time="2025-05-10T00:04:00.983906840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:04:00.983925 containerd[1466]: time="2025-05-10T00:04:00.983920520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:04:00.983989 containerd[1466]: time="2025-05-10T00:04:00.983935880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:04:00.983989 containerd[1466]: time="2025-05-10T00:04:00.983951560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:04:00.983989 containerd[1466]: time="2025-05-10T00:04:00.983964360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:04:00.983989 containerd[1466]: time="2025-05-10T00:04:00.983977360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:04:00.984057 containerd[1466]: time="2025-05-10T00:04:00.983990400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:04:00.984057 containerd[1466]: time="2025-05-10T00:04:00.984011840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984057 containerd[1466]: time="2025-05-10T00:04:00.984026960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984057 containerd[1466]: time="2025-05-10T00:04:00.984040720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984057 containerd[1466]: time="2025-05-10T00:04:00.984053960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984143 containerd[1466]: time="2025-05-10T00:04:00.984076040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984143 containerd[1466]: time="2025-05-10T00:04:00.984093160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984143 containerd[1466]: time="2025-05-10T00:04:00.984105680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984143 containerd[1466]: time="2025-05-10T00:04:00.984119080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 10 00:04:00.984143 containerd[1466]: time="2025-05-10T00:04:00.984132360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984227 containerd[1466]: time="2025-05-10T00:04:00.984146800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984227 containerd[1466]: time="2025-05-10T00:04:00.984159880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984227 containerd[1466]: time="2025-05-10T00:04:00.984172080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984227 containerd[1466]: time="2025-05-10T00:04:00.984183920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984227 containerd[1466]: time="2025-05-10T00:04:00.984199400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 10 00:04:00.984227 containerd[1466]: time="2025-05-10T00:04:00.984226440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984326 containerd[1466]: time="2025-05-10T00:04:00.984239360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984326 containerd[1466]: time="2025-05-10T00:04:00.984252320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:04:00.984386 containerd[1466]: time="2025-05-10T00:04:00.984366200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:04:00.984410 containerd[1466]: time="2025-05-10T00:04:00.984384640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 10 00:04:00.984410 containerd[1466]: time="2025-05-10T00:04:00.984395400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:04:00.984649 containerd[1466]: time="2025-05-10T00:04:00.984407960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 10 00:04:00.984649 containerd[1466]: time="2025-05-10T00:04:00.984417960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:04:00.984649 containerd[1466]: time="2025-05-10T00:04:00.984430560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 10 00:04:00.984649 containerd[1466]: time="2025-05-10T00:04:00.984441760Z" level=info msg="NRI interface is disabled by configuration." May 10 00:04:00.984649 containerd[1466]: time="2025-05-10T00:04:00.984456000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 10 00:04:00.984946 containerd[1466]: time="2025-05-10T00:04:00.984863760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:04:00.985063 containerd[1466]: time="2025-05-10T00:04:00.984959960Z" level=info msg="Connect containerd service" May 10 00:04:00.985063 containerd[1466]: time="2025-05-10T00:04:00.984990840Z" level=info msg="using legacy CRI server" May 10 00:04:00.985063 containerd[1466]: time="2025-05-10T00:04:00.984997640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 10 00:04:00.985126 containerd[1466]: time="2025-05-10T00:04:00.985102280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:04:00.994938 containerd[1466]: time="2025-05-10T00:04:00.994880840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:04:00.995331 
containerd[1466]: time="2025-05-10T00:04:00.995181480Z" level=info msg="Start subscribing containerd event" May 10 00:04:00.995331 containerd[1466]: time="2025-05-10T00:04:00.995244320Z" level=info msg="Start recovering state" May 10 00:04:00.995450 containerd[1466]: time="2025-05-10T00:04:00.995425120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 00:04:00.995491 containerd[1466]: time="2025-05-10T00:04:00.995471440Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:04:00.995763 containerd[1466]: time="2025-05-10T00:04:00.995529440Z" level=info msg="Start event monitor" May 10 00:04:00.995763 containerd[1466]: time="2025-05-10T00:04:00.995554160Z" level=info msg="Start snapshots syncer" May 10 00:04:00.995763 containerd[1466]: time="2025-05-10T00:04:00.995612000Z" level=info msg="Start cni network conf syncer for default" May 10 00:04:00.995763 containerd[1466]: time="2025-05-10T00:04:00.995622320Z" level=info msg="Start streaming server" May 10 00:04:00.996065 containerd[1466]: time="2025-05-10T00:04:00.995907560Z" level=info msg="containerd successfully booted in 0.074956s" May 10 00:04:00.996136 systemd[1]: Started containerd.service - containerd container runtime. May 10 00:04:01.184894 systemd-networkd[1375]: eth0: Gained IPv6LL May 10 00:04:01.185439 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. May 10 00:04:01.189730 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 10 00:04:01.191346 systemd[1]: Reached target network-online.target - Network is Online. May 10 00:04:01.202005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:01.205052 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 10 00:04:01.255750 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 10 00:04:01.302030 tar[1457]: linux-arm64/LICENSE May 10 00:04:01.302252 tar[1457]: linux-arm64/README.md May 10 00:04:01.316618 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 00:04:01.421589 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:04:01.442816 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 10 00:04:01.453129 systemd[1]: Starting issuegen.service - Generate /run/issue... May 10 00:04:01.465889 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:04:01.467609 systemd[1]: Finished issuegen.service - Generate /run/issue. May 10 00:04:01.477823 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 10 00:04:01.489711 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 10 00:04:01.501134 systemd[1]: Started getty@tty1.service - Getty on tty1. May 10 00:04:01.504491 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 10 00:04:01.506817 systemd[1]: Reached target getty.target - Login Prompts. May 10 00:04:01.930813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:01.933291 systemd[1]: Reached target multi-user.target - Multi-User System. May 10 00:04:01.935867 systemd[1]: Startup finished in 762ms (kernel) + 7.098s (initrd) + 4.184s (userspace) = 12.046s. 
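[Annotation] The "Startup finished" summary above adds three boot phases. Re-adding the rounded per-phase figures lands within a couple of milliseconds of the printed total, since systemd sums the exact microsecond values before rounding:

```python
# Re-add the rounded phase timings from "Startup finished in ...":
kernel, initrd, userspace = 0.762, 7.098, 4.184
print(f"{kernel + initrd + userspace:.3f}s")  # 12.044s -- vs. the logged
                                              # 12.046s (rounding of the
                                              # per-phase values).
```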
May 10 00:04:01.937492 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:01.953005 systemd-networkd[1375]: eth1: Gained IPv6LL May 10 00:04:01.954386 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. May 10 00:04:02.453805 kubelet[1570]: E0510 00:04:02.453719 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:02.457523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:02.457714 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:12.708204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:04:12.717925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:12.833710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:12.838590 (kubelet)[1589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:12.887682 kubelet[1589]: E0510 00:04:12.887637 1589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:12.890877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:12.891128 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:23.143483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:04:23.157047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:23.270142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:23.285267 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:23.334748 kubelet[1604]: E0510 00:04:23.334682 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:23.337277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:23.337597 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:23.351602 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 10 00:04:23.362730 systemd[1]: Started sshd@0-23.88.119.94:22-88.218.193.254:52038.service - OpenSSH per-connection server daemon (88.218.193.254:52038). May 10 00:04:23.461017 sshd[1612]: Connection closed by 88.218.193.254 port 52038 May 10 00:04:23.461933 systemd[1]: sshd@0-23.88.119.94:22-88.218.193.254:52038.service: Deactivated successfully. 
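[Annotation] The kubelet failures above (and the repeats that follow) all have the same cause: /var/lib/kubelet/config.yaml does not exist yet, which is normal on a node that has not run kubeadm init/join. The failure-to-restart cadence is roughly 10 seconds, consistent with Restart=on-failure plus RestartSec=10s in the unit -- an inference from the timestamps, as the unit file itself is not shown in this log. A small sketch computing that cadence from journal timestamps like the ones above:

```python
# Estimate the kubelet restart cadence from journal timestamps
# (failure at 00:04:02, "Scheduled restart" at 00:04:12, ...).
from datetime import datetime

events = ["00:04:02.457714", "00:04:12.708204",
          "00:04:23.143483", "00:04:33.347898"]
stamps = [datetime.strptime(t, "%H:%M:%S.%f") for t in events]
for a, b in zip(stamps, stamps[1:]):
    print(f"{(b - a).total_seconds():.1f}s")  # ~10.2-10.4s between cycles
```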
May 10 00:04:32.109867 systemd-timesyncd[1357]: Contacted time server 185.248.189.10:123 (2.flatcar.pool.ntp.org). May 10 00:04:32.109978 systemd-timesyncd[1357]: Initial clock synchronization to Sat 2025-05-10 00:04:32.099572 UTC. May 10 00:04:33.347898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 00:04:33.358083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:33.503482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:33.508243 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:33.546855 kubelet[1623]: E0510 00:04:33.546770 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:33.552366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:33.552626 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:43.597732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 10 00:04:43.608067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:43.715173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:43.720603 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:43.761723 kubelet[1638]: E0510 00:04:43.761633 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:43.763922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:43.764070 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:45.920927 update_engine[1452]: I20250510 00:04:45.920791 1452 update_attempter.cc:509] Updating boot flags... May 10 00:04:45.976222 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1654) May 10 00:04:46.032747 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1654) May 10 00:04:46.084952 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1654) May 10 00:04:53.847791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 10 00:04:53.859104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:53.962551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 00:04:53.974474 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:54.018936 kubelet[1674]: E0510 00:04:54.018851 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:54.022643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:54.022907 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:04.097358 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 10 00:05:04.115498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:04.260020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:04.260181 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:04.309556 kubelet[1689]: E0510 00:05:04.309473 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:04.311379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:04.311527 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:14.347813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 10 00:05:14.358040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:14.457878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:14.469263 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:14.508329 kubelet[1703]: E0510 00:05:14.508259 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:14.510478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:14.510708 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:24.597959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 10 00:05:24.608102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:24.713062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 00:05:24.717876 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:24.759794 kubelet[1719]: E0510 00:05:24.759725 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:24.763154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:24.763425 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:34.847639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 10 00:05:34.856453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:34.980187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:34.985756 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:35.028609 kubelet[1734]: E0510 00:05:35.028505 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:35.031470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:35.031667 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:43.780152 systemd[1]: Started sshd@1-23.88.119.94:22-147.75.109.163:43536.service - OpenSSH per-connection server daemon (147.75.109.163:43536). May 10 00:05:44.780039 sshd[1742]: Accepted publickey for core from 147.75.109.163 port 43536 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:44.783613 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:44.792583 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 10 00:05:44.803328 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 10 00:05:44.808289 systemd-logind[1451]: New session 1 of user core. May 10 00:05:44.816827 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 00:05:44.821054 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 00:05:44.828123 (systemd)[1746]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:05:44.938072 systemd[1746]: Queued start job for default target default.target. May 10 00:05:44.948124 systemd[1746]: Created slice app.slice - User Application Slice. May 10 00:05:44.948193 systemd[1746]: Reached target paths.target - Paths. May 10 00:05:44.948229 systemd[1746]: Reached target timers.target - Timers. May 10 00:05:44.950473 systemd[1746]: Starting dbus.socket - D-Bus User Message Bus Socket... May 10 00:05:44.975202 systemd[1746]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 00:05:44.975650 systemd[1746]: Reached target sockets.target - Sockets. May 10 00:05:44.975690 systemd[1746]: Reached target basic.target - Basic System. 
May 10 00:05:44.975781 systemd[1746]: Reached target default.target - Main User Target. May 10 00:05:44.975815 systemd[1746]: Startup finished in 141ms. May 10 00:05:44.975896 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 00:05:44.988052 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 00:05:45.098399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 10 00:05:45.105821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:45.219817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:45.224684 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:45.265148 kubelet[1763]: E0510 00:05:45.265087 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:45.267898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:45.268200 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:45.700207 systemd[1]: Started sshd@2-23.88.119.94:22-147.75.109.163:43548.service - OpenSSH per-connection server daemon (147.75.109.163:43548). May 10 00:05:46.712716 sshd[1772]: Accepted publickey for core from 147.75.109.163 port 43548 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:46.715031 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:46.721476 systemd-logind[1451]: New session 2 of user core. May 10 00:05:46.732062 systemd[1]: Started session-2.scope - Session 2 of User core. May 10 00:05:47.419490 sshd[1772]: pam_unix(sshd:session): session closed for user core May 10 00:05:47.424855 systemd[1]: sshd@2-23.88.119.94:22-147.75.109.163:43548.service: Deactivated successfully. May 10 00:05:47.427498 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:05:47.428312 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. May 10 00:05:47.429740 systemd-logind[1451]: Removed session 2. May 10 00:05:47.591624 systemd[1]: Started sshd@3-23.88.119.94:22-147.75.109.163:52730.service - OpenSSH per-connection server daemon (147.75.109.163:52730). May 10 00:05:48.606182 sshd[1779]: Accepted publickey for core from 147.75.109.163 port 52730 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:48.608428 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:48.617581 systemd-logind[1451]: New session 3 of user core. May 10 00:05:48.624290 systemd[1]: Started session-3.scope - Session 3 of User core. May 10 00:05:49.306077 sshd[1779]: pam_unix(sshd:session): session closed for user core May 10 00:05:49.310910 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. May 10 00:05:49.311254 systemd[1]: sshd@3-23.88.119.94:22-147.75.109.163:52730.service: Deactivated successfully. May 10 00:05:49.314204 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:05:49.316993 systemd-logind[1451]: Removed session 3. 
May 10 00:05:49.484757 systemd[1]: Started sshd@4-23.88.119.94:22-147.75.109.163:52740.service - OpenSSH per-connection server daemon (147.75.109.163:52740). May 10 00:05:50.509236 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 52740 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:50.511092 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:50.516914 systemd-logind[1451]: New session 4 of user core. May 10 00:05:50.522027 systemd[1]: Started session-4.scope - Session 4 of User core. May 10 00:05:51.213621 sshd[1786]: pam_unix(sshd:session): session closed for user core May 10 00:05:51.218096 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. May 10 00:05:51.220623 systemd[1]: sshd@4-23.88.119.94:22-147.75.109.163:52740.service: Deactivated successfully. May 10 00:05:51.224149 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:05:51.225150 systemd-logind[1451]: Removed session 4. May 10 00:05:51.390157 systemd[1]: Started sshd@5-23.88.119.94:22-147.75.109.163:52744.service - OpenSSH per-connection server daemon (147.75.109.163:52744). May 10 00:05:52.385339 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 52744 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:52.387558 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:52.393570 systemd-logind[1451]: New session 5 of user core. May 10 00:05:52.400047 systemd[1]: Started session-5.scope - Session 5 of User core. May 10 00:05:52.924193 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 00:05:52.924474 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:52.939036 sudo[1796]: pam_unix(sudo:session): session closed for user root May 10 00:05:53.102780 sshd[1793]: pam_unix(sshd:session): session closed for user core May 10 00:05:53.107988 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. May 10 00:05:53.109165 systemd[1]: sshd@5-23.88.119.94:22-147.75.109.163:52744.service: Deactivated successfully. May 10 00:05:53.113344 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:05:53.114950 systemd-logind[1451]: Removed session 5. May 10 00:05:53.284258 systemd[1]: Started sshd@6-23.88.119.94:22-147.75.109.163:52756.service - OpenSSH per-connection server daemon (147.75.109.163:52756). May 10 00:05:54.281554 sshd[1801]: Accepted publickey for core from 147.75.109.163 port 52756 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:54.283957 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:54.289042 systemd-logind[1451]: New session 6 of user core. May 10 00:05:54.297394 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 10 00:05:54.817155 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 00:05:54.817550 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:54.822764 sudo[1805]: pam_unix(sudo:session): session closed for user root May 10 00:05:54.830655 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 10 00:05:54.831060 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:54.847032 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 10 00:05:54.848531 auditctl[1808]: No rules May 10 00:05:54.849088 systemd[1]: audit-rules.service: Deactivated successfully. May 10 00:05:54.849271 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 10 00:05:54.853666 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 10 00:05:54.889881 augenrules[1826]: No rules May 10 00:05:54.892799 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 10 00:05:54.894921 sudo[1804]: pam_unix(sudo:session): session closed for user root May 10 00:05:55.058913 sshd[1801]: pam_unix(sshd:session): session closed for user core May 10 00:05:55.064166 systemd[1]: sshd@6-23.88.119.94:22-147.75.109.163:52756.service: Deactivated successfully. May 10 00:05:55.067492 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:05:55.070215 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. May 10 00:05:55.071385 systemd-logind[1451]: Removed session 6. May 10 00:05:55.232280 systemd[1]: Started sshd@7-23.88.119.94:22-147.75.109.163:52770.service - OpenSSH per-connection server daemon (147.75.109.163:52770). May 10 00:05:55.347985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 10 00:05:55.365614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:55.469344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:55.475110 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:55.516281 kubelet[1844]: E0510 00:05:55.516218 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:55.519320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:55.519540 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:56.243513 sshd[1834]: Accepted publickey for core from 147.75.109.163 port 52770 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:56.246907 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:56.252602 systemd-logind[1451]: New session 7 of user core. May 10 00:05:56.262044 systemd[1]: Started session-7.scope - Session 7 of User core. 
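The sudo commands above flush the shipped audit rules: two rule files are deleted, audit-rules is restarted, and both auditctl and augenrules then report "No rules". A sketch of the same sequence; the commands are copied from the log, the Python wrapper is purely illustrative:

```python
# Replays the two privileged steps visible in the log: remove the shipped
# audit rule files, then restart the audit-rules unit so the (now empty)
# rule set is reloaded.
import subprocess

for cmd in (
    ["rm", "-rf", "/etc/audit/rules.d/80-selinux.rules",
     "/etc/audit/rules.d/99-default.rules"],
    ["systemctl", "restart", "audit-rules"],
):
    subprocess.run(cmd, check=True)
```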
May 10 00:05:56.777049 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:05:56.778347 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:57.062074 systemd[1]: Starting docker.service - Docker Application Container Engine... May 10 00:05:57.065490 (dockerd)[1868]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 10 00:05:57.316391 dockerd[1868]: time="2025-05-10T00:05:57.316228528Z" level=info msg="Starting up" May 10 00:05:57.413873 dockerd[1868]: time="2025-05-10T00:05:57.413807579Z" level=info msg="Loading containers: start." May 10 00:05:57.518757 kernel: Initializing XFRM netlink socket May 10 00:05:57.600847 systemd-networkd[1375]: docker0: Link UP May 10 00:05:57.625869 dockerd[1868]: time="2025-05-10T00:05:57.625735182Z" level=info msg="Loading containers: done." May 10 00:05:57.643620 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1269819297-merged.mount: Deactivated successfully. May 10 00:05:57.645479 dockerd[1868]: time="2025-05-10T00:05:57.645163460Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:05:57.645479 dockerd[1868]: time="2025-05-10T00:05:57.645442271Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 10 00:05:57.646182 dockerd[1868]: time="2025-05-10T00:05:57.645813245Z" level=info msg="Daemon has completed initialization" May 10 00:05:57.691786 dockerd[1868]: time="2025-05-10T00:05:57.691592499Z" level=info msg="API listen on /run/docker.sock" May 10 00:05:57.692830 systemd[1]: Started docker.service - Docker Application Container Engine. May 10 00:05:57.798662 systemd[1]: Started sshd@8-23.88.119.94:22-85.86.224.176:41414.service - OpenSSH per-connection server daemon (85.86.224.176:41414). May 10 00:05:58.102597 sshd[2006]: Invalid user weiqing from 85.86.224.176 port 41414 May 10 00:05:58.148564 sshd[2006]: Received disconnect from 85.86.224.176 port 41414:11: Bye Bye [preauth] May 10 00:05:58.148564 sshd[2006]: Disconnected from invalid user weiqing 85.86.224.176 port 41414 [preauth] May 10 00:05:58.152410 systemd[1]: sshd@8-23.88.119.94:22-85.86.224.176:41414.service: Deactivated successfully. May 10 00:05:58.800265 containerd[1466]: time="2025-05-10T00:05:58.800200816Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 10 00:05:59.478971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597272603.mount: Deactivated successfully. 
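Two details above fit together: the transient `overlay2-opaque\x2dbug\x2dcheck...` mount is Docker probing overlayfs behaviour with a throwaway overlay mount, and the resulting warning says the native diff fast path is disabled because the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR. A sketch of how one could confirm that kernel option from userspace, assuming the kernel exposes /proc/config.gz (Docker's own check is the probe mount, not this):

```python
# Reads the in-kernel config (if CONFIG_IKCONFIG_PROC is enabled) and looks
# up a single option; some distros instead ship /boot/config-$(uname -r).
import gzip

def kernel_config_value(option: str, path: str = "/proc/config.gz") -> str | None:
    with gzip.open(path, "rt") as cfg:
        for line in cfg:
            line = line.strip()
            if line.startswith(option + "="):
                return line.split("=", 1)[1]
    return None

print(kernel_config_value("CONFIG_OVERLAY_FS_REDIRECT_DIR"))  # e.g. "y"
```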
May 10 00:06:00.343072 containerd[1466]: time="2025-05-10T00:06:00.342943831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.344554 containerd[1466]: time="2025-05-10T00:06:00.344188034Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554700" May 10 00:06:00.345571 containerd[1466]: time="2025-05-10T00:06:00.345512559Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.349555 containerd[1466]: time="2025-05-10T00:06:00.349478415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.351608 containerd[1466]: time="2025-05-10T00:06:00.351545006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.551280027s" May 10 00:06:00.352311 containerd[1466]: time="2025-05-10T00:06:00.351764814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 10 00:06:00.352630 containerd[1466]: time="2025-05-10T00:06:00.352591162Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 10 00:06:01.475425 containerd[1466]: time="2025-05-10T00:06:01.475331613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:01.476660 containerd[1466]: time="2025-05-10T00:06:01.476595495Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458998" May 10 00:06:01.477593 containerd[1466]: time="2025-05-10T00:06:01.477537927Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:01.481022 containerd[1466]: time="2025-05-10T00:06:01.480936360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:01.482435 containerd[1466]: time="2025-05-10T00:06:01.482256164Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.129628321s" May 10 00:06:01.482435 containerd[1466]: time="2025-05-10T00:06:01.482297486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 10 00:06:01.483195 
containerd[1466]: time="2025-05-10T00:06:01.483111393Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 10 00:06:02.485905 containerd[1466]: time="2025-05-10T00:06:02.485840801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.487007 containerd[1466]: time="2025-05-10T00:06:02.486960997Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125833" May 10 00:06:02.487968 containerd[1466]: time="2025-05-10T00:06:02.487924309Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.491397 containerd[1466]: time="2025-05-10T00:06:02.491264617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.492861 containerd[1466]: time="2025-05-10T00:06:02.492675863Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.009348103s" May 10 00:06:02.492861 containerd[1466]: time="2025-05-10T00:06:02.492766426Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 10 00:06:02.494068 containerd[1466]: time="2025-05-10T00:06:02.493849742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 10 00:06:03.466925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3549693868.mount: Deactivated successfully. 
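Each pull above reports a "bytes read" figure and a wall-clock duration, which yields a rough effective transfer rate (it ignores decompression and layer unpacking). Worked with the kube-scheduler numbers copied from the log:

```python
# Effective pull rate for registry.k8s.io/kube-scheduler:v1.31.8.
bytes_read = 17_125_833        # "bytes read=17125833"
seconds = 1.009348103          # "in 1.009348103s"
rate_mib_s = bytes_read / seconds / (1 << 20)
print(f"{rate_mib_s:.1f} MiB/s")  # ≈ 16.2 MiB/s
```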
May 10 00:06:03.803524 containerd[1466]: time="2025-05-10T00:06:03.803114838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:03.804292 containerd[1466]: time="2025-05-10T00:06:03.804000826Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871943" May 10 00:06:03.805104 containerd[1466]: time="2025-05-10T00:06:03.805028779Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:03.807861 containerd[1466]: time="2025-05-10T00:06:03.807787587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:03.809249 containerd[1466]: time="2025-05-10T00:06:03.808397446Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.314509503s" May 10 00:06:03.809249 containerd[1466]: time="2025-05-10T00:06:03.808436888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 10 00:06:03.809249 containerd[1466]: time="2025-05-10T00:06:03.809007586Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:06:04.439274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376910601.mount: Deactivated successfully. 
May 10 00:06:05.047718 containerd[1466]: time="2025-05-10T00:06:05.047609025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.049419 containerd[1466]: time="2025-05-10T00:06:05.049377158Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" May 10 00:06:05.050180 containerd[1466]: time="2025-05-10T00:06:05.049772330Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.054184 containerd[1466]: time="2025-05-10T00:06:05.054114582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.056464 containerd[1466]: time="2025-05-10T00:06:05.056228006Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.247183459s" May 10 00:06:05.056464 containerd[1466]: time="2025-05-10T00:06:05.056297968Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 10 00:06:05.057221 containerd[1466]: time="2025-05-10T00:06:05.057096032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 10 00:06:05.597446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 10 00:06:05.604983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:05.630299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970648568.mount: Deactivated successfully. 
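The kubelet restart-counter lines land at 00:05:45, 00:05:55, 00:06:05 and (further down) 00:06:15, i.e. one attempt roughly every 10 s. That cadence is consistent with a unit using something like Restart=always plus RestartSec=10; the unit settings are an assumption, only the spacing is from the log:

```python
# Reproduces the scheduled-restart cadence seen in the journal (the log shows
# ~10.25 s between attempts; exact 10 s steps match the printed seconds).
from datetime import datetime, timedelta

first = datetime(2025, 5, 10, 0, 5, 45)
restarts = [first + timedelta(seconds=10 * i) for i in range(4)]
print([t.strftime("%H:%M:%S") for t in restarts])
# ['00:05:45', '00:05:55', '00:06:05', '00:06:15']
```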
May 10 00:06:05.658017 containerd[1466]: time="2025-05-10T00:06:05.657959632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.663711 containerd[1466]: time="2025-05-10T00:06:05.663597923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" May 10 00:06:05.664913 containerd[1466]: time="2025-05-10T00:06:05.664675756Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.670919 containerd[1466]: time="2025-05-10T00:06:05.669462221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.670919 containerd[1466]: time="2025-05-10T00:06:05.670428130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 613.245016ms" May 10 00:06:05.670919 containerd[1466]: time="2025-05-10T00:06:05.670458651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 10 00:06:05.672160 containerd[1466]: time="2025-05-10T00:06:05.672136662Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 10 00:06:05.715330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:05.720983 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:06:05.764016 kubelet[2137]: E0510 00:06:05.763950 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:06:05.766595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:06:05.766978 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:06:06.294645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590230449.mount: Deactivated successfully. 
May 10 00:06:10.133725 containerd[1466]: time="2025-05-10T00:06:10.133651198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:10.135103 containerd[1466]: time="2025-05-10T00:06:10.135028875Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" May 10 00:06:10.136090 containerd[1466]: time="2025-05-10T00:06:10.136014541Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:10.139420 containerd[1466]: time="2025-05-10T00:06:10.139381752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:10.141052 containerd[1466]: time="2025-05-10T00:06:10.140919673Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.468633767s" May 10 00:06:10.141052 containerd[1466]: time="2025-05-10T00:06:10.140957434Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 10 00:06:15.847275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. May 10 00:06:15.856908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:15.981478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:15.987302 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:06:15.997977 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:15.998834 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:06:15.999184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:16.018716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:16.045135 systemd[1]: Reloading requested from client PID 2237 ('systemctl') (unit session-7.scope)... May 10 00:06:16.045284 systemd[1]: Reloading... May 10 00:06:16.166728 zram_generator::config[2277]: No configuration found. May 10 00:06:16.263236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:16.334232 systemd[1]: Reloading finished in 288 ms. May 10 00:06:16.395469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:16.399639 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:06:16.400930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:16.409219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:16.519259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
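The reload sequence above is requested by a systemctl client running in session-7.scope, plausibly the `install.sh` invoked earlier via sudo: unit and kubelet config files are rewritten, systemd reloads, and kubelet is stopped and started so it finally picks up a real /var/lib/kubelet/config.yaml. A sketch of that sequence using standard systemctl verbs; the orchestration itself is an assumption:

```python
# Mirrors the stop / daemon-reload / start dance visible in the journal.
import subprocess

def reload_and_restart(unit: str = "kubelet.service") -> None:
    subprocess.run(["systemctl", "stop", unit], check=True)
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "start", unit], check=True)
```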
May 10 00:06:16.530221 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:06:16.570735 kubelet[2327]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:16.570735 kubelet[2327]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:06:16.570735 kubelet[2327]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:16.570735 kubelet[2327]: I0510 00:06:16.570457 2327 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:06:17.762989 kubelet[2327]: I0510 00:06:17.762927 2327 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:06:17.762989 kubelet[2327]: I0510 00:06:17.762970 2327 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:06:17.763409 kubelet[2327]: I0510 00:06:17.763328 2327 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:06:17.795578 kubelet[2327]: I0510 00:06:17.795524 2327 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:17.796535 kubelet[2327]: E0510 00:06:17.796058 2327 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://23.88.119.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:17.803296 kubelet[2327]: E0510 00:06:17.803229 2327 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:06:17.803296 kubelet[2327]: I0510 00:06:17.803279 2327 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:06:17.808468 kubelet[2327]: I0510 00:06:17.808140 2327 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:06:17.809307 kubelet[2327]: I0510 00:06:17.809277 2327 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:06:17.809739 kubelet[2327]: I0510 00:06:17.809680 2327 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:06:17.810088 kubelet[2327]: I0510 00:06:17.809838 2327 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-7b3972f1ed","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:06:17.810787 kubelet[2327]: I0510 00:06:17.810411 2327 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:06:17.810787 kubelet[2327]: I0510 00:06:17.810436 2327 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:06:17.810787 kubelet[2327]: I0510 00:06:17.810672 2327 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:17.813941 kubelet[2327]: I0510 00:06:17.813908 2327 kubelet.go:408] "Attempting to sync node with API server" May 10 00:06:17.814073 kubelet[2327]: I0510 00:06:17.814061 2327 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:06:17.814227 kubelet[2327]: I0510 00:06:17.814217 2327 kubelet.go:314] "Adding apiserver pod source" May 10 00:06:17.814294 kubelet[2327]: I0510 00:06:17.814283 2327 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:06:17.822615 kubelet[2327]: W0510 00:06:17.822533 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://23.88.119.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-7b3972f1ed&limit=500&resourceVersion=0": dial tcp 23.88.119.94:6443: connect: connection refused May 10 00:06:17.822762 kubelet[2327]: E0510 00:06:17.822622 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://23.88.119.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-7b3972f1ed&limit=500&resourceVersion=0\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:17.823840 kubelet[2327]: W0510 00:06:17.823782 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://23.88.119.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 23.88.119.94:6443: connect: connection refused May 10 00:06:17.823907 kubelet[2327]: E0510 00:06:17.823843 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://23.88.119.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:17.824779 kubelet[2327]: I0510 00:06:17.824471 2327 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:06:17.826836 kubelet[2327]: I0510 00:06:17.826813 2327 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:06:17.828737 kubelet[2327]: W0510 00:06:17.827917 2327 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:06:17.831110 kubelet[2327]: I0510 00:06:17.831073 2327 server.go:1269] "Started kubelet" May 10 00:06:17.832283 kubelet[2327]: I0510 00:06:17.832257 2327 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:06:17.837186 kubelet[2327]: E0510 00:06:17.835225 2327 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://23.88.119.94:6443/api/v1/namespaces/default/events\": dial tcp 23.88.119.94:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-7b3972f1ed.183e01b3f1a23aae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-7b3972f1ed,UID:ci-4081-3-3-n-7b3972f1ed,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-7b3972f1ed,},FirstTimestamp:2025-05-10 00:06:17.831045806 +0000 UTC m=+1.296994383,LastTimestamp:2025-05-10 00:06:17.831045806 +0000 UTC m=+1.296994383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-7b3972f1ed,}" May 10 00:06:17.840662 kubelet[2327]: I0510 00:06:17.839558 2327 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:06:17.840662 kubelet[2327]: I0510 00:06:17.839574 2327 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:06:17.840662 kubelet[2327]: E0510 00:06:17.839957 2327 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-7b3972f1ed\" not found" May 10 00:06:17.840851 kubelet[2327]: I0510 00:06:17.840627 2327 server.go:460] "Adding debug handlers to kubelet server" May 10 00:06:17.841903 kubelet[2327]: I0510 00:06:17.841851 2327 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:06:17.842217 kubelet[2327]: I0510 00:06:17.842198 2327 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:06:17.842544 kubelet[2327]: I0510 00:06:17.842527 2327 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:06:17.843022 kubelet[2327]: E0510 00:06:17.842983 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://23.88.119.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-7b3972f1ed?timeout=10s\": dial tcp 23.88.119.94:6443: connect: connection refused" interval="200ms" May 10 00:06:17.843208 kubelet[2327]: I0510 00:06:17.843177 2327 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:06:17.843263 kubelet[2327]: I0510 00:06:17.843251 2327 reconciler.go:26] "Reconciler: start to sync state" May 10 00:06:17.845822 kubelet[2327]: W0510 00:06:17.845774 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://23.88.119.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 23.88.119.94:6443: connect: connection refused May 10 00:06:17.846761 kubelet[2327]: E0510 00:06:17.846732 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://23.88.119.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:17.847063 kubelet[2327]: I0510 00:06:17.847043 2327 factory.go:221] Registration of the containerd container factory successfully May 10 00:06:17.847311 kubelet[2327]: I0510 00:06:17.847294 2327 factory.go:221] Registration of the systemd container factory successfully May 10 00:06:17.847499 kubelet[2327]: I0510 00:06:17.847477 2327 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:06:17.858053 kubelet[2327]: I0510 00:06:17.857942 2327 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:06:17.859216 kubelet[2327]: I0510 00:06:17.859184 2327 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:06:17.859216 kubelet[2327]: I0510 00:06:17.859211 2327 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:06:17.859319 kubelet[2327]: I0510 00:06:17.859228 2327 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:06:17.859319 kubelet[2327]: E0510 00:06:17.859269 2327 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:06:17.867040 kubelet[2327]: W0510 00:06:17.866943 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://23.88.119.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 23.88.119.94:6443: connect: connection refused May 10 00:06:17.867040 kubelet[2327]: E0510 00:06:17.867029 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://23.88.119.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:17.878733 kubelet[2327]: I0510 00:06:17.878645 2327 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:06:17.878733 kubelet[2327]: I0510 00:06:17.878664 2327 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:06:17.878733 kubelet[2327]: I0510 00:06:17.878687 2327 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:17.881210 kubelet[2327]: I0510 00:06:17.881177 2327 policy_none.go:49] "None policy: Start" May 10 00:06:17.882036 kubelet[2327]: I0510 00:06:17.882013 2327 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:06:17.882135 kubelet[2327]: I0510 00:06:17.882047 2327 state_mem.go:35] "Initializing new in-memory state store" May 10 00:06:17.890411 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 10 00:06:17.901478 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 10 00:06:17.906804 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 10 00:06:17.918387 kubelet[2327]: I0510 00:06:17.918292 2327 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:06:17.918797 kubelet[2327]: I0510 00:06:17.918623 2327 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:06:17.918797 kubelet[2327]: I0510 00:06:17.918655 2327 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:06:17.919337 kubelet[2327]: I0510 00:06:17.919308 2327 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:06:17.921215 kubelet[2327]: E0510 00:06:17.921168 2327 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-7b3972f1ed\" not found" May 10 00:06:17.974564 systemd[1]: Created slice kubepods-burstable-pod45b184450da576ac463053d572068de1.slice - libcontainer container kubepods-burstable-pod45b184450da576ac463053d572068de1.slice. May 10 00:06:17.989660 systemd[1]: Created slice kubepods-burstable-pode2d11838b529fb5c5e609514fc833ecc.slice - libcontainer container kubepods-burstable-pode2d11838b529fb5c5e609514fc833ecc.slice. 
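The container_manager_linux entry a few lines back dumps the resolved node config as JSON; extracting its HardEvictionThresholds makes the defaults legible: evict when memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, or imagefs.inodesFree < 5%. A sketch parsing that blob, trimmed to two thresholds with the structure preserved:

```python
import json

# Trimmed from the NodeConfig JSON in the log line above.
node_config = json.loads('''{"HardEvictionThresholds":[
 {"Signal":"memory.available","Operator":"LessThan",
  "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.available","Operator":"LessThan",
  "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]}''')

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    val = v["Quantity"] if v["Quantity"] else f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]} {t["Operator"]} {val}')
# memory.available LessThan 100Mi
# nodefs.available LessThan 10%
```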
May 10 00:06:17.997542 systemd[1]: Created slice kubepods-burstable-pod67274e7edf261f59f37f3ece1893177a.slice - libcontainer container kubepods-burstable-pod67274e7edf261f59f37f3ece1893177a.slice. May 10 00:06:18.021688 kubelet[2327]: I0510 00:06:18.021516 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.022850 kubelet[2327]: E0510 00:06:18.022812 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://23.88.119.94:6443/api/v1/nodes\": dial tcp 23.88.119.94:6443: connect: connection refused" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.044602 kubelet[2327]: E0510 00:06:18.044515 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://23.88.119.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-7b3972f1ed?timeout=10s\": dial tcp 23.88.119.94:6443: connect: connection refused" interval="400ms" May 10 00:06:18.044863 kubelet[2327]: I0510 00:06:18.044426 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.045284 kubelet[2327]: I0510 00:06:18.045047 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67274e7edf261f59f37f3ece1893177a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-7b3972f1ed\" (UID: \"67274e7edf261f59f37f3ece1893177a\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.045284 kubelet[2327]: I0510 00:06:18.045133 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.045284 kubelet[2327]: I0510 00:06:18.045232 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.045800 kubelet[2327]: I0510 00:06:18.045547 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45b184450da576ac463053d572068de1-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-7b3972f1ed\" (UID: \"45b184450da576ac463053d572068de1\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.045800 kubelet[2327]: I0510 00:06:18.045638 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45b184450da576ac463053d572068de1-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-7b3972f1ed\" (UID: \"45b184450da576ac463053d572068de1\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.045800 
kubelet[2327]: I0510 00:06:18.045683 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45b184450da576ac463053d572068de1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-7b3972f1ed\" (UID: \"45b184450da576ac463053d572068de1\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.045800 kubelet[2327]: I0510 00:06:18.045760 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.046245 kubelet[2327]: I0510 00:06:18.046160 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.225712 kubelet[2327]: I0510 00:06:18.225662 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.226070 kubelet[2327]: E0510 00:06:18.226044 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://23.88.119.94:6443/api/v1/nodes\": dial tcp 23.88.119.94:6443: connect: connection refused" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.288070 containerd[1466]: time="2025-05-10T00:06:18.287682975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-7b3972f1ed,Uid:45b184450da576ac463053d572068de1,Namespace:kube-system,Attempt:0,}" May 10 00:06:18.295947 containerd[1466]: time="2025-05-10T00:06:18.295781837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-7b3972f1ed,Uid:e2d11838b529fb5c5e609514fc833ecc,Namespace:kube-system,Attempt:0,}" May 10 00:06:18.302810 containerd[1466]: time="2025-05-10T00:06:18.302754433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-7b3972f1ed,Uid:67274e7edf261f59f37f3ece1893177a,Namespace:kube-system,Attempt:0,}" May 10 00:06:18.446023 kubelet[2327]: E0510 00:06:18.445953 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://23.88.119.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-7b3972f1ed?timeout=10s\": dial tcp 23.88.119.94:6443: connect: connection refused" interval="800ms" May 10 00:06:18.630317 kubelet[2327]: I0510 00:06:18.629948 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.630725 kubelet[2327]: E0510 00:06:18.630549 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://23.88.119.94:6443/api/v1/nodes\": dial tcp 23.88.119.94:6443: connect: connection refused" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:18.867325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721398723.mount: Deactivated successfully. 
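The "Failed to ensure lease exists, will retry" entries back off by doubling: interval=200ms, then 400ms, then 800ms above, and 1.6s further down, while the API server (which the kubelet is itself launching as static pods) still refuses connections. The schedule is a plain exponential backoff:

```python
# Reproduces the lease-controller retry intervals seen in the log.
def lease_backoff(base_ms: int = 200, tries: int = 4) -> list[int]:
    return [base_ms * 2**i for i in range(tries)]

print(lease_backoff())  # [200, 400, 800, 1600]
```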
May 10 00:06:18.874560 containerd[1466]: time="2025-05-10T00:06:18.874473501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.875754 containerd[1466]: time="2025-05-10T00:06:18.875628086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.877208 containerd[1466]: time="2025-05-10T00:06:18.877097879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:06:18.877952 containerd[1466]: time="2025-05-10T00:06:18.877680652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 10 00:06:18.879581 containerd[1466]: time="2025-05-10T00:06:18.879291649Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.880620 containerd[1466]: time="2025-05-10T00:06:18.880516836Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.881556 containerd[1466]: time="2025-05-10T00:06:18.881484258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:06:18.883605 containerd[1466]: time="2025-05-10T00:06:18.883479823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.886432 containerd[1466]: time="2025-05-10T00:06:18.886171963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.309767ms" May 10 00:06:18.887220 containerd[1466]: time="2025-05-10T00:06:18.887165225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.220745ms" May 10 00:06:18.893826 containerd[1466]: time="2025-05-10T00:06:18.892196498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.344519ms" May 10 00:06:19.026249 containerd[1466]: time="2025-05-10T00:06:19.026074010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:19.027924 containerd[1466]: time="2025-05-10T00:06:19.026143252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:19.027924 containerd[1466]: time="2025-05-10T00:06:19.027841249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:19.028911 containerd[1466]: time="2025-05-10T00:06:19.028760149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:19.035878 containerd[1466]: time="2025-05-10T00:06:19.035406815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:19.035878 containerd[1466]: time="2025-05-10T00:06:19.035466296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:19.035878 containerd[1466]: time="2025-05-10T00:06:19.035487817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:19.037449 containerd[1466]: time="2025-05-10T00:06:19.037277576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:19.039611 containerd[1466]: time="2025-05-10T00:06:19.037686785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:19.042044 containerd[1466]: time="2025-05-10T00:06:19.040781293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:19.042044 containerd[1466]: time="2025-05-10T00:06:19.040810934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:19.042044 containerd[1466]: time="2025-05-10T00:06:19.041893197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:19.062928 systemd[1]: Started cri-containerd-5c78a6f9b71245fd9912e5e6f76d4bbe11874fc7f38f4f96693ec65eb19397c6.scope - libcontainer container 5c78a6f9b71245fd9912e5e6f76d4bbe11874fc7f38f4f96693ec65eb19397c6. May 10 00:06:19.066552 systemd[1]: Started cri-containerd-6f0bb3a79185aa6e2f11d9cc04e628eec90d439d8d03ea7bb420de0bce17c716.scope - libcontainer container 6f0bb3a79185aa6e2f11d9cc04e628eec90d439d8d03ea7bb420de0bce17c716. May 10 00:06:19.077425 systemd[1]: Started cri-containerd-d35c50f0b5a60fa37542e9d43920e191ba1d52bbce40370f056ec84199b9b467.scope - libcontainer container d35c50f0b5a60fa37542e9d43920e191ba1d52bbce40370f056ec84199b9b467. 
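Each sandbox start above is paired with systemd creating a transient scope named after the container: cri-containerd-&lt;64-hex-digit container id&gt;.scope, which is how the runc shim v2 places containers under systemd-managed cgroups. A one-line sketch of the naming convention (convention only, not containerd code):

```python
# Maps a CRI container id to the transient systemd scope seen in the journal.
def scope_name(container_id: str) -> str:
    return f"cri-containerd-{container_id}.scope"

print(scope_name("5c78a6f9b71245fd9912e5e6f76d4bbe11874fc7f38f4f96693ec65eb19397c6"))
# cri-containerd-5c78a6f9b71245fd9912e5e6f76d4bbe11874fc7f38f4f96693ec65eb19397c6.scope
```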
May 10 00:06:19.103068 kubelet[2327]: W0510 00:06:19.102660 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://23.88.119.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 23.88.119.94:6443: connect: connection refused May 10 00:06:19.103068 kubelet[2327]: E0510 00:06:19.103056 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://23.88.119.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:19.123443 containerd[1466]: time="2025-05-10T00:06:19.122536328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-7b3972f1ed,Uid:67274e7edf261f59f37f3ece1893177a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c78a6f9b71245fd9912e5e6f76d4bbe11874fc7f38f4f96693ec65eb19397c6\"" May 10 00:06:19.132555 containerd[1466]: time="2025-05-10T00:06:19.132041497Z" level=info msg="CreateContainer within sandbox \"5c78a6f9b71245fd9912e5e6f76d4bbe11874fc7f38f4f96693ec65eb19397c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:06:19.142419 containerd[1466]: time="2025-05-10T00:06:19.142334283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-7b3972f1ed,Uid:45b184450da576ac463053d572068de1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f0bb3a79185aa6e2f11d9cc04e628eec90d439d8d03ea7bb420de0bce17c716\"" May 10 00:06:19.143059 containerd[1466]: time="2025-05-10T00:06:19.143031058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-7b3972f1ed,Uid:e2d11838b529fb5c5e609514fc833ecc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d35c50f0b5a60fa37542e9d43920e191ba1d52bbce40370f056ec84199b9b467\"" May 10 00:06:19.148976 containerd[1466]: time="2025-05-10T00:06:19.148853466Z" level=info msg="CreateContainer within sandbox \"6f0bb3a79185aa6e2f11d9cc04e628eec90d439d8d03ea7bb420de0bce17c716\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:06:19.151282 containerd[1466]: time="2025-05-10T00:06:19.151145636Z" level=info msg="CreateContainer within sandbox \"d35c50f0b5a60fa37542e9d43920e191ba1d52bbce40370f056ec84199b9b467\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:06:19.162736 containerd[1466]: time="2025-05-10T00:06:19.162579607Z" level=info msg="CreateContainer within sandbox \"5c78a6f9b71245fd9912e5e6f76d4bbe11874fc7f38f4f96693ec65eb19397c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7517100310b4c4ae980294e760793dba726839b8a6e7e8febca70860df5e6925\"" May 10 00:06:19.163347 containerd[1466]: time="2025-05-10T00:06:19.163321944Z" level=info msg="StartContainer for \"7517100310b4c4ae980294e760793dba726839b8a6e7e8febca70860df5e6925\"" May 10 00:06:19.173518 containerd[1466]: time="2025-05-10T00:06:19.173265682Z" level=info msg="CreateContainer within sandbox \"6f0bb3a79185aa6e2f11d9cc04e628eec90d439d8d03ea7bb420de0bce17c716\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a945c1e2e04e1e7864191396658baef87aece52b07df1b60bec69e041de5102d\"" May 10 00:06:19.175803 containerd[1466]: time="2025-05-10T00:06:19.175265966Z" level=info msg="StartContainer for 
\"a945c1e2e04e1e7864191396658baef87aece52b07df1b60bec69e041de5102d\"" May 10 00:06:19.178119 containerd[1466]: time="2025-05-10T00:06:19.178078227Z" level=info msg="CreateContainer within sandbox \"d35c50f0b5a60fa37542e9d43920e191ba1d52bbce40370f056ec84199b9b467\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23904238d784cd4d89f3903eab4fcdf0c0a5258b926550b75d1467f3ce6075d9\"" May 10 00:06:19.178620 containerd[1466]: time="2025-05-10T00:06:19.178597719Z" level=info msg="StartContainer for \"23904238d784cd4d89f3903eab4fcdf0c0a5258b926550b75d1467f3ce6075d9\"" May 10 00:06:19.199232 systemd[1]: Started cri-containerd-7517100310b4c4ae980294e760793dba726839b8a6e7e8febca70860df5e6925.scope - libcontainer container 7517100310b4c4ae980294e760793dba726839b8a6e7e8febca70860df5e6925. May 10 00:06:19.218977 systemd[1]: Started cri-containerd-a945c1e2e04e1e7864191396658baef87aece52b07df1b60bec69e041de5102d.scope - libcontainer container a945c1e2e04e1e7864191396658baef87aece52b07df1b60bec69e041de5102d. May 10 00:06:19.235145 systemd[1]: Started cri-containerd-23904238d784cd4d89f3903eab4fcdf0c0a5258b926550b75d1467f3ce6075d9.scope - libcontainer container 23904238d784cd4d89f3903eab4fcdf0c0a5258b926550b75d1467f3ce6075d9. May 10 00:06:19.246838 kubelet[2327]: E0510 00:06:19.246758 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://23.88.119.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-7b3972f1ed?timeout=10s\": dial tcp 23.88.119.94:6443: connect: connection refused" interval="1.6s" May 10 00:06:19.283569 kubelet[2327]: W0510 00:06:19.283461 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://23.88.119.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 23.88.119.94:6443: connect: connection refused May 10 00:06:19.283569 kubelet[2327]: E0510 00:06:19.283544 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://23.88.119.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:19.286820 containerd[1466]: time="2025-05-10T00:06:19.286768814Z" level=info msg="StartContainer for \"a945c1e2e04e1e7864191396658baef87aece52b07df1b60bec69e041de5102d\" returns successfully" May 10 00:06:19.301340 containerd[1466]: time="2025-05-10T00:06:19.300466395Z" level=info msg="StartContainer for \"7517100310b4c4ae980294e760793dba726839b8a6e7e8febca70860df5e6925\" returns successfully" May 10 00:06:19.306146 containerd[1466]: time="2025-05-10T00:06:19.306077518Z" level=info msg="StartContainer for \"23904238d784cd4d89f3903eab4fcdf0c0a5258b926550b75d1467f3ce6075d9\" returns successfully" May 10 00:06:19.356467 kubelet[2327]: W0510 00:06:19.356391 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://23.88.119.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 23.88.119.94:6443: connect: connection refused May 10 00:06:19.356467 kubelet[2327]: E0510 00:06:19.356468 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://23.88.119.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:19.410217 kubelet[2327]: W0510 00:06:19.410062 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://23.88.119.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-7b3972f1ed&limit=500&resourceVersion=0": dial tcp 23.88.119.94:6443: connect: connection refused May 10 00:06:19.410217 kubelet[2327]: E0510 00:06:19.410139 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://23.88.119.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-7b3972f1ed&limit=500&resourceVersion=0\": dial tcp 23.88.119.94:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:19.432966 kubelet[2327]: I0510 00:06:19.432927 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:21.650825 kubelet[2327]: E0510 00:06:21.650766 2327 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-7b3972f1ed\" not found" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:21.735460 kubelet[2327]: E0510 00:06:21.735331 2327 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-3-n-7b3972f1ed.183e01b3f1a23aae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-7b3972f1ed,UID:ci-4081-3-3-n-7b3972f1ed,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-7b3972f1ed,},FirstTimestamp:2025-05-10 00:06:17.831045806 +0000 UTC m=+1.296994383,LastTimestamp:2025-05-10 00:06:17.831045806 +0000 UTC m=+1.296994383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-7b3972f1ed,}" May 10 00:06:21.791855 kubelet[2327]: E0510 00:06:21.791737 2327 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-3-n-7b3972f1ed.183e01b3f46dc221 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-7b3972f1ed,UID:ci-4081-3-3-n-7b3972f1ed,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-3-n-7b3972f1ed status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-7b3972f1ed,},FirstTimestamp:2025-05-10 00:06:17.877938721 +0000 UTC m=+1.343887298,LastTimestamp:2025-05-10 00:06:17.877938721 +0000 UTC m=+1.343887298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-7b3972f1ed,}" May 10 00:06:21.833263 kubelet[2327]: I0510 00:06:21.833036 2327 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:21.833263 kubelet[2327]: E0510 00:06:21.833080 2327 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-7b3972f1ed\": node \"ci-4081-3-3-n-7b3972f1ed\" not found" May 10 00:06:21.870037 kubelet[2327]: E0510 00:06:21.869996 2327 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-7b3972f1ed\" not found" May 10 00:06:22.825400 kubelet[2327]: I0510 00:06:22.825142 2327 apiserver.go:52] "Watching apiserver" May 10 00:06:22.844315 kubelet[2327]: I0510 00:06:22.844252 2327 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:06:23.975202 systemd[1]: Reloading requested from client PID 2599 ('systemctl') (unit session-7.scope)... May 10 00:06:23.975546 systemd[1]: Reloading... May 10 00:06:24.065732 zram_generator::config[2642]: No configuration found. May 10 00:06:24.167506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:24.248978 systemd[1]: Reloading finished in 273 ms. May 10 00:06:24.284964 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:24.297210 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:06:24.297578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:24.297667 systemd[1]: kubelet.service: Consumed 1.725s CPU time, 114.9M memory peak, 0B memory swap peak. May 10 00:06:24.310615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:24.438234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:24.438280 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:06:24.503868 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:24.503868 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:06:24.503868 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:24.503868 kubelet[2684]: I0510 00:06:24.501647 2684 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:06:24.511860 kubelet[2684]: I0510 00:06:24.511277 2684 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:06:24.513780 kubelet[2684]: I0510 00:06:24.512424 2684 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:06:24.516086 kubelet[2684]: I0510 00:06:24.514479 2684 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:06:24.519496 kubelet[2684]: I0510 00:06:24.519423 2684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
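
[Annotation] The reflector, lease, and node-status failures above all reduce to the same TCP dial: the kubelet (pid 2327) cannot reach https://23.88.119.94:6443 because the kube-apiserver static pod is still coming up on this very node, so the kernel answers with "connection refused" rather than a timeout. A minimal sketch of that dial, useful for telling "nothing listening yet" apart from a network or firewall problem (address taken from the log; the 2-second timeout is an arbitrary choice):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the kubelet reflectors are retrying against.
	conn, err := net.DialTimeout("tcp", "23.88.119.94:6443", 2*time.Second)
	if err != nil {
		// "connect: connection refused" => port closed (apiserver not up yet);
		// an i/o timeout here would instead point at routing or a firewall.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```

Once the static pods started by containerd above are running, these errors stop, which matches the successful node registration later in the log.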
May 10 00:06:24.521891 kubelet[2684]: I0510 00:06:24.521862 2684 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:24.526731 kubelet[2684]: E0510 00:06:24.526677 2684 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:06:24.526731 kubelet[2684]: I0510 00:06:24.526729 2684 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:06:24.528986 kubelet[2684]: I0510 00:06:24.528962 2684 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:06:24.529101 kubelet[2684]: I0510 00:06:24.529084 2684 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:06:24.529215 kubelet[2684]: I0510 00:06:24.529182 2684 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:06:24.529392 kubelet[2684]: I0510 00:06:24.529214 2684 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-7b3972f1ed","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:06:24.529493 kubelet[2684]: I0510 00:06:24.529404 2684 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:06:24.529493 kubelet[2684]: I0510 00:06:24.529415 2684 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:06:24.529493 kubelet[2684]: I0510 00:06:24.529482 2684 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:24.529620 kubelet[2684]: I0510 00:06:24.529607 2684 kubelet.go:408] "Attempting to sync node with API server" May 10 00:06:24.529700 kubelet[2684]: I0510 00:06:24.529624 2684 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:06:24.529700 kubelet[2684]: I0510 
00:06:24.529653 2684 kubelet.go:314] "Adding apiserver pod source" May 10 00:06:24.529700 kubelet[2684]: I0510 00:06:24.529663 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:06:24.531017 kubelet[2684]: I0510 00:06:24.530990 2684 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:06:24.531533 kubelet[2684]: I0510 00:06:24.531509 2684 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:06:24.532042 kubelet[2684]: I0510 00:06:24.532019 2684 server.go:1269] "Started kubelet" May 10 00:06:24.538255 kubelet[2684]: I0510 00:06:24.538227 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:06:24.543914 kubelet[2684]: I0510 00:06:24.543865 2684 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:06:24.547711 kubelet[2684]: I0510 00:06:24.545630 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:06:24.550630 kubelet[2684]: I0510 00:06:24.549675 2684 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:06:24.552393 kubelet[2684]: E0510 00:06:24.551186 2684 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-7b3972f1ed\" not found" May 10 00:06:24.552878 kubelet[2684]: I0510 00:06:24.552705 2684 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:06:24.553342 kubelet[2684]: I0510 00:06:24.553171 2684 reconciler.go:26] "Reconciler: start to sync state" May 10 00:06:24.557974 kubelet[2684]: I0510 00:06:24.557924 2684 server.go:460] "Adding debug handlers to kubelet server" May 10 00:06:24.558842 kubelet[2684]: I0510 00:06:24.558786 2684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:06:24.559108 kubelet[2684]: I0510 00:06:24.559092 2684 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:06:24.566281 kubelet[2684]: I0510 00:06:24.566240 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:06:24.568756 kubelet[2684]: I0510 00:06:24.568727 2684 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:06:24.568899 kubelet[2684]: I0510 00:06:24.568888 2684 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:06:24.568995 kubelet[2684]: I0510 00:06:24.568973 2684 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:06:24.569419 kubelet[2684]: E0510 00:06:24.569101 2684 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:06:24.579885 kubelet[2684]: I0510 00:06:24.579849 2684 factory.go:221] Registration of the systemd container factory successfully May 10 00:06:24.580017 kubelet[2684]: I0510 00:06:24.579954 2684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:06:24.585525 kubelet[2684]: I0510 00:06:24.585489 2684 factory.go:221] Registration of the containerd container factory successfully May 10 00:06:24.642645 kubelet[2684]: I0510 00:06:24.642334 2684 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:06:24.642645 kubelet[2684]: I0510 00:06:24.642352 2684 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:06:24.642645 kubelet[2684]: I0510 00:06:24.642372 2684 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:24.642645 kubelet[2684]: I0510 00:06:24.642565 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:06:24.642645 kubelet[2684]: I0510 00:06:24.642576 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:06:24.642645 kubelet[2684]: I0510 00:06:24.642594 2684 policy_none.go:49] "None policy: Start" May 10 00:06:24.643656 kubelet[2684]: I0510 00:06:24.643636 2684 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:06:24.644391 kubelet[2684]: I0510 00:06:24.643904 2684 state_mem.go:35] "Initializing new in-memory state store" May 10 00:06:24.644391 kubelet[2684]: I0510 00:06:24.644095 2684 state_mem.go:75] "Updated machine memory state" May 10 00:06:24.649356 kubelet[2684]: I0510 00:06:24.648945 2684 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:06:24.649356 kubelet[2684]: I0510 00:06:24.649115 2684 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:06:24.649356 kubelet[2684]: I0510 00:06:24.649125 2684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:06:24.649548 kubelet[2684]: I0510 00:06:24.649405 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:06:24.682061 kubelet[2684]: E0510 00:06:24.681974 2684 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-3-n-7b3972f1ed\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.756459 kubelet[2684]: I0510 00:06:24.754962 2684 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.771750 kubelet[2684]: I0510 00:06:24.771678 2684 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.771863 kubelet[2684]: I0510 00:06:24.771854 2684 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855081 kubelet[2684]: I0510 00:06:24.854955 2684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45b184450da576ac463053d572068de1-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-7b3972f1ed\" (UID: \"45b184450da576ac463053d572068de1\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855081 kubelet[2684]: I0510 00:06:24.855013 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45b184450da576ac463053d572068de1-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-7b3972f1ed\" (UID: \"45b184450da576ac463053d572068de1\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855081 kubelet[2684]: I0510 00:06:24.855066 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45b184450da576ac463053d572068de1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-7b3972f1ed\" (UID: \"45b184450da576ac463053d572068de1\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855293 kubelet[2684]: I0510 00:06:24.855120 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855293 kubelet[2684]: I0510 00:06:24.855156 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67274e7edf261f59f37f3ece1893177a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-7b3972f1ed\" (UID: \"67274e7edf261f59f37f3ece1893177a\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855293 kubelet[2684]: I0510 00:06:24.855194 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855293 kubelet[2684]: I0510 00:06:24.855226 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855293 kubelet[2684]: I0510 00:06:24.855253 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:24.855412 kubelet[2684]: I0510 00:06:24.855292 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e2d11838b529fb5c5e609514fc833ecc-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-7b3972f1ed\" (UID: \"e2d11838b529fb5c5e609514fc833ecc\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" May 10 00:06:25.531371 kubelet[2684]: I0510 00:06:25.531006 2684 apiserver.go:52] "Watching apiserver" May 10 00:06:25.554147 kubelet[2684]: I0510 00:06:25.554107 2684 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:06:25.663966 kubelet[2684]: I0510 00:06:25.663828 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-7b3972f1ed" podStartSLOduration=1.663809699 podStartE2EDuration="1.663809699s" podCreationTimestamp="2025-05-10 00:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:25.662367311 +0000 UTC m=+1.217853735" watchObservedRunningTime="2025-05-10 00:06:25.663809699 +0000 UTC m=+1.219296123" May 10 00:06:25.664157 kubelet[2684]: I0510 00:06:25.664000 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-7b3972f1ed" podStartSLOduration=1.663994582 podStartE2EDuration="1.663994582s" podCreationTimestamp="2025-05-10 00:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:25.645432063 +0000 UTC m=+1.200918447" watchObservedRunningTime="2025-05-10 00:06:25.663994582 +0000 UTC m=+1.219481006" May 10 00:06:25.678325 kubelet[2684]: I0510 00:06:25.675159 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-7b3972f1ed" podStartSLOduration=1.675140718 podStartE2EDuration="1.675140718s" podCreationTimestamp="2025-05-10 00:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:25.675125078 +0000 UTC m=+1.230611502" watchObservedRunningTime="2025-05-10 00:06:25.675140718 +0000 UTC m=+1.230627142" May 10 00:06:29.905051 sudo[1853]: pam_unix(sudo:session): session closed for user root May 10 00:06:30.068046 sshd[1834]: pam_unix(sshd:session): session closed for user core May 10 00:06:30.073492 systemd[1]: sshd@7-23.88.119.94:22-147.75.109.163:52770.service: Deactivated successfully. May 10 00:06:30.075785 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:06:30.076069 systemd[1]: session-7.scope: Consumed 7.203s CPU time, 151.8M memory peak, 0B memory swap peak. May 10 00:06:30.076948 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. May 10 00:06:30.078600 systemd-logind[1451]: Removed session 7. May 10 00:06:30.820072 kubelet[2684]: I0510 00:06:30.820031 2684 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:06:30.821830 containerd[1466]: time="2025-05-10T00:06:30.821673731Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 10 00:06:30.822284 kubelet[2684]: I0510 00:06:30.822174 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:06:31.921316 systemd[1]: Created slice kubepods-besteffort-poda78ca23b_2b72_4d9b_b5b2_fb47b19cda17.slice - libcontainer container kubepods-besteffort-poda78ca23b_2b72_4d9b_b5b2_fb47b19cda17.slice. May 10 00:06:32.009183 kubelet[2684]: I0510 00:06:32.008805 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a78ca23b-2b72-4d9b-b5b2-fb47b19cda17-kube-proxy\") pod \"kube-proxy-cf7dd\" (UID: \"a78ca23b-2b72-4d9b-b5b2-fb47b19cda17\") " pod="kube-system/kube-proxy-cf7dd" May 10 00:06:32.009183 kubelet[2684]: I0510 00:06:32.008853 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a78ca23b-2b72-4d9b-b5b2-fb47b19cda17-xtables-lock\") pod \"kube-proxy-cf7dd\" (UID: \"a78ca23b-2b72-4d9b-b5b2-fb47b19cda17\") " pod="kube-system/kube-proxy-cf7dd" May 10 00:06:32.009183 kubelet[2684]: I0510 00:06:32.008876 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a78ca23b-2b72-4d9b-b5b2-fb47b19cda17-lib-modules\") pod \"kube-proxy-cf7dd\" (UID: \"a78ca23b-2b72-4d9b-b5b2-fb47b19cda17\") " pod="kube-system/kube-proxy-cf7dd" May 10 00:06:32.009183 kubelet[2684]: I0510 00:06:32.008895 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfv9\" (UniqueName: \"kubernetes.io/projected/a78ca23b-2b72-4d9b-b5b2-fb47b19cda17-kube-api-access-fxfv9\") pod \"kube-proxy-cf7dd\" (UID: \"a78ca23b-2b72-4d9b-b5b2-fb47b19cda17\") " pod="kube-system/kube-proxy-cf7dd" May 10 00:06:32.036870 systemd[1]: Created slice kubepods-besteffort-pod278e9163_5f66_4864_b01c_64035c3c9c51.slice - libcontainer container kubepods-besteffort-pod278e9163_5f66_4864_b01c_64035c3c9c51.slice. May 10 00:06:32.110998 kubelet[2684]: I0510 00:06:32.109766 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/278e9163-5f66-4864-b01c-64035c3c9c51-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-5h9zw\" (UID: \"278e9163-5f66-4864-b01c-64035c3c9c51\") " pod="tigera-operator/tigera-operator-6f6897fdc5-5h9zw" May 10 00:06:32.110998 kubelet[2684]: I0510 00:06:32.109834 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgcs5\" (UniqueName: \"kubernetes.io/projected/278e9163-5f66-4864-b01c-64035c3c9c51-kube-api-access-mgcs5\") pod \"tigera-operator-6f6897fdc5-5h9zw\" (UID: \"278e9163-5f66-4864-b01c-64035c3c9c51\") " pod="tigera-operator/tigera-operator-6f6897fdc5-5h9zw" May 10 00:06:32.232017 containerd[1466]: time="2025-05-10T00:06:32.231718182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cf7dd,Uid:a78ca23b-2b72-4d9b-b5b2-fb47b19cda17,Namespace:kube-system,Attempt:0,}" May 10 00:06:32.258630 containerd[1466]: time="2025-05-10T00:06:32.257560219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:32.258630 containerd[1466]: time="2025-05-10T00:06:32.257646101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:32.258630 containerd[1466]: time="2025-05-10T00:06:32.257668661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:32.258630 containerd[1466]: time="2025-05-10T00:06:32.257827144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:32.278901 systemd[1]: Started cri-containerd-c1aaf4319584200e65cbfa007bd555e6e26eb254d53f6ed0da0d012426b949cc.scope - libcontainer container c1aaf4319584200e65cbfa007bd555e6e26eb254d53f6ed0da0d012426b949cc. May 10 00:06:32.305069 containerd[1466]: time="2025-05-10T00:06:32.305028742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cf7dd,Uid:a78ca23b-2b72-4d9b-b5b2-fb47b19cda17,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1aaf4319584200e65cbfa007bd555e6e26eb254d53f6ed0da0d012426b949cc\"" May 10 00:06:32.309938 containerd[1466]: time="2025-05-10T00:06:32.309888825Z" level=info msg="CreateContainer within sandbox \"c1aaf4319584200e65cbfa007bd555e6e26eb254d53f6ed0da0d012426b949cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:06:32.339720 containerd[1466]: time="2025-05-10T00:06:32.338462428Z" level=info msg="CreateContainer within sandbox \"c1aaf4319584200e65cbfa007bd555e6e26eb254d53f6ed0da0d012426b949cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1b38c5f039b961b05e9b3d9b505f18c7606b9b09381cf396c91a69b13203071\"" May 10 00:06:32.340079 containerd[1466]: time="2025-05-10T00:06:32.340052615Z" level=info msg="StartContainer for \"b1b38c5f039b961b05e9b3d9b505f18c7606b9b09381cf396c91a69b13203071\"" May 10 00:06:32.340609 containerd[1466]: time="2025-05-10T00:06:32.340526503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-5h9zw,Uid:278e9163-5f66-4864-b01c-64035c3c9c51,Namespace:tigera-operator,Attempt:0,}" May 10 00:06:32.395943 systemd[1]: Started cri-containerd-b1b38c5f039b961b05e9b3d9b505f18c7606b9b09381cf396c91a69b13203071.scope - libcontainer container b1b38c5f039b961b05e9b3d9b505f18c7606b9b09381cf396c91a69b13203071. May 10 00:06:32.404452 containerd[1466]: time="2025-05-10T00:06:32.404048858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:32.404452 containerd[1466]: time="2025-05-10T00:06:32.404118259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:32.404452 containerd[1466]: time="2025-05-10T00:06:32.404133499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:32.404452 containerd[1466]: time="2025-05-10T00:06:32.404229661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:32.432918 systemd[1]: Started cri-containerd-7376bf3f762ea72e4f59d6b26320f11f9180ce1d201103f4441d1ca5e9426c31.scope - libcontainer container 7376bf3f762ea72e4f59d6b26320f11f9180ce1d201103f4441d1ca5e9426c31. 
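
[Annotation] With the tigera-operator sandbox started, the next step in the log (a few lines below) is pulling its image. A sketch of the equivalent pull through the containerd Go client, against the same daemon; "k8s.io" is the namespace the CRI plugin keeps its images and containers in, and the socket path is the assumed default rather than something this log states:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference as the PullImage request below; WithPullUnpack also
	// unpacks the layers into a snapshot so a container can be created.
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.7", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```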
May 10 00:06:32.461335 containerd[1466]: time="2025-05-10T00:06:32.461213345Z" level=info msg="StartContainer for \"b1b38c5f039b961b05e9b3d9b505f18c7606b9b09381cf396c91a69b13203071\" returns successfully" May 10 00:06:32.488636 containerd[1466]: time="2025-05-10T00:06:32.488340084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-5h9zw,Uid:278e9163-5f66-4864-b01c-64035c3c9c51,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7376bf3f762ea72e4f59d6b26320f11f9180ce1d201103f4441d1ca5e9426c31\"" May 10 00:06:32.493648 containerd[1466]: time="2025-05-10T00:06:32.493356169Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 10 00:06:32.654011 kubelet[2684]: I0510 00:06:32.653495 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cf7dd" podStartSLOduration=1.653476238 podStartE2EDuration="1.653476238s" podCreationTimestamp="2025-05-10 00:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:32.653052991 +0000 UTC m=+8.208539455" watchObservedRunningTime="2025-05-10 00:06:32.653476238 +0000 UTC m=+8.208962662" May 10 00:06:34.175243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655378516.mount: Deactivated successfully. May 10 00:06:35.167736 containerd[1466]: time="2025-05-10T00:06:35.167396589Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:35.168717 containerd[1466]: time="2025-05-10T00:06:35.168457446Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 10 00:06:35.169739 containerd[1466]: time="2025-05-10T00:06:35.169662745Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:35.173116 containerd[1466]: time="2025-05-10T00:06:35.172792195Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:35.173659 containerd[1466]: time="2025-05-10T00:06:35.173620088Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.680222359s" May 10 00:06:35.173659 containerd[1466]: time="2025-05-10T00:06:35.173654129Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 10 00:06:35.177644 containerd[1466]: time="2025-05-10T00:06:35.177583472Z" level=info msg="CreateContainer within sandbox \"7376bf3f762ea72e4f59d6b26320f11f9180ce1d201103f4441d1ca5e9426c31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 10 00:06:35.197726 containerd[1466]: time="2025-05-10T00:06:35.197660874Z" level=info msg="CreateContainer within sandbox \"7376bf3f762ea72e4f59d6b26320f11f9180ce1d201103f4441d1ca5e9426c31\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"8868a4aa07d4778385de7e9db227acf0a941ba58d667b5b7541bc5d26fae0f51\"" May 10 00:06:35.198981 containerd[1466]: time="2025-05-10T00:06:35.198951254Z" level=info msg="StartContainer for \"8868a4aa07d4778385de7e9db227acf0a941ba58d667b5b7541bc5d26fae0f51\"" May 10 00:06:35.230960 systemd[1]: Started cri-containerd-8868a4aa07d4778385de7e9db227acf0a941ba58d667b5b7541bc5d26fae0f51.scope - libcontainer container 8868a4aa07d4778385de7e9db227acf0a941ba58d667b5b7541bc5d26fae0f51. May 10 00:06:35.260004 containerd[1466]: time="2025-05-10T00:06:35.259902271Z" level=info msg="StartContainer for \"8868a4aa07d4778385de7e9db227acf0a941ba58d667b5b7541bc5d26fae0f51\" returns successfully" May 10 00:06:39.075843 kubelet[2684]: I0510 00:06:39.074774 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-5h9zw" podStartSLOduration=5.392483607 podStartE2EDuration="8.074747719s" podCreationTimestamp="2025-05-10 00:06:31 +0000 UTC" firstStartedPulling="2025-05-10 00:06:32.492786879 +0000 UTC m=+8.048273303" lastFinishedPulling="2025-05-10 00:06:35.175051031 +0000 UTC m=+10.730537415" observedRunningTime="2025-05-10 00:06:35.660404849 +0000 UTC m=+11.215891273" watchObservedRunningTime="2025-05-10 00:06:39.074747719 +0000 UTC m=+14.630234143" May 10 00:06:39.083534 systemd[1]: Created slice kubepods-besteffort-pod27f4828d_9eed_442b_8fce_dcf005ba20c3.slice - libcontainer container kubepods-besteffort-pod27f4828d_9eed_442b_8fce_dcf005ba20c3.slice. May 10 00:06:39.152858 kubelet[2684]: I0510 00:06:39.152806 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4fv\" (UniqueName: \"kubernetes.io/projected/27f4828d-9eed-442b-8fce-dcf005ba20c3-kube-api-access-8h4fv\") pod \"calico-typha-84bf67cdc9-jwndd\" (UID: \"27f4828d-9eed-442b-8fce-dcf005ba20c3\") " pod="calico-system/calico-typha-84bf67cdc9-jwndd" May 10 00:06:39.153060 kubelet[2684]: I0510 00:06:39.152904 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27f4828d-9eed-442b-8fce-dcf005ba20c3-tigera-ca-bundle\") pod \"calico-typha-84bf67cdc9-jwndd\" (UID: \"27f4828d-9eed-442b-8fce-dcf005ba20c3\") " pod="calico-system/calico-typha-84bf67cdc9-jwndd" May 10 00:06:39.153060 kubelet[2684]: I0510 00:06:39.152939 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/27f4828d-9eed-442b-8fce-dcf005ba20c3-typha-certs\") pod \"calico-typha-84bf67cdc9-jwndd\" (UID: \"27f4828d-9eed-442b-8fce-dcf005ba20c3\") " pod="calico-system/calico-typha-84bf67cdc9-jwndd" May 10 00:06:39.195484 systemd[1]: Created slice kubepods-besteffort-podedcafc35_61fa_4831_ab53_05e27d774797.slice - libcontainer container kubepods-besteffort-podedcafc35_61fa_4831_ab53_05e27d774797.slice. 
May 10 00:06:39.253184 kubelet[2684]: I0510 00:06:39.253119 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-policysync\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253184 kubelet[2684]: I0510 00:06:39.253182 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edcafc35-61fa-4831-ab53-05e27d774797-tigera-ca-bundle\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253351 kubelet[2684]: I0510 00:06:39.253199 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-var-lib-calico\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253351 kubelet[2684]: I0510 00:06:39.253217 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-cni-bin-dir\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253351 kubelet[2684]: I0510 00:06:39.253234 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-cni-log-dir\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253351 kubelet[2684]: I0510 00:06:39.253251 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-flexvol-driver-host\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253351 kubelet[2684]: I0510 00:06:39.253268 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-xtables-lock\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253466 kubelet[2684]: I0510 00:06:39.253285 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/edcafc35-61fa-4831-ab53-05e27d774797-node-certs\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253466 kubelet[2684]: I0510 00:06:39.253341 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-lib-modules\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253466 kubelet[2684]: I0510 00:06:39.253358 2684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-cni-net-dir\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253466 kubelet[2684]: I0510 00:06:39.253374 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqf5z\" (UniqueName: \"kubernetes.io/projected/edcafc35-61fa-4831-ab53-05e27d774797-kube-api-access-zqf5z\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.253466 kubelet[2684]: I0510 00:06:39.253407 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/edcafc35-61fa-4831-ab53-05e27d774797-var-run-calico\") pod \"calico-node-c54cw\" (UID: \"edcafc35-61fa-4831-ab53-05e27d774797\") " pod="calico-system/calico-node-c54cw" May 10 00:06:39.358971 kubelet[2684]: E0510 00:06:39.358784 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.358971 kubelet[2684]: W0510 00:06:39.358811 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.358971 kubelet[2684]: E0510 00:06:39.358843 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.362936 kubelet[2684]: E0510 00:06:39.362885 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.362936 kubelet[2684]: W0510 00:06:39.362921 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.363078 kubelet[2684]: E0510 00:06:39.362950 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.364524 kubelet[2684]: E0510 00:06:39.363815 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.364871 kubelet[2684]: W0510 00:06:39.364661 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.364871 kubelet[2684]: E0510 00:06:39.364770 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:39.365232 kubelet[2684]: E0510 00:06:39.365160 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.365232 kubelet[2684]: W0510 00:06:39.365174 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.365385 kubelet[2684]: E0510 00:06:39.365226 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.365585 kubelet[2684]: E0510 00:06:39.365566 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.365814 kubelet[2684]: W0510 00:06:39.365661 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.366009 kubelet[2684]: E0510 00:06:39.365995 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.366122 kubelet[2684]: W0510 00:06:39.366066 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.366122 kubelet[2684]: E0510 00:06:39.366082 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.366212 kubelet[2684]: E0510 00:06:39.365688 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.382205 kubelet[2684]: E0510 00:06:39.381935 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kkznf" podUID="19603e28-9cab-4129-9122-e17f2cc348a8" May 10 00:06:39.390464 containerd[1466]: time="2025-05-10T00:06:39.390382480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84bf67cdc9-jwndd,Uid:27f4828d-9eed-442b-8fce-dcf005ba20c3,Namespace:calico-system,Attempt:0,}" May 10 00:06:39.421370 containerd[1466]: time="2025-05-10T00:06:39.421220261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:39.421370 containerd[1466]: time="2025-05-10T00:06:39.421287022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:39.421370 containerd[1466]: time="2025-05-10T00:06:39.421297902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:39.427785 containerd[1466]: time="2025-05-10T00:06:39.421441264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:39.446337 kubelet[2684]: E0510 00:06:39.446300 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.446803 kubelet[2684]: W0510 00:06:39.446724 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.446803 kubelet[2684]: E0510 00:06:39.446757 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.448316 systemd[1]: Started cri-containerd-e8b586c737e1f44df61b41e87d6f067bae3ca51a9c0013aaea8db4efd7a5d928.scope - libcontainer container e8b586c737e1f44df61b41e87d6f067bae3ca51a9c0013aaea8db4efd7a5d928. May 10 00:06:39.450724 kubelet[2684]: E0510 00:06:39.450349 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.450929 kubelet[2684]: W0510 00:06:39.450836 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.450929 kubelet[2684]: E0510 00:06:39.450868 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.452135 kubelet[2684]: E0510 00:06:39.452030 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.452135 kubelet[2684]: W0510 00:06:39.452054 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.452135 kubelet[2684]: E0510 00:06:39.452082 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.453717 kubelet[2684]: E0510 00:06:39.452877 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.453717 kubelet[2684]: W0510 00:06:39.452903 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.453717 kubelet[2684]: E0510 00:06:39.452917 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:39.454200 kubelet[2684]: E0510 00:06:39.454105 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.454200 kubelet[2684]: W0510 00:06:39.454120 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.454200 kubelet[2684]: E0510 00:06:39.454135 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.455893 kubelet[2684]: E0510 00:06:39.455783 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.455893 kubelet[2684]: W0510 00:06:39.455798 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.455893 kubelet[2684]: E0510 00:06:39.455812 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.456160 kubelet[2684]: E0510 00:06:39.456076 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.456160 kubelet[2684]: W0510 00:06:39.456087 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.456160 kubelet[2684]: E0510 00:06:39.456098 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.456517 kubelet[2684]: E0510 00:06:39.456443 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.456517 kubelet[2684]: W0510 00:06:39.456457 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.456517 kubelet[2684]: E0510 00:06:39.456468 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.456985 kubelet[2684]: E0510 00:06:39.456881 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.456985 kubelet[2684]: W0510 00:06:39.456894 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.456985 kubelet[2684]: E0510 00:06:39.456905 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:39.457853 kubelet[2684]: E0510 00:06:39.457783 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.457853 kubelet[2684]: W0510 00:06:39.457797 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.457853 kubelet[2684]: E0510 00:06:39.457808 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.458279 kubelet[2684]: E0510 00:06:39.458175 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.458279 kubelet[2684]: W0510 00:06:39.458187 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.458279 kubelet[2684]: E0510 00:06:39.458197 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.459833 kubelet[2684]: E0510 00:06:39.459745 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.459833 kubelet[2684]: W0510 00:06:39.459762 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.459833 kubelet[2684]: E0510 00:06:39.459773 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.460910 kubelet[2684]: E0510 00:06:39.460790 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.460910 kubelet[2684]: W0510 00:06:39.460807 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.460910 kubelet[2684]: E0510 00:06:39.460819 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.461456 kubelet[2684]: E0510 00:06:39.461352 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.461456 kubelet[2684]: W0510 00:06:39.461363 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.461456 kubelet[2684]: E0510 00:06:39.461374 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:39.461751 kubelet[2684]: E0510 00:06:39.461668 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.461751 kubelet[2684]: W0510 00:06:39.461683 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.462406 kubelet[2684]: E0510 00:06:39.461736 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.463812 kubelet[2684]: E0510 00:06:39.463793 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.463812 kubelet[2684]: W0510 00:06:39.463829 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.463812 kubelet[2684]: E0510 00:06:39.463844 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.464589 kubelet[2684]: E0510 00:06:39.464571 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.464851 kubelet[2684]: W0510 00:06:39.464704 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.464851 kubelet[2684]: E0510 00:06:39.464723 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.465309 kubelet[2684]: E0510 00:06:39.464984 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.465494 kubelet[2684]: W0510 00:06:39.465385 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.465494 kubelet[2684]: E0510 00:06:39.465406 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:39.465966 kubelet[2684]: E0510 00:06:39.465776 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:39.465966 kubelet[2684]: W0510 00:06:39.465789 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:39.465966 kubelet[2684]: E0510 00:06:39.465799 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
May 10 00:06:39.468460 kubelet[2684]: I0510 00:06:39.468403 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19603e28-9cab-4129-9122-e17f2cc348a8-kubelet-dir\") pod \"csi-node-driver-kkznf\" (UID: \"19603e28-9cab-4129-9122-e17f2cc348a8\") " pod="calico-system/csi-node-driver-kkznf"
May 10 00:06:39.470867 kubelet[2684]: I0510 00:06:39.470836 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/19603e28-9cab-4129-9122-e17f2cc348a8-varrun\") pod \"csi-node-driver-kkznf\" (UID: \"19603e28-9cab-4129-9122-e17f2cc348a8\") " pod="calico-system/csi-node-driver-kkznf"
May 10 00:06:39.471950 kubelet[2684]: I0510 00:06:39.471909 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19603e28-9cab-4129-9122-e17f2cc348a8-socket-dir\") pod \"csi-node-driver-kkznf\" (UID: \"19603e28-9cab-4129-9122-e17f2cc348a8\") " pod="calico-system/csi-node-driver-kkznf"
May 10 00:06:39.473054 kubelet[2684]: I0510 00:06:39.472953 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19603e28-9cab-4129-9122-e17f2cc348a8-registration-dir\") pod \"csi-node-driver-kkznf\" (UID: \"19603e28-9cab-4129-9122-e17f2cc348a8\") " pod="calico-system/csi-node-driver-kkznf"
May 10 00:06:39.473440 kubelet[2684]: I0510 00:06:39.473424 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9v4j\" (UniqueName: \"kubernetes.io/projected/19603e28-9cab-4129-9122-e17f2cc348a8-kube-api-access-j9v4j\") pod \"csi-node-driver-kkznf\" (UID: \"19603e28-9cab-4129-9122-e17f2cc348a8\") " pod="calico-system/csi-node-driver-kkznf"
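The five VerifyControllerAttachedVolume entries above mirror the volumes declared by the csi-node-driver-kkznf pod: four kubernetes.io/host-path sources plus one kubernetes.io/projected service-account token (kube-api-access-j9v4j). Below is a sketch of equivalent declarations using the k8s.io/api/core/v1 types; the volume names and source kinds come from the log, while the concrete host paths are illustrative assumptions, not the real calico-system manifest values:

```go
// volumes_sketch.go — illustrative reconstruction of the csi-node-driver
// pod's volume declarations. Names and source types are from the log;
// the host paths are assumptions for illustration only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a kubernetes.io/host-path volume declaration.
func hostPathVolume(name, path string) corev1.Volume {
	t := corev1.HostPathDirectory
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &t},
		},
	}
}

func main() {
	volumes := []corev1.Volume{
		hostPathVolume("kubelet-dir", "/var/lib/kubelet"),                        // assumed path
		hostPathVolume("varrun", "/var/run"),                                     // assumed path
		hostPathVolume("socket-dir", "/var/run/csi"),                             // assumed path
		hostPathVolume("registration-dir", "/var/lib/kubelet/plugins_registry"), // assumed path
		{
			// kube-api-access-j9v4j is the projected service-account token
			// volume the kubelet injects automatically (sources elided).
			Name: "kube-api-access-j9v4j",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
```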
May 10 00:06:39.501952 containerd[1466]: time="2025-05-10T00:06:39.501789906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c54cw,Uid:edcafc35-61fa-4831-ab53-05e27d774797,Namespace:calico-system,Attempt:0,}"
May 10 00:06:39.544733 containerd[1466]: time="2025-05-10T00:06:39.544585186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:06:39.544733 containerd[1466]: time="2025-05-10T00:06:39.544680107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:06:39.546211 containerd[1466]: time="2025-05-10T00:06:39.546156929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:06:39.546388 containerd[1466]: time="2025-05-10T00:06:39.546353372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:06:39.573892 systemd[1]: Started cri-containerd-54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae.scope - libcontainer container 54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae.
May 10 00:06:39.595225 containerd[1466]: time="2025-05-10T00:06:39.594447252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84bf67cdc9-jwndd,Uid:27f4828d-9eed-442b-8fce-dcf005ba20c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8b586c737e1f44df61b41e87d6f067bae3ca51a9c0013aaea8db4efd7a5d928\""
May 10 00:06:39.602911 containerd[1466]: time="2025-05-10T00:06:39.602787896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 10 00:06:39.645828 containerd[1466]: time="2025-05-10T00:06:39.645679578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c54cw,Uid:edcafc35-61fa-4831-ab53-05e27d774797,Namespace:calico-system,Attempt:0,} returns sandbox id \"54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae\""
May 10 00:06:41.395250 containerd[1466]: time="2025-05-10T00:06:41.395174784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:41.396891 containerd[1466]: time="2025-05-10T00:06:41.396844248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571"
May 10 00:06:41.398057 containerd[1466]: time="2025-05-10T00:06:41.398009305Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:41.401250 containerd[1466]: time="2025-05-10T00:06:41.401160271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:41.402366 containerd[1466]: time="2025-05-10T00:06:41.401832360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.799007783s"
May 10 00:06:41.402366 containerd[1466]: time="2025-05-10T00:06:41.401873361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\""
May 10 00:06:41.403690 containerd[1466]: time="2025-05-10T00:06:41.403512905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 10 00:06:41.421926 containerd[1466]: time="2025-05-10T00:06:41.421886451Z" level=info msg="CreateContainer within sandbox \"e8b586c737e1f44df61b41e87d6f067bae3ca51a9c0013aaea8db4efd7a5d928\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 10 00:06:41.441533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593468312.mount: Deactivated successfully.
May 10 00:06:41.444250 containerd[1466]: time="2025-05-10T00:06:41.444187133Z" level=info msg="CreateContainer within sandbox \"e8b586c737e1f44df61b41e87d6f067bae3ca51a9c0013aaea8db4efd7a5d928\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5857acc8568b24e48bf4dd1d4cda0d890c8af31f34ed3fd9b050a3cbcca179ec\""
May 10 00:06:41.445364 containerd[1466]: time="2025-05-10T00:06:41.445288989Z" level=info msg="StartContainer for \"5857acc8568b24e48bf4dd1d4cda0d890c8af31f34ed3fd9b050a3cbcca179ec\""
May 10 00:06:41.484946 systemd[1]: Started cri-containerd-5857acc8568b24e48bf4dd1d4cda0d890c8af31f34ed3fd9b050a3cbcca179ec.scope - libcontainer container 5857acc8568b24e48bf4dd1d4cda0d890c8af31f34ed3fd9b050a3cbcca179ec.
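The "in 1.799007783s" figure in the Pulled entry above can be reproduced to within a few tens of microseconds from the surrounding log timestamps, the small residual being log-emission latency around containerd's internal measurement:

```go
// pull_duration.go — approximates the pull time reported by containerd by
// subtracting the PullImage log timestamp from the Pulled log timestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2025-05-10T00:06:39.602787896Z")  // PullImage entry
	finished, _ := time.Parse(time.RFC3339Nano, "2025-05-10T00:06:41.401832360Z") // Pulled entry
	fmt.Println(finished.Sub(started))                                            // ≈ 1.799044464s
}
```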
May 10 00:06:41.524983 containerd[1466]: time="2025-05-10T00:06:41.524932662Z" level=info msg="StartContainer for \"5857acc8568b24e48bf4dd1d4cda0d890c8af31f34ed3fd9b050a3cbcca179ec\" returns successfully"
May 10 00:06:41.570740 kubelet[2684]: E0510 00:06:41.570636 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kkznf" podUID="19603e28-9cab-4129-9122-e17f2cc348a8"
May 10 00:06:41.680394 kubelet[2684]: I0510 00:06:41.680172 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84bf67cdc9-jwndd" podStartSLOduration=0.879639702 podStartE2EDuration="2.680146067s" podCreationTimestamp="2025-05-10 00:06:39 +0000 UTC" firstStartedPulling="2025-05-10 00:06:39.602095246 +0000 UTC m=+15.157581670" lastFinishedPulling="2025-05-10 00:06:41.402601651 +0000 UTC m=+16.958088035" observedRunningTime="2025-05-10 00:06:41.679545379 +0000 UTC m=+17.235031803" watchObservedRunningTime="2025-05-10 00:06:41.680146067 +0000 UTC m=+17.235632531"
May 10 00:06:41.685216 kubelet[2684]: E0510 00:06:41.685097 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:41.685216 kubelet[2684]: W0510 00:06:41.685122 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:41.685216 kubelet[2684]: E0510 00:06:41.685144 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
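The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration equals podStartE2EDuration minus the image-pull window (firstStartedPulling to lastFinishedPulling), which the monotonic m=+ offsets confirm to the nanosecond. A sketch of the arithmetic, using the durations taken from the log:

```go
// slo_math.go — reproduces podStartSLOduration from the latency-tracker
// entry: end-to-end startup duration minus the image-pull window, computed
// here from the monotonic (m=+) offsets in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 2680146067 * time.Nanosecond        // podStartE2EDuration = 2.680146067s
	firstPull := 15157581670 * time.Nanosecond // firstStartedPulling at m=+15.157581670
	lastPull := 16958088035 * time.Nanosecond  // lastFinishedPulling at m=+16.958088035

	slo := e2e - (lastPull - firstPull)
	fmt.Println(slo.Seconds()) // 0.879639702, matching podStartSLOduration
}
```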
Error: unexpected end of JSON input" May 10 00:06:42.669263 kubelet[2684]: I0510 00:06:42.669171 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:06:42.698402 kubelet[2684]: E0510 00:06:42.698218 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.698402 kubelet[2684]: W0510 00:06:42.698273 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.698402 kubelet[2684]: E0510 00:06:42.698296 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.699395 kubelet[2684]: E0510 00:06:42.699023 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.699395 kubelet[2684]: W0510 00:06:42.699038 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.699395 kubelet[2684]: E0510 00:06:42.699079 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.699931 kubelet[2684]: E0510 00:06:42.699744 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.699931 kubelet[2684]: W0510 00:06:42.699780 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.699931 kubelet[2684]: E0510 00:06:42.699793 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.700410 kubelet[2684]: E0510 00:06:42.700222 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.700410 kubelet[2684]: W0510 00:06:42.700239 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.700410 kubelet[2684]: E0510 00:06:42.700251 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.701110 kubelet[2684]: E0510 00:06:42.700848 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.701110 kubelet[2684]: W0510 00:06:42.700861 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.701110 kubelet[2684]: E0510 00:06:42.700889 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:42.701562 kubelet[2684]: E0510 00:06:42.701418 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.701562 kubelet[2684]: W0510 00:06:42.701429 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.702193 kubelet[2684]: E0510 00:06:42.701440 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.702591 kubelet[2684]: E0510 00:06:42.702312 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.702591 kubelet[2684]: W0510 00:06:42.702326 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.702591 kubelet[2684]: E0510 00:06:42.702347 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.703223 kubelet[2684]: E0510 00:06:42.703098 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.703223 kubelet[2684]: W0510 00:06:42.703114 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.703653 kubelet[2684]: E0510 00:06:42.703443 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.704190 kubelet[2684]: E0510 00:06:42.704003 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.704190 kubelet[2684]: W0510 00:06:42.704077 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.704190 kubelet[2684]: E0510 00:06:42.704091 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.705363 kubelet[2684]: E0510 00:06:42.705104 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.705363 kubelet[2684]: W0510 00:06:42.705119 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.705363 kubelet[2684]: E0510 00:06:42.705130 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:42.705850 kubelet[2684]: E0510 00:06:42.705783 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.706126 kubelet[2684]: W0510 00:06:42.705894 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.706126 kubelet[2684]: E0510 00:06:42.705909 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.706555 kubelet[2684]: E0510 00:06:42.706386 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.706766 kubelet[2684]: W0510 00:06:42.706534 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.706766 kubelet[2684]: E0510 00:06:42.706656 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.707777 kubelet[2684]: E0510 00:06:42.707600 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.707777 kubelet[2684]: W0510 00:06:42.707624 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.707777 kubelet[2684]: E0510 00:06:42.707651 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.708890 kubelet[2684]: E0510 00:06:42.708614 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.708890 kubelet[2684]: W0510 00:06:42.708645 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.708890 kubelet[2684]: E0510 00:06:42.708658 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.710164 kubelet[2684]: E0510 00:06:42.709978 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.710164 kubelet[2684]: W0510 00:06:42.709996 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.710164 kubelet[2684]: E0510 00:06:42.710008 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:42.711089 kubelet[2684]: E0510 00:06:42.710753 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.711089 kubelet[2684]: W0510 00:06:42.710831 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.711089 kubelet[2684]: E0510 00:06:42.710875 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.711522 kubelet[2684]: E0510 00:06:42.711491 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.711522 kubelet[2684]: W0510 00:06:42.711507 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.712085 kubelet[2684]: E0510 00:06:42.711656 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.712308 kubelet[2684]: E0510 00:06:42.712146 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.712308 kubelet[2684]: W0510 00:06:42.712158 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.712308 kubelet[2684]: E0510 00:06:42.712169 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.713246 kubelet[2684]: E0510 00:06:42.713059 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.713246 kubelet[2684]: W0510 00:06:42.713071 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.713246 kubelet[2684]: E0510 00:06:42.713086 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.714138 kubelet[2684]: E0510 00:06:42.713971 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.714138 kubelet[2684]: W0510 00:06:42.713985 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.714577 kubelet[2684]: E0510 00:06:42.714006 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:42.714958 kubelet[2684]: E0510 00:06:42.714547 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.714958 kubelet[2684]: W0510 00:06:42.714827 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.714958 kubelet[2684]: E0510 00:06:42.714991 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.715830 kubelet[2684]: E0510 00:06:42.715784 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.715830 kubelet[2684]: W0510 00:06:42.715800 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.715830 kubelet[2684]: E0510 00:06:42.715915 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.717389 kubelet[2684]: E0510 00:06:42.716993 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.717389 kubelet[2684]: W0510 00:06:42.717009 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.717389 kubelet[2684]: E0510 00:06:42.717322 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.718722 kubelet[2684]: E0510 00:06:42.718530 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.719080 kubelet[2684]: W0510 00:06:42.718549 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.719080 kubelet[2684]: E0510 00:06:42.718862 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.720674 kubelet[2684]: E0510 00:06:42.720303 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.720674 kubelet[2684]: W0510 00:06:42.720330 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.720674 kubelet[2684]: E0510 00:06:42.720435 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:42.721280 kubelet[2684]: E0510 00:06:42.721186 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.721280 kubelet[2684]: W0510 00:06:42.721212 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.721374 kubelet[2684]: E0510 00:06:42.721307 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.721806 kubelet[2684]: E0510 00:06:42.721478 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.721806 kubelet[2684]: W0510 00:06:42.721494 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.722552 kubelet[2684]: E0510 00:06:42.721940 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.722552 kubelet[2684]: W0510 00:06:42.721960 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.722552 kubelet[2684]: E0510 00:06:42.721983 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.722552 kubelet[2684]: E0510 00:06:42.722215 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.722552 kubelet[2684]: W0510 00:06:42.722225 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.722552 kubelet[2684]: E0510 00:06:42.722237 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.722552 kubelet[2684]: E0510 00:06:42.722269 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.722552 kubelet[2684]: E0510 00:06:42.722521 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.722552 kubelet[2684]: W0510 00:06:42.722543 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.722841 kubelet[2684]: E0510 00:06:42.722564 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:42.723763 kubelet[2684]: E0510 00:06:42.723166 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.723763 kubelet[2684]: W0510 00:06:42.723191 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.723763 kubelet[2684]: E0510 00:06:42.723588 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.725563 kubelet[2684]: E0510 00:06:42.725198 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.725563 kubelet[2684]: W0510 00:06:42.725222 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.725563 kubelet[2684]: E0510 00:06:42.725241 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:42.731380 kubelet[2684]: E0510 00:06:42.731351 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:42.731728 kubelet[2684]: W0510 00:06:42.731513 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:42.731728 kubelet[2684]: E0510 00:06:42.731539 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:42.749997 containerd[1466]: time="2025-05-10T00:06:42.749819612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:42.751821 containerd[1466]: time="2025-05-10T00:06:42.751762160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 10 00:06:42.754833 containerd[1466]: time="2025-05-10T00:06:42.754153394Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:42.767059 containerd[1466]: time="2025-05-10T00:06:42.767006737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:42.771578 containerd[1466]: time="2025-05-10T00:06:42.771532761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.367578051s" May 10 00:06:42.771890 containerd[1466]: time="2025-05-10T00:06:42.771869646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 10 00:06:42.775966 containerd[1466]: time="2025-05-10T00:06:42.775928864Z" level=info msg="CreateContainer within sandbox \"54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 00:06:42.793768 containerd[1466]: time="2025-05-10T00:06:42.793602636Z" level=info msg="CreateContainer within sandbox \"54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81\"" May 10 00:06:42.796560 containerd[1466]: time="2025-05-10T00:06:42.794910734Z" level=info msg="StartContainer for \"d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81\"" May 10 00:06:42.832919 systemd[1]: Started cri-containerd-d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81.scope - libcontainer container d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81. May 10 00:06:42.869044 containerd[1466]: time="2025-05-10T00:06:42.868397180Z" level=info msg="StartContainer for \"d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81\" returns successfully" May 10 00:06:42.909705 systemd[1]: cri-containerd-d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81.scope: Deactivated successfully. May 10 00:06:42.947191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81-rootfs.mount: Deactivated successfully. 
May 10 00:06:43.063748 containerd[1466]: time="2025-05-10T00:06:43.063645146Z" level=info msg="shim disconnected" id=d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81 namespace=k8s.io May 10 00:06:43.063748 containerd[1466]: time="2025-05-10T00:06:43.063725308Z" level=warning msg="cleaning up after shim disconnected" id=d51d2800121cce914ef8d8944504e7155eca7f803443f43e240a2657b8f03c81 namespace=k8s.io May 10 00:06:43.063748 containerd[1466]: time="2025-05-10T00:06:43.063738428Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:06:43.569719 kubelet[2684]: E0510 00:06:43.569606 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kkznf" podUID="19603e28-9cab-4129-9122-e17f2cc348a8" May 10 00:06:43.677519 containerd[1466]: time="2025-05-10T00:06:43.677297786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 00:06:45.570504 kubelet[2684]: E0510 00:06:45.570416 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kkznf" podUID="19603e28-9cab-4129-9122-e17f2cc348a8" May 10 00:06:47.570614 kubelet[2684]: E0510 00:06:47.570249 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kkznf" podUID="19603e28-9cab-4129-9122-e17f2cc348a8" May 10 00:06:48.042349 containerd[1466]: time="2025-05-10T00:06:48.042288490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:48.043714 containerd[1466]: time="2025-05-10T00:06:48.043641668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 10 00:06:48.044389 containerd[1466]: time="2025-05-10T00:06:48.044185315Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:48.047414 containerd[1466]: time="2025-05-10T00:06:48.046583426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:48.047414 containerd[1466]: time="2025-05-10T00:06:48.047295915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.369956888s" May 10 00:06:48.047414 containerd[1466]: time="2025-05-10T00:06:48.047325916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 10 00:06:48.050129 containerd[1466]: time="2025-05-10T00:06:48.050095112Z" 
level=info msg="CreateContainer within sandbox \"54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 00:06:48.074016 containerd[1466]: time="2025-05-10T00:06:48.073964222Z" level=info msg="CreateContainer within sandbox \"54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7\"" May 10 00:06:48.074677 containerd[1466]: time="2025-05-10T00:06:48.074627151Z" level=info msg="StartContainer for \"161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7\"" May 10 00:06:48.122921 systemd[1]: Started cri-containerd-161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7.scope - libcontainer container 161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7. May 10 00:06:48.155367 containerd[1466]: time="2025-05-10T00:06:48.155266158Z" level=info msg="StartContainer for \"161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7\" returns successfully" May 10 00:06:48.696038 containerd[1466]: time="2025-05-10T00:06:48.695970864Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:06:48.699859 systemd[1]: cri-containerd-161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7.scope: Deactivated successfully. May 10 00:06:48.724970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7-rootfs.mount: Deactivated successfully. May 10 00:06:48.800077 kubelet[2684]: I0510 00:06:48.799853 2684 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 10 00:06:48.823303 containerd[1466]: time="2025-05-10T00:06:48.823104876Z" level=info msg="shim disconnected" id=161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7 namespace=k8s.io May 10 00:06:48.823303 containerd[1466]: time="2025-05-10T00:06:48.823259838Z" level=warning msg="cleaning up after shim disconnected" id=161c44d1b6b7efec11505fe6c57ea86fef3c74aa8db0e0617764b86d03263de7 namespace=k8s.io May 10 00:06:48.823303 containerd[1466]: time="2025-05-10T00:06:48.823270598Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:06:48.868850 systemd[1]: Created slice kubepods-burstable-pod24f009df_8c19_4b55_8fef_4f458dd12e97.slice - libcontainer container kubepods-burstable-pod24f009df_8c19_4b55_8fef_4f458dd12e97.slice. 
May 10 00:06:48.874924 kubelet[2684]: W0510 00:06:48.872466 2684 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-3-n-7b3972f1ed" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-3-n-7b3972f1ed' and this object May 10 00:06:48.874924 kubelet[2684]: E0510 00:06:48.872527 2684 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-3-n-7b3972f1ed\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-3-n-7b3972f1ed' and this object" logger="UnhandledError" May 10 00:06:48.886851 systemd[1]: Created slice kubepods-burstable-podc1194d46_0896_4b5b_ac89_83be1ea63c3b.slice - libcontainer container kubepods-burstable-podc1194d46_0896_4b5b_ac89_83be1ea63c3b.slice. May 10 00:06:48.896877 systemd[1]: Created slice kubepods-besteffort-pod64bfd5f9_fcc7_46ea_a3e0_fb75396a3f30.slice - libcontainer container kubepods-besteffort-pod64bfd5f9_fcc7_46ea_a3e0_fb75396a3f30.slice. May 10 00:06:48.907730 systemd[1]: Created slice kubepods-besteffort-pod82dbd33f_4b0a_4dd0_8dd3_0c33f1f824b2.slice - libcontainer container kubepods-besteffort-pod82dbd33f_4b0a_4dd0_8dd3_0c33f1f824b2.slice. May 10 00:06:48.917207 systemd[1]: Created slice kubepods-besteffort-poda7e097d4_cfcc_4137_9338_932bad26e148.slice - libcontainer container kubepods-besteffort-poda7e097d4_cfcc_4137_9338_932bad26e148.slice. May 10 00:06:48.959506 kubelet[2684]: I0510 00:06:48.959377 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24f009df-8c19-4b55-8fef-4f458dd12e97-config-volume\") pod \"coredns-6f6b679f8f-bzj9v\" (UID: \"24f009df-8c19-4b55-8fef-4f458dd12e97\") " pod="kube-system/coredns-6f6b679f8f-bzj9v" May 10 00:06:48.959506 kubelet[2684]: I0510 00:06:48.959417 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1194d46-0896-4b5b-ac89-83be1ea63c3b-config-volume\") pod \"coredns-6f6b679f8f-fgtct\" (UID: \"c1194d46-0896-4b5b-ac89-83be1ea63c3b\") " pod="kube-system/coredns-6f6b679f8f-fgtct" May 10 00:06:48.959506 kubelet[2684]: I0510 00:06:48.959440 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzz42\" (UniqueName: \"kubernetes.io/projected/24f009df-8c19-4b55-8fef-4f458dd12e97-kube-api-access-pzz42\") pod \"coredns-6f6b679f8f-bzj9v\" (UID: \"24f009df-8c19-4b55-8fef-4f458dd12e97\") " pod="kube-system/coredns-6f6b679f8f-bzj9v" May 10 00:06:48.959506 kubelet[2684]: I0510 00:06:48.959465 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30-calico-apiserver-certs\") pod \"calico-apiserver-784576f4d-st7c7\" (UID: \"64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30\") " pod="calico-apiserver/calico-apiserver-784576f4d-st7c7" May 10 00:06:48.959506 kubelet[2684]: I0510 00:06:48.959485 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n4gx6\" (UniqueName: \"kubernetes.io/projected/64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30-kube-api-access-n4gx6\") pod \"calico-apiserver-784576f4d-st7c7\" (UID: \"64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30\") " pod="calico-apiserver/calico-apiserver-784576f4d-st7c7" May 10 00:06:48.959786 kubelet[2684]: I0510 00:06:48.959500 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmpzg\" (UniqueName: \"kubernetes.io/projected/c1194d46-0896-4b5b-ac89-83be1ea63c3b-kube-api-access-hmpzg\") pod \"coredns-6f6b679f8f-fgtct\" (UID: \"c1194d46-0896-4b5b-ac89-83be1ea63c3b\") " pod="kube-system/coredns-6f6b679f8f-fgtct" May 10 00:06:49.060381 kubelet[2684]: I0510 00:06:49.060290 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpt2j\" (UniqueName: \"kubernetes.io/projected/a7e097d4-cfcc-4137-9338-932bad26e148-kube-api-access-dpt2j\") pod \"calico-kube-controllers-66fcf49574-dh78g\" (UID: \"a7e097d4-cfcc-4137-9338-932bad26e148\") " pod="calico-system/calico-kube-controllers-66fcf49574-dh78g" May 10 00:06:49.060381 kubelet[2684]: I0510 00:06:49.060387 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2-calico-apiserver-certs\") pod \"calico-apiserver-784576f4d-7h6d7\" (UID: \"82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2\") " pod="calico-apiserver/calico-apiserver-784576f4d-7h6d7" May 10 00:06:49.060725 kubelet[2684]: I0510 00:06:49.060425 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqd8b\" (UniqueName: \"kubernetes.io/projected/82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2-kube-api-access-lqd8b\") pod \"calico-apiserver-784576f4d-7h6d7\" (UID: \"82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2\") " pod="calico-apiserver/calico-apiserver-784576f4d-7h6d7" May 10 00:06:49.060725 kubelet[2684]: I0510 00:06:49.060458 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e097d4-cfcc-4137-9338-932bad26e148-tigera-ca-bundle\") pod \"calico-kube-controllers-66fcf49574-dh78g\" (UID: \"a7e097d4-cfcc-4137-9338-932bad26e148\") " pod="calico-system/calico-kube-controllers-66fcf49574-dh78g" May 10 00:06:49.184900 containerd[1466]: time="2025-05-10T00:06:49.183829931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bzj9v,Uid:24f009df-8c19-4b55-8fef-4f458dd12e97,Namespace:kube-system,Attempt:0,}" May 10 00:06:49.192792 containerd[1466]: time="2025-05-10T00:06:49.192364280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgtct,Uid:c1194d46-0896-4b5b-ac89-83be1ea63c3b,Namespace:kube-system,Attempt:0,}" May 10 00:06:49.225511 containerd[1466]: time="2025-05-10T00:06:49.224884097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fcf49574-dh78g,Uid:a7e097d4-cfcc-4137-9338-932bad26e148,Namespace:calico-system,Attempt:0,}" May 10 00:06:49.309462 containerd[1466]: time="2025-05-10T00:06:49.309414939Z" level=error msg="Failed to destroy network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 10 00:06:49.310187 containerd[1466]: time="2025-05-10T00:06:49.310082388Z" level=error msg="encountered an error cleaning up failed sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.310187 containerd[1466]: time="2025-05-10T00:06:49.310144629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bzj9v,Uid:24f009df-8c19-4b55-8fef-4f458dd12e97,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.310515 kubelet[2684]: E0510 00:06:49.310479 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.310580 kubelet[2684]: E0510 00:06:49.310547 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bzj9v" May 10 00:06:49.310580 kubelet[2684]: E0510 00:06:49.310566 2684 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bzj9v" May 10 00:06:49.310650 kubelet[2684]: E0510 00:06:49.310611 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bzj9v_kube-system(24f009df-8c19-4b55-8fef-4f458dd12e97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bzj9v_kube-system(24f009df-8c19-4b55-8fef-4f458dd12e97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bzj9v" podUID="24f009df-8c19-4b55-8fef-4f458dd12e97" May 10 00:06:49.315214 containerd[1466]: time="2025-05-10T00:06:49.315077492Z" level=error msg="Failed to destroy network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.316162 containerd[1466]: time="2025-05-10T00:06:49.316090505Z" level=error msg="encountered an error cleaning up failed sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.316327 containerd[1466]: time="2025-05-10T00:06:49.316275667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgtct,Uid:c1194d46-0896-4b5b-ac89-83be1ea63c3b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.316780 kubelet[2684]: E0510 00:06:49.316587 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.316780 kubelet[2684]: E0510 00:06:49.316652 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fgtct" May 10 00:06:49.316780 kubelet[2684]: E0510 00:06:49.316687 2684 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fgtct" May 10 00:06:49.317121 kubelet[2684]: E0510 00:06:49.316763 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fgtct_kube-system(c1194d46-0896-4b5b-ac89-83be1ea63c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fgtct_kube-system(c1194d46-0896-4b5b-ac89-83be1ea63c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fgtct" podUID="c1194d46-0896-4b5b-ac89-83be1ea63c3b" May 10 00:06:49.337501 containerd[1466]: time="2025-05-10T00:06:49.336777250Z" level=error msg="Failed to destroy network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.337501 containerd[1466]: time="2025-05-10T00:06:49.337178615Z" level=error msg="encountered an error cleaning up failed sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.337501 containerd[1466]: time="2025-05-10T00:06:49.337226376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fcf49574-dh78g,Uid:a7e097d4-cfcc-4137-9338-932bad26e148,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.338813 kubelet[2684]: E0510 00:06:49.337506 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.338813 kubelet[2684]: E0510 00:06:49.337581 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66fcf49574-dh78g" May 10 00:06:49.338813 kubelet[2684]: E0510 00:06:49.337607 2684 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66fcf49574-dh78g" May 10 00:06:49.338980 kubelet[2684]: E0510 00:06:49.337689 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66fcf49574-dh78g_calico-system(a7e097d4-cfcc-4137-9338-932bad26e148)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66fcf49574-dh78g_calico-system(a7e097d4-cfcc-4137-9338-932bad26e148)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66fcf49574-dh78g" podUID="a7e097d4-cfcc-4137-9338-932bad26e148" May 10 00:06:49.579862 systemd[1]: Created slice 
kubepods-besteffort-pod19603e28_9cab_4129_9122_e17f2cc348a8.slice - libcontainer container kubepods-besteffort-pod19603e28_9cab_4129_9122_e17f2cc348a8.slice. May 10 00:06:49.583842 containerd[1466]: time="2025-05-10T00:06:49.583797854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kkznf,Uid:19603e28-9cab-4129-9122-e17f2cc348a8,Namespace:calico-system,Attempt:0,}" May 10 00:06:49.645315 containerd[1466]: time="2025-05-10T00:06:49.645243001Z" level=error msg="Failed to destroy network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.645650 containerd[1466]: time="2025-05-10T00:06:49.645605526Z" level=error msg="encountered an error cleaning up failed sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.645830 containerd[1466]: time="2025-05-10T00:06:49.645738207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kkznf,Uid:19603e28-9cab-4129-9122-e17f2cc348a8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.646295 kubelet[2684]: E0510 00:06:49.646045 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.646295 kubelet[2684]: E0510 00:06:49.646108 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kkznf" May 10 00:06:49.646295 kubelet[2684]: E0510 00:06:49.646130 2684 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kkznf" May 10 00:06:49.646635 kubelet[2684]: E0510 00:06:49.646171 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kkznf_calico-system(19603e28-9cab-4129-9122-e17f2cc348a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-kkznf_calico-system(19603e28-9cab-4129-9122-e17f2cc348a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kkznf" podUID="19603e28-9cab-4129-9122-e17f2cc348a8" May 10 00:06:49.697399 kubelet[2684]: I0510 00:06:49.697349 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:06:49.698726 containerd[1466]: time="2025-05-10T00:06:49.698638965Z" level=info msg="StopPodSandbox for \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\"" May 10 00:06:49.699537 kubelet[2684]: I0510 00:06:49.699175 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:06:49.700123 containerd[1466]: time="2025-05-10T00:06:49.699827860Z" level=info msg="Ensure that sandbox 7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936 in task-service has been cleanup successfully" May 10 00:06:49.701523 containerd[1466]: time="2025-05-10T00:06:49.701494161Z" level=info msg="StopPodSandbox for \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\"" May 10 00:06:49.702718 containerd[1466]: time="2025-05-10T00:06:49.702423013Z" level=info msg="Ensure that sandbox 09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966 in task-service has been cleanup successfully" May 10 00:06:49.716604 containerd[1466]: time="2025-05-10T00:06:49.716558274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 10 00:06:49.723044 kubelet[2684]: I0510 00:06:49.723000 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:06:49.724251 containerd[1466]: time="2025-05-10T00:06:49.723871408Z" level=info msg="StopPodSandbox for \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\"" May 10 00:06:49.724251 containerd[1466]: time="2025-05-10T00:06:49.724031250Z" level=info msg="Ensure that sandbox 9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8 in task-service has been cleanup successfully" May 10 00:06:49.729196 kubelet[2684]: I0510 00:06:49.729152 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:06:49.732929 containerd[1466]: time="2025-05-10T00:06:49.732778362Z" level=info msg="StopPodSandbox for \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\"" May 10 00:06:49.733061 containerd[1466]: time="2025-05-10T00:06:49.732975685Z" level=info msg="Ensure that sandbox cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43 in task-service has been cleanup successfully" May 10 00:06:49.795718 containerd[1466]: time="2025-05-10T00:06:49.795487125Z" level=error msg="StopPodSandbox for \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\" failed" error="failed to destroy network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.795850 kubelet[2684]: E0510 00:06:49.795756 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:06:49.795895 kubelet[2684]: E0510 00:06:49.795816 2684 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936"} May 10 00:06:49.795895 kubelet[2684]: E0510 00:06:49.795888 2684 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19603e28-9cab-4129-9122-e17f2cc348a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:49.795970 kubelet[2684]: E0510 00:06:49.795911 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19603e28-9cab-4129-9122-e17f2cc348a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kkznf" podUID="19603e28-9cab-4129-9122-e17f2cc348a8" May 10 00:06:49.797479 containerd[1466]: time="2025-05-10T00:06:49.797368869Z" level=error msg="StopPodSandbox for \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\" failed" error="failed to destroy network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.797861 kubelet[2684]: E0510 00:06:49.797810 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:06:49.797941 kubelet[2684]: E0510 00:06:49.797874 2684 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8"} May 10 00:06:49.797941 kubelet[2684]: E0510 00:06:49.797916 2684 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1194d46-0896-4b5b-ac89-83be1ea63c3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:49.798022 kubelet[2684]: E0510 00:06:49.797936 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1194d46-0896-4b5b-ac89-83be1ea63c3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fgtct" podUID="c1194d46-0896-4b5b-ac89-83be1ea63c3b" May 10 00:06:49.801878 containerd[1466]: time="2025-05-10T00:06:49.801823247Z" level=error msg="StopPodSandbox for \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\" failed" error="failed to destroy network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.802114 kubelet[2684]: E0510 00:06:49.802078 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:06:49.802422 kubelet[2684]: E0510 00:06:49.802129 2684 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43"} May 10 00:06:49.802422 kubelet[2684]: E0510 00:06:49.802161 2684 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24f009df-8c19-4b55-8fef-4f458dd12e97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:49.802422 kubelet[2684]: E0510 00:06:49.802212 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24f009df-8c19-4b55-8fef-4f458dd12e97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bzj9v" podUID="24f009df-8c19-4b55-8fef-4f458dd12e97" May 10 00:06:49.805431 containerd[1466]: time="2025-05-10T00:06:49.805119769Z" level=error msg="StopPodSandbox for 
\"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\" failed" error="failed to destroy network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.805431 containerd[1466]: time="2025-05-10T00:06:49.805284691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784576f4d-st7c7,Uid:64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30,Namespace:calico-apiserver,Attempt:0,}" May 10 00:06:49.805741 kubelet[2684]: E0510 00:06:49.805299 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:06:49.805741 kubelet[2684]: E0510 00:06:49.805347 2684 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966"} May 10 00:06:49.805741 kubelet[2684]: E0510 00:06:49.805378 2684 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7e097d4-cfcc-4137-9338-932bad26e148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:49.805741 kubelet[2684]: E0510 00:06:49.805399 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7e097d4-cfcc-4137-9338-932bad26e148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66fcf49574-dh78g" podUID="a7e097d4-cfcc-4137-9338-932bad26e148" May 10 00:06:49.813365 containerd[1466]: time="2025-05-10T00:06:49.813174672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784576f4d-7h6d7,Uid:82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2,Namespace:calico-apiserver,Attempt:0,}" May 10 00:06:49.896749 containerd[1466]: time="2025-05-10T00:06:49.896643861Z" level=error msg="Failed to destroy network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.897284 containerd[1466]: time="2025-05-10T00:06:49.897252989Z" level=error msg="encountered an error cleaning up failed sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.897433 containerd[1466]: time="2025-05-10T00:06:49.897405151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784576f4d-st7c7,Uid:64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.897951 kubelet[2684]: E0510 00:06:49.897896 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.898082 kubelet[2684]: E0510 00:06:49.898065 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784576f4d-st7c7" May 10 00:06:49.898246 kubelet[2684]: E0510 00:06:49.898160 2684 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784576f4d-st7c7" May 10 00:06:49.898494 kubelet[2684]: E0510 00:06:49.898367 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784576f4d-st7c7_calico-apiserver(64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784576f4d-st7c7_calico-apiserver(64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784576f4d-st7c7" podUID="64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30" May 10 00:06:49.916268 containerd[1466]: time="2025-05-10T00:06:49.916054910Z" level=error msg="Failed to destroy network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.916818 containerd[1466]: time="2025-05-10T00:06:49.916629437Z" level=error msg="encountered an error cleaning up failed sandbox 
\"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.916818 containerd[1466]: time="2025-05-10T00:06:49.916754079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784576f4d-7h6d7,Uid:82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.917208 kubelet[2684]: E0510 00:06:49.917143 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:49.917376 kubelet[2684]: E0510 00:06:49.917220 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784576f4d-7h6d7" May 10 00:06:49.917376 kubelet[2684]: E0510 00:06:49.917245 2684 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784576f4d-7h6d7" May 10 00:06:49.917376 kubelet[2684]: E0510 00:06:49.917290 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784576f4d-7h6d7_calico-apiserver(82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784576f4d-7h6d7_calico-apiserver(82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784576f4d-7h6d7" podUID="82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2" May 10 00:06:50.732740 kubelet[2684]: I0510 00:06:50.732507 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:06:50.736463 kubelet[2684]: I0510 00:06:50.735209 2684 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:06:50.736615 containerd[1466]: time="2025-05-10T00:06:50.735278032Z" level=info msg="StopPodSandbox for \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\"" May 10 00:06:50.736615 containerd[1466]: time="2025-05-10T00:06:50.735597876Z" level=info msg="Ensure that sandbox 3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99 in task-service has been cleanup successfully" May 10 00:06:50.738104 containerd[1466]: time="2025-05-10T00:06:50.737690302Z" level=info msg="StopPodSandbox for \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\"" May 10 00:06:50.738104 containerd[1466]: time="2025-05-10T00:06:50.737875464Z" level=info msg="Ensure that sandbox 94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db in task-service has been cleanup successfully" May 10 00:06:50.774752 containerd[1466]: time="2025-05-10T00:06:50.774187203Z" level=error msg="StopPodSandbox for \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\" failed" error="failed to destroy network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:50.775466 kubelet[2684]: E0510 00:06:50.775152 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:06:50.775466 kubelet[2684]: E0510 00:06:50.775206 2684 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db"} May 10 00:06:50.775466 kubelet[2684]: E0510 00:06:50.775253 2684 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:50.775466 kubelet[2684]: E0510 00:06:50.775275 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784576f4d-7h6d7" podUID="82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2" May 10 00:06:50.776782 containerd[1466]: time="2025-05-10T00:06:50.776735555Z" level=error msg="StopPodSandbox for \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\" failed" 
error="failed to destroy network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:50.776993 kubelet[2684]: E0510 00:06:50.776957 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:06:50.777095 kubelet[2684]: E0510 00:06:50.777010 2684 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99"} May 10 00:06:50.777095 kubelet[2684]: E0510 00:06:50.777042 2684 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:50.777095 kubelet[2684]: E0510 00:06:50.777064 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784576f4d-st7c7" podUID="64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30" May 10 00:06:57.992584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1605105185.mount: Deactivated successfully. 
May 10 00:06:58.025867 containerd[1466]: time="2025-05-10T00:06:58.025388930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:58.027135 containerd[1466]: time="2025-05-10T00:06:58.026615184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 10 00:06:58.027281 containerd[1466]: time="2025-05-10T00:06:58.027255992Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:58.031553 containerd[1466]: time="2025-05-10T00:06:58.031480560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:58.034643 containerd[1466]: time="2025-05-10T00:06:58.034546355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 8.317928319s" May 10 00:06:58.034817 containerd[1466]: time="2025-05-10T00:06:58.034639476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 10 00:06:58.049490 containerd[1466]: time="2025-05-10T00:06:58.048751996Z" level=info msg="CreateContainer within sandbox \"54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 00:06:58.068721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464399714.mount: Deactivated successfully. May 10 00:06:58.069387 containerd[1466]: time="2025-05-10T00:06:58.069282870Z" level=info msg="CreateContainer within sandbox \"54f610b8ccf7a9202146d812ea9549f5c2350c83c7a1503100107bf48f1351ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"142a952cfd79593370f0eac8b839e3fe939da101556733d72c93135142df696a\"" May 10 00:06:58.071574 containerd[1466]: time="2025-05-10T00:06:58.071283893Z" level=info msg="StartContainer for \"142a952cfd79593370f0eac8b839e3fe939da101556733d72c93135142df696a\"" May 10 00:06:58.101922 systemd[1]: Started cri-containerd-142a952cfd79593370f0eac8b839e3fe939da101556733d72c93135142df696a.scope - libcontainer container 142a952cfd79593370f0eac8b839e3fe939da101556733d72c93135142df696a. May 10 00:06:58.136645 containerd[1466]: time="2025-05-10T00:06:58.136592756Z" level=info msg="StartContainer for \"142a952cfd79593370f0eac8b839e3fe939da101556733d72c93135142df696a\" returns successfully" May 10 00:06:58.244354 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 00:06:58.244584 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
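The pull record above carries enough to gauge network throughput: containerd reports size "138981755" bytes fetched in 8.317928319s. A back-of-envelope check using only the two figures copied from the log:

    package main

    import "fmt"

    func main() {
        const bytes = 138981755.0 // image size reported by containerd
        const secs = 8.317928319  // pull duration reported by containerd
        // Prints ~15.9 MiB/s (about 16.7 MB/s) for the calico/node image.
        fmt.Printf("effective pull rate: %.1f MiB/s\n", bytes/secs/(1<<20))
    }

The WireGuard module load that follows is plausibly calico-node probing for in-kernel WireGuard support as it starts; Calico can use WireGuard for node-to-node encryption.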
May 10 00:07:00.039807 kernel: bpftool[4011]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 10 00:07:00.231564 systemd-networkd[1375]: vxlan.calico: Link UP May 10 00:07:00.231570 systemd-networkd[1375]: vxlan.calico: Gained carrier May 10 00:07:01.472975 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL May 10 00:07:02.573016 containerd[1466]: time="2025-05-10T00:07:02.572639194Z" level=info msg="StopPodSandbox for \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\"" May 10 00:07:02.573016 containerd[1466]: time="2025-05-10T00:07:02.572876796Z" level=info msg="StopPodSandbox for \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\"" May 10 00:07:02.672308 kubelet[2684]: I0510 00:07:02.672223 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c54cw" podStartSLOduration=5.284612202 podStartE2EDuration="23.672139555s" podCreationTimestamp="2025-05-10 00:06:39 +0000 UTC" firstStartedPulling="2025-05-10 00:06:39.648264856 +0000 UTC m=+15.203751240" lastFinishedPulling="2025-05-10 00:06:58.035792169 +0000 UTC m=+33.591278593" observedRunningTime="2025-05-10 00:06:58.791942654 +0000 UTC m=+34.347429078" watchObservedRunningTime="2025-05-10 00:07:02.672139555 +0000 UTC m=+38.227625979" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.670 [INFO][4130] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.670 [INFO][4130] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" iface="eth0" netns="/var/run/netns/cni-d147c5c5-c1a0-fac9-26c2-77085f4b87cf" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.671 [INFO][4130] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" iface="eth0" netns="/var/run/netns/cni-d147c5c5-c1a0-fac9-26c2-77085f4b87cf" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.674 [INFO][4130] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" iface="eth0" netns="/var/run/netns/cni-d147c5c5-c1a0-fac9-26c2-77085f4b87cf" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.675 [INFO][4130] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.675 [INFO][4130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.718 [INFO][4145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.719 [INFO][4145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.719 [INFO][4145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.736 [WARNING][4145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.737 [INFO][4145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.742 [INFO][4145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:02.747538 containerd[1466]: 2025-05-10 00:07:02.744 [INFO][4130] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:02.751032 containerd[1466]: time="2025-05-10T00:07:02.750982571Z" level=info msg="TearDown network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\" successfully" May 10 00:07:02.751032 containerd[1466]: time="2025-05-10T00:07:02.751030692Z" level=info msg="StopPodSandbox for \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\" returns successfully" May 10 00:07:02.754783 containerd[1466]: time="2025-05-10T00:07:02.753401798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fcf49574-dh78g,Uid:a7e097d4-cfcc-4137-9338-932bad26e148,Namespace:calico-system,Attempt:1,}" May 10 00:07:02.753617 systemd[1]: run-netns-cni\x2dd147c5c5\x2dc1a0\x2dfac9\x2d26c2\x2d77085f4b87cf.mount: Deactivated successfully. May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.675 [INFO][4129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.675 [INFO][4129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" iface="eth0" netns="/var/run/netns/cni-b812fa39-eab2-f6e4-b445-38cde570daca" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.676 [INFO][4129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" iface="eth0" netns="/var/run/netns/cni-b812fa39-eab2-f6e4-b445-38cde570daca" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.678 [INFO][4129] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" iface="eth0" netns="/var/run/netns/cni-b812fa39-eab2-f6e4-b445-38cde570daca" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.678 [INFO][4129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.678 [INFO][4129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.723 [INFO][4147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.723 [INFO][4147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.742 [INFO][4147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.763 [WARNING][4147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.763 [INFO][4147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.766 [INFO][4147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:02.773460 containerd[1466]: 2025-05-10 00:07:02.770 [INFO][4129] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:02.775511 systemd[1]: run-netns-cni\x2db812fa39\x2deab2\x2df6e4\x2db445\x2d38cde570daca.mount: Deactivated successfully. 
May 10 00:07:02.778214 containerd[1466]: time="2025-05-10T00:07:02.777863343Z" level=info msg="TearDown network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\" successfully" May 10 00:07:02.778214 containerd[1466]: time="2025-05-10T00:07:02.777906264Z" level=info msg="StopPodSandbox for \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\" returns successfully" May 10 00:07:02.779622 containerd[1466]: time="2025-05-10T00:07:02.779322159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kkznf,Uid:19603e28-9cab-4129-9122-e17f2cc348a8,Namespace:calico-system,Attempt:1,}" May 10 00:07:03.020888 systemd-networkd[1375]: calic15a4f46097: Link UP May 10 00:07:03.021243 systemd-networkd[1375]: calic15a4f46097: Gained carrier May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.913 [INFO][4160] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0 calico-kube-controllers-66fcf49574- calico-system a7e097d4-cfcc-4137-9338-932bad26e148 771 0 2025-05-10 00:06:39 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66fcf49574 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-7b3972f1ed calico-kube-controllers-66fcf49574-dh78g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic15a4f46097 [] []}} ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Namespace="calico-system" Pod="calico-kube-controllers-66fcf49574-dh78g" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.913 [INFO][4160] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Namespace="calico-system" Pod="calico-kube-controllers-66fcf49574-dh78g" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.955 [INFO][4186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" HandleID="k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.972 [INFO][4186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" HandleID="k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d110), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-7b3972f1ed", "pod":"calico-kube-controllers-66fcf49574-dh78g", "timestamp":"2025-05-10 00:07:02.955302311 +0000 UTC"}, Hostname:"ci-4081-3-3-n-7b3972f1ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil),
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.972 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.972 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.972 [INFO][4186] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-7b3972f1ed' May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.977 [INFO][4186] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.983 [INFO][4186] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.989 [INFO][4186] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.991 [INFO][4186] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.993 [INFO][4186] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.994 [INFO][4186] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:02.996 [INFO][4186] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8 May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:03.001 [INFO][4186] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:03.010 [INFO][4186] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.1/26] block=192.168.54.0/26 handle="k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:03.010 [INFO][4186] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.1/26] handle="k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:03.010 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:07:03.046594 containerd[1466]: 2025-05-10 00:07:03.010 [INFO][4186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.1/26] IPv6=[] ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" HandleID="k8s-pod-network.c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:03.047689 containerd[1466]: 2025-05-10 00:07:03.013 [INFO][4160] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Namespace="calico-system" Pod="calico-kube-controllers-66fcf49574-dh78g" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0", GenerateName:"calico-kube-controllers-66fcf49574-", Namespace:"calico-system", SelfLink:"", UID:"a7e097d4-cfcc-4137-9338-932bad26e148", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fcf49574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"", Pod:"calico-kube-controllers-66fcf49574-dh78g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic15a4f46097", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:03.047689 containerd[1466]: 2025-05-10 00:07:03.014 [INFO][4160] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.1/32] ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Namespace="calico-system" Pod="calico-kube-controllers-66fcf49574-dh78g" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:03.047689 containerd[1466]: 2025-05-10 00:07:03.014 [INFO][4160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic15a4f46097 ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Namespace="calico-system" Pod="calico-kube-controllers-66fcf49574-dh78g" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:03.047689 containerd[1466]: 2025-05-10 00:07:03.021 [INFO][4160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Namespace="calico-system" Pod="calico-kube-controllers-66fcf49574-dh78g" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:03.047689
containerd[1466]: 2025-05-10 00:07:03.022 [INFO][4160] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Namespace="calico-system" Pod="calico-kube-controllers-66fcf49574-dh78g" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0", GenerateName:"calico-kube-controllers-66fcf49574-", Namespace:"calico-system", SelfLink:"", UID:"a7e097d4-cfcc-4137-9338-932bad26e148", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fcf49574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8", Pod:"calico-kube-controllers-66fcf49574-dh78g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic15a4f46097", MAC:"2a:e9:b4:be:4d:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:03.047689 containerd[1466]: 2025-05-10 00:07:03.041 [INFO][4160] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8" Namespace="calico-system" Pod="calico-kube-controllers-66fcf49574-dh78g" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:03.073248 containerd[1466]: time="2025-05-10T00:07:03.072971741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:03.073248 containerd[1466]: time="2025-05-10T00:07:03.073084303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:03.073248 containerd[1466]: time="2025-05-10T00:07:03.073109183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:03.073528 containerd[1466]: time="2025-05-10T00:07:03.073421666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:03.104989 systemd[1]: Started cri-containerd-c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8.scope - libcontainer container c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8.
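The IPAM trace above follows Calico's host-affinity flow: the node confirms its claim on block 192.168.54.0/26, then assigns the first free address, 192.168.54.1/32, to calico-kube-controllers; the csi-node-driver pod just below receives 192.168.54.2/32 from the same block. A small sketch with Go's net/netip, using the block and addresses copied from the log, of what that /26 affinity implies:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and addresses as reported by the ipam/ipam.go records above.
        block := netip.MustParsePrefix("192.168.54.0/26")
        for _, s := range []string{"192.168.54.1", "192.168.54.2"} {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
        }
        // 32-26 = 6 host bits, so each node-affine block holds 64 addresses.
        fmt.Println("addresses per block:", 1<<(32-block.Bits()))
    }

With the affinity confirmed, consecutive pods on this node draw consecutive addresses from the same block, which is exactly what the two assignments show.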
May 10 00:07:03.134081 systemd-networkd[1375]: cali2166f89971f: Link UP May 10 00:07:03.134903 systemd-networkd[1375]: cali2166f89971f: Gained carrier May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:02.913 [INFO][4161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0 csi-node-driver- calico-system 19603e28-9cab-4129-9122-e17f2cc348a8 772 0 2025-05-10 00:06:39 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-7b3972f1ed csi-node-driver-kkznf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2166f89971f [] []}} ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Namespace="calico-system" Pod="csi-node-driver-kkznf" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:02.913 [INFO][4161] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Namespace="calico-system" Pod="csi-node-driver-kkznf" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:02.960 [INFO][4188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" HandleID="k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:02.978 [INFO][4188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" HandleID="k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cd00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-7b3972f1ed", "pod":"csi-node-driver-kkznf", "timestamp":"2025-05-10 00:07:02.960451607 +0000 UTC"}, Hostname:"ci-4081-3-3-n-7b3972f1ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:02.979 [INFO][4188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.010 [INFO][4188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.010 [INFO][4188] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-7b3972f1ed' May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.082 [INFO][4188] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.089 [INFO][4188] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.098 [INFO][4188] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.102 [INFO][4188] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.107 [INFO][4188] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.107 [INFO][4188] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.110 [INFO][4188] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.118 [INFO][4188] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.128 [INFO][4188] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.2/26] block=192.168.54.0/26 handle="k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.128 [INFO][4188] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.2/26] handle="k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.128 [INFO][4188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:07:03.163179 containerd[1466]: 2025-05-10 00:07:03.128 [INFO][4188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.2/26] IPv6=[] ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" HandleID="k8s-pod-network.e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:03.164354 containerd[1466]: 2025-05-10 00:07:03.131 [INFO][4161] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Namespace="calico-system" Pod="csi-node-driver-kkznf" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19603e28-9cab-4129-9122-e17f2cc348a8", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"", Pod:"csi-node-driver-kkznf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2166f89971f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:03.164354 containerd[1466]: 2025-05-10 00:07:03.131 [INFO][4161] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.2/32] ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Namespace="calico-system" Pod="csi-node-driver-kkznf" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:03.164354 containerd[1466]: 2025-05-10 00:07:03.131 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2166f89971f ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Namespace="calico-system" Pod="csi-node-driver-kkznf" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:03.164354 containerd[1466]: 2025-05-10 00:07:03.135 [INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Namespace="calico-system" Pod="csi-node-driver-kkznf" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:03.164354 containerd[1466]: 2025-05-10 00:07:03.135 [INFO][4161] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Namespace="calico-system" Pod="csi-node-driver-kkznf" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19603e28-9cab-4129-9122-e17f2cc348a8", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f", Pod:"csi-node-driver-kkznf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2166f89971f", MAC:"da:d7:f2:93:cf:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:03.164354 containerd[1466]: 2025-05-10 00:07:03.159 [INFO][4161] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f" Namespace="calico-system" Pod="csi-node-driver-kkznf" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:03.165383 containerd[1466]: time="2025-05-10T00:07:03.165211093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fcf49574-dh78g,Uid:a7e097d4-cfcc-4137-9338-932bad26e148,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8\"" May 10 00:07:03.170769 containerd[1466]: time="2025-05-10T00:07:03.170384468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 10 00:07:03.200033 containerd[1466]: time="2025-05-10T00:07:03.199847305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:03.200590 containerd[1466]: time="2025-05-10T00:07:03.200512392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:03.200590 containerd[1466]: time="2025-05-10T00:07:03.200569233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:03.200968 containerd[1466]: time="2025-05-10T00:07:03.200850316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:03.221186 systemd[1]: Started cri-containerd-e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f.scope - libcontainer container e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f. May 10 00:07:03.260625 containerd[1466]: time="2025-05-10T00:07:03.260547677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kkznf,Uid:19603e28-9cab-4129-9122-e17f2cc348a8,Namespace:calico-system,Attempt:1,} returns sandbox id \"e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f\"" May 10 00:07:03.571843 containerd[1466]: time="2025-05-10T00:07:03.571001534Z" level=info msg="StopPodSandbox for \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\"" May 10 00:07:03.572657 containerd[1466]: time="2025-05-10T00:07:03.572593311Z" level=info msg="StopPodSandbox for \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\"" May 10 00:07:03.573828 containerd[1466]: time="2025-05-10T00:07:03.573794364Z" level=info msg="StopPodSandbox for \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\"" May 10 00:07:03.575288 containerd[1466]: time="2025-05-10T00:07:03.574880735Z" level=info msg="StopPodSandbox for \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\"" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.726 [INFO][4350] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.726 [INFO][4350] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" iface="eth0" netns="/var/run/netns/cni-3ebf66e5-138e-48ed-579f-fe4c59fa1b38" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.728 [INFO][4350] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" iface="eth0" netns="/var/run/netns/cni-3ebf66e5-138e-48ed-579f-fe4c59fa1b38" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.730 [INFO][4350] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" iface="eth0" netns="/var/run/netns/cni-3ebf66e5-138e-48ed-579f-fe4c59fa1b38" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.731 [INFO][4350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.731 [INFO][4350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.787 [INFO][4392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.787 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.788 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
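The ipam_plugin.go entries above ("About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock.") show that every Calico CNI add and delete on this node serializes its address bookkeeping behind a single node-wide lock before it touches the shared allocation block; the release this particular DEL performs continues in the entries below. A minimal Go sketch of the pattern, assuming an in-process sync.Mutex as a stand-in for the real lock (CNI invocations are separate short-lived processes, so the actual plugin has to use a cross-process mechanism rather than an in-memory mutex):

package main

import (
	"log"
	"sync"
)

// hostIPAMLock serializes all IPAM work on this node. This is a model of
// the behaviour visible in the log, not Calico's implementation.
var hostIPAMLock sync.Mutex

// withHostLock brackets fn with the same three phases the plugin logs:
// about to acquire, acquired, released.
func withHostLock(fn func()) {
	log.Println("About to acquire host-wide IPAM lock.")
	hostIPAMLock.Lock()
	log.Println("Acquired host-wide IPAM lock.")
	defer func() {
		hostIPAMLock.Unlock()
		log.Println("Released host-wide IPAM lock.")
	}()
	fn()
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // three concurrent CNI operations, strictly serialized
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			withHostLock(func() { log.Printf("operation %d assigning/releasing addresses", id) })
		}(i)
	}
	wg.Wait()
}

This serialization is why the interleaved operations in this log ([4385], [4391], [4392] and later [4461], [4471], [4479], [4484]) can all work against the same 192.168.54.0/26 block without racing.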
May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.801 [WARNING][4392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.801 [INFO][4392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.804 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:03.816469 containerd[1466]: 2025-05-10 00:07:03.806 [INFO][4350] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:03.819477 systemd[1]: run-netns-cni\x2d3ebf66e5\x2d138e\x2d48ed\x2d579f\x2dfe4c59fa1b38.mount: Deactivated successfully. May 10 00:07:03.820329 containerd[1466]: time="2025-05-10T00:07:03.820137211Z" level=info msg="TearDown network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\" successfully" May 10 00:07:03.820329 containerd[1466]: time="2025-05-10T00:07:03.820176331Z" level=info msg="StopPodSandbox for \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\" returns successfully" May 10 00:07:03.821139 containerd[1466]: time="2025-05-10T00:07:03.821113262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgtct,Uid:c1194d46-0896-4b5b-ac89-83be1ea63c3b,Namespace:kube-system,Attempt:1,}" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.703 [INFO][4355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.703 [INFO][4355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" iface="eth0" netns="/var/run/netns/cni-863c3a28-c616-2e82-9502-e0c9785fc030" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.704 [INFO][4355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" iface="eth0" netns="/var/run/netns/cni-863c3a28-c616-2e82-9502-e0c9785fc030" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.705 [INFO][4355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" iface="eth0" netns="/var/run/netns/cni-863c3a28-c616-2e82-9502-e0c9785fc030" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.705 [INFO][4355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.706 [INFO][4355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.811 [INFO][4385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.811 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.812 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.838 [WARNING][4385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.838 [INFO][4385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.845 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:03.862468 containerd[1466]: 2025-05-10 00:07:03.856 [INFO][4355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:03.865817 containerd[1466]: time="2025-05-10T00:07:03.865483578Z" level=info msg="TearDown network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\" successfully" May 10 00:07:03.865817 containerd[1466]: time="2025-05-10T00:07:03.865522459Z" level=info msg="StopPodSandbox for \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\" returns successfully" May 10 00:07:03.867752 systemd[1]: run-netns-cni\x2d863c3a28\x2dc616\x2d2e82\x2d9502\x2de0c9785fc030.mount: Deactivated successfully. May 10 00:07:03.868679 containerd[1466]: time="2025-05-10T00:07:03.868632812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784576f4d-7h6d7,Uid:82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2,Namespace:calico-apiserver,Attempt:1,}" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.725 [INFO][4354] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.726 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" iface="eth0" netns="/var/run/netns/cni-29073d56-2edd-8953-714d-48c233fc6fe9" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.726 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" iface="eth0" netns="/var/run/netns/cni-29073d56-2edd-8953-714d-48c233fc6fe9" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.729 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" iface="eth0" netns="/var/run/netns/cni-29073d56-2edd-8953-714d-48c233fc6fe9" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.729 [INFO][4354] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.729 [INFO][4354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.815 [INFO][4391] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.816 [INFO][4391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.846 [INFO][4391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.870 [WARNING][4391] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.870 [INFO][4391] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.873 [INFO][4391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:03.883881 containerd[1466]: 2025-05-10 00:07:03.876 [INFO][4354] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:03.885937 containerd[1466]: time="2025-05-10T00:07:03.885773316Z" level=info msg="TearDown network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\" successfully" May 10 00:07:03.885937 containerd[1466]: time="2025-05-10T00:07:03.885818077Z" level=info msg="StopPodSandbox for \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\" returns successfully" May 10 00:07:03.887636 systemd[1]: run-netns-cni\x2d29073d56\x2d2edd\x2d8953\x2d714d\x2d48c233fc6fe9.mount: Deactivated successfully. May 10 00:07:03.889531 containerd[1466]: time="2025-05-10T00:07:03.889464836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784576f4d-st7c7,Uid:64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30,Namespace:calico-apiserver,Attempt:1,}" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.758 [INFO][4373] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.759 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" iface="eth0" netns="/var/run/netns/cni-508e19e6-547e-cbcb-aa10-c74a8e12a943" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.760 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" iface="eth0" netns="/var/run/netns/cni-508e19e6-547e-cbcb-aa10-c74a8e12a943" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.760 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" iface="eth0" netns="/var/run/netns/cni-508e19e6-547e-cbcb-aa10-c74a8e12a943" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.760 [INFO][4373] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.760 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.876 [INFO][4400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.877 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.877 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.893 [WARNING][4400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.893 [INFO][4400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.896 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:03.903199 containerd[1466]: 2025-05-10 00:07:03.900 [INFO][4373] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:03.903662 containerd[1466]: time="2025-05-10T00:07:03.903507187Z" level=info msg="TearDown network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\" successfully" May 10 00:07:03.903662 containerd[1466]: time="2025-05-10T00:07:03.903538867Z" level=info msg="StopPodSandbox for \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\" returns successfully" May 10 00:07:03.906522 containerd[1466]: time="2025-05-10T00:07:03.906032894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bzj9v,Uid:24f009df-8c19-4b55-8fef-4f458dd12e97,Namespace:kube-system,Attempt:1,}" May 10 00:07:04.032933 systemd-networkd[1375]: calic15a4f46097: Gained IPv6LL May 10 00:07:04.108493 systemd-networkd[1375]: cali9918d5dd095: Link UP May 10 00:07:04.108917 systemd-networkd[1375]: cali9918d5dd095: Gained carrier May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:03.939 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0 coredns-6f6b679f8f- kube-system c1194d46-0896-4b5b-ac89-83be1ea63c3b 789 0 2025-05-10 00:06:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-7b3972f1ed coredns-6f6b679f8f-fgtct eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9918d5dd095 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fgtct" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:03.940 [INFO][4416] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fgtct" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:03.999 [INFO][4461] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" HandleID="k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" 
Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.025 [INFO][4461] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" HandleID="k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-7b3972f1ed", "pod":"coredns-6f6b679f8f-fgtct", "timestamp":"2025-05-10 00:07:03.999962104 +0000 UTC"}, Hostname:"ci-4081-3-3-n-7b3972f1ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.025 [INFO][4461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.025 [INFO][4461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.026 [INFO][4461] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-7b3972f1ed' May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.037 [INFO][4461] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.049 [INFO][4461] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.062 [INFO][4461] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.065 [INFO][4461] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.069 [INFO][4461] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.069 [INFO][4461] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.074 [INFO][4461] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4 May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.083 [INFO][4461] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.100 [INFO][4461] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.3/26] block=192.168.54.0/26 handle="k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.101 [INFO][4461] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.3/26] 
handle="k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.101 [INFO][4461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:04.130916 containerd[1466]: 2025-05-10 00:07:04.101 [INFO][4461] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.3/26] IPv6=[] ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" HandleID="k8s-pod-network.9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:04.131449 containerd[1466]: 2025-05-10 00:07:04.103 [INFO][4416] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fgtct" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c1194d46-0896-4b5b-ac89-83be1ea63c3b", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"", Pod:"coredns-6f6b679f8f-fgtct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9918d5dd095", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:04.131449 containerd[1466]: 2025-05-10 00:07:04.103 [INFO][4416] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.3/32] ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fgtct" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:04.131449 containerd[1466]: 2025-05-10 00:07:04.103 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9918d5dd095 ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fgtct" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:04.131449 containerd[1466]: 
2025-05-10 00:07:04.107 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fgtct" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:04.131449 containerd[1466]: 2025-05-10 00:07:04.107 [INFO][4416] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fgtct" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c1194d46-0896-4b5b-ac89-83be1ea63c3b", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4", Pod:"coredns-6f6b679f8f-fgtct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9918d5dd095", MAC:"be:93:79:28:98:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:04.131449 containerd[1466]: 2025-05-10 00:07:04.125 [INFO][4416] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fgtct" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:04.183509 containerd[1466]: time="2025-05-10T00:07:04.183377174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:04.183656 containerd[1466]: time="2025-05-10T00:07:04.183518695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:04.183656 containerd[1466]: time="2025-05-10T00:07:04.183534095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:04.183862 containerd[1466]: time="2025-05-10T00:07:04.183762098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:04.207194 systemd[1]: Started cri-containerd-9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4.scope - libcontainer container 9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4. May 10 00:07:04.259364 containerd[1466]: time="2025-05-10T00:07:04.259072538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgtct,Uid:c1194d46-0896-4b5b-ac89-83be1ea63c3b,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4\"" May 10 00:07:04.265198 containerd[1466]: time="2025-05-10T00:07:04.264886920Z" level=info msg="CreateContainer within sandbox \"9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:07:04.283099 containerd[1466]: time="2025-05-10T00:07:04.281967542Z" level=info msg="CreateContainer within sandbox \"9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01170e9b8e6fe383e0ced3b12869db533c663ad605f4bf349bddab9662881963\"" May 10 00:07:04.283022 systemd-networkd[1375]: calicc54fc59012: Link UP May 10 00:07:04.283722 systemd-networkd[1375]: calicc54fc59012: Gained carrier May 10 00:07:04.289417 containerd[1466]: time="2025-05-10T00:07:04.289370141Z" level=info msg="StartContainer for \"01170e9b8e6fe383e0ced3b12869db533c663ad605f4bf349bddab9662881963\"" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:03.986 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0 calico-apiserver-784576f4d- calico-apiserver 82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2 787 0 2025-05-10 00:06:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784576f4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-7b3972f1ed calico-apiserver-784576f4d-7h6d7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicc54fc59012 [] []}} ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-7h6d7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:03.992 [INFO][4423] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-7h6d7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.087 [INFO][4471] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" HandleID="k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" 
Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.219 [INFO][4471] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" HandleID="k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fb640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-7b3972f1ed", "pod":"calico-apiserver-784576f4d-7h6d7", "timestamp":"2025-05-10 00:07:04.087442474 +0000 UTC"}, Hostname:"ci-4081-3-3-n-7b3972f1ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.219 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.219 [INFO][4471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.219 [INFO][4471] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-7b3972f1ed' May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.225 [INFO][4471] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.232 [INFO][4471] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.239 [INFO][4471] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.246 [INFO][4471] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.252 [INFO][4471] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.252 [INFO][4471] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.255 [INFO][4471] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303 May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.262 [INFO][4471] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.275 [INFO][4471] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.4/26] block=192.168.54.0/26 handle="k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.275 [INFO][4471] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.54.4/26] handle="k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.275 [INFO][4471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:04.309503 containerd[1466]: 2025-05-10 00:07:04.275 [INFO][4471] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.4/26] IPv6=[] ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" HandleID="k8s-pod-network.43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:04.310152 containerd[1466]: 2025-05-10 00:07:04.277 [INFO][4423] cni-plugin/k8s.go 386: Populated endpoint ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-7h6d7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0", GenerateName:"calico-apiserver-784576f4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784576f4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"", Pod:"calico-apiserver-784576f4d-7h6d7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc54fc59012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:04.310152 containerd[1466]: 2025-05-10 00:07:04.278 [INFO][4423] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.4/32] ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-7h6d7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:04.310152 containerd[1466]: 2025-05-10 00:07:04.278 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc54fc59012 ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-7h6d7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:04.310152 containerd[1466]: 2025-05-10 00:07:04.281 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" 
Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-7h6d7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:04.310152 containerd[1466]: 2025-05-10 00:07:04.283 [INFO][4423] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-7h6d7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0", GenerateName:"calico-apiserver-784576f4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784576f4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303", Pod:"calico-apiserver-784576f4d-7h6d7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc54fc59012", MAC:"e2:60:33:08:56:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:04.310152 containerd[1466]: 2025-05-10 00:07:04.304 [INFO][4423] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-7h6d7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:04.336971 systemd[1]: Started cri-containerd-01170e9b8e6fe383e0ced3b12869db533c663ad605f4bf349bddab9662881963.scope - libcontainer container 01170e9b8e6fe383e0ced3b12869db533c663ad605f4bf349bddab9662881963. May 10 00:07:04.354221 containerd[1466]: time="2025-05-10T00:07:04.354039068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:04.354221 containerd[1466]: time="2025-05-10T00:07:04.354112589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:04.355045 containerd[1466]: time="2025-05-10T00:07:04.354227270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:04.356149 containerd[1466]: time="2025-05-10T00:07:04.354484673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:04.397073 systemd[1]: Started cri-containerd-43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303.scope - libcontainer container 43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303. May 10 00:07:04.404618 containerd[1466]: time="2025-05-10T00:07:04.403483594Z" level=info msg="StartContainer for \"01170e9b8e6fe383e0ced3b12869db533c663ad605f4bf349bddab9662881963\" returns successfully" May 10 00:07:04.418676 systemd-networkd[1375]: cali437b71f0b5f: Link UP May 10 00:07:04.420252 systemd-networkd[1375]: cali437b71f0b5f: Gained carrier May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.052 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0 coredns-6f6b679f8f- kube-system 24f009df-8c19-4b55-8fef-4f458dd12e97 791 0 2025-05-10 00:06:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-7b3972f1ed coredns-6f6b679f8f-bzj9v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali437b71f0b5f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Namespace="kube-system" Pod="coredns-6f6b679f8f-bzj9v" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.053 [INFO][4437] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Namespace="kube-system" Pod="coredns-6f6b679f8f-bzj9v" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.142 [INFO][4484] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" HandleID="k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.224 [INFO][4484] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" HandleID="k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003015e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-7b3972f1ed", "pod":"coredns-6f6b679f8f-bzj9v", "timestamp":"2025-05-10 00:07:04.142962264 +0000 UTC"}, Hostname:"ci-4081-3-3-n-7b3972f1ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.224 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.275 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
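The allocation trace above ("Looking up existing affinities for host", "Trying affinity for 192.168.54.0/26", "Attempting to load block", "Attempting to assign 1 addresses from block") is Calico's block-affinity IPAM at work: this node holds an affinity for the /26 block 192.168.54.0/26 and hands each new pod the lowest free address in it, which is why the sandboxes in this log receive 192.168.54.2, .3, .4, .5 and .6 in sequence. A toy model of that allocation step, standard library only, with hypothetical handle names rather than the real sandbox-derived ones:

package main

import (
	"fmt"
	"net/netip"
)

// block models one affine IPAM block: a CIDR owned by this node plus a
// record of which offsets inside it are claimed, and by which handle.
type block struct {
	cidr netip.Prefix   // e.g. 192.168.54.0/26, affine to this node
	used map[int]string // offset within the block -> owning handle
}

// assign claims the lowest free address in the block for the handle.
func (b *block) assign(handle string) (netip.Addr, error) {
	size := 1 << (32 - b.cidr.Bits()) // a /26 holds 64 addresses
	addr := b.cidr.Addr()
	for off := 0; off < size; off++ {
		if _, taken := b.used[off]; !taken {
			b.used[off] = handle
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.54.0/26"),
		// hypothetical earlier claims occupying offsets 0 and 1
		used: map[int]string{0: "reserved", 1: "earlier-sandbox"},
	}
	for _, pod := range []string{"csi-node-driver-kkznf", "coredns-6f6b679f8f-fgtct", "calico-apiserver-784576f4d-7h6d7"} {
		ip, err := b.assign("k8s-pod-network." + pod) // illustrative handle scheme
		if err != nil {
			panic(err)
		}
		fmt.Printf("assigned %s/26 for %s\n", ip, pod) // .2, .3, .4, matching the log's order
	}
}

The real allocator also writes the block back to the datastore before the claim counts ("Writing block in order to claim IPs"), so other nodes observe it; the map above only models the in-memory view.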
May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.276 [INFO][4484] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-7b3972f1ed' May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.327 [INFO][4484] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.341 [INFO][4484] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.351 [INFO][4484] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.355 [INFO][4484] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.365 [INFO][4484] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.366 [INFO][4484] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.371 [INFO][4484] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1 May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.382 [INFO][4484] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.401 [INFO][4484] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.5/26] block=192.168.54.0/26 handle="k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.401 [INFO][4484] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.5/26] handle="k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.401 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
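Each successful claim above is recorded under a handle named after the new sandbox ("Creating new handle: k8s-pod-network." plus the container ID), and the DEL path earlier in this log releases in two steps: first by handle ID, and when nothing is allocated under that handle (the repeated "Asked to release address but it doesn't exist. Ignoring" warnings), by workload ID instead, so tearing down a sandbox whose address was never committed still succeeds. A sketch of that idempotent two-step release, with shortened, illustrative identifiers:

package main

import (
	"fmt"
	"net/netip"
)

// allocations indexes the same claim two ways, mirroring the log's
// "Releasing address using handleID" / "using workloadID" steps.
type allocations struct {
	byHandle   map[string]netip.Addr
	byWorkload map[string]netip.Addr
}

// releaseByHandle reports false when the handle owns nothing, letting the
// caller warn and fall back instead of failing the whole DEL.
func (a *allocations) releaseByHandle(handle string) bool {
	addr, ok := a.byHandle[handle]
	if !ok {
		return false
	}
	delete(a.byHandle, handle)
	for w, ip := range a.byWorkload {
		if ip == addr {
			delete(a.byWorkload, w)
		}
	}
	return true
}

// releaseByWorkload is a no-op if the claim is already gone; a DEL must
// be safe to repeat.
func (a *allocations) releaseByWorkload(workload string) {
	delete(a.byWorkload, workload)
}

func main() {
	a := &allocations{
		byHandle: map[string]netip.Addr{},
		byWorkload: map[string]netip.Addr{
			"kube-system/coredns-6f6b679f8f-fgtct": netip.MustParseAddr("192.168.54.3"),
		},
	}
	if !a.releaseByHandle("k8s-pod-network.9e7f6a30") { // shortened, hypothetical handle
		fmt.Println("Asked to release address but it doesn't exist. Ignoring; falling back to workload ID")
		a.releaseByWorkload("kube-system/coredns-6f6b679f8f-fgtct")
	}
	fmt.Println("remaining workload allocations:", len(a.byWorkload)) // 0
}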
May 10 00:07:04.456794 containerd[1466]: 2025-05-10 00:07:04.401 [INFO][4484] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.5/26] IPv6=[] ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" HandleID="k8s-pod-network.b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:04.457903 containerd[1466]: 2025-05-10 00:07:04.411 [INFO][4437] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Namespace="kube-system" Pod="coredns-6f6b679f8f-bzj9v" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"24f009df-8c19-4b55-8fef-4f458dd12e97", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"", Pod:"coredns-6f6b679f8f-bzj9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali437b71f0b5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:04.457903 containerd[1466]: 2025-05-10 00:07:04.411 [INFO][4437] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.5/32] ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Namespace="kube-system" Pod="coredns-6f6b679f8f-bzj9v" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:04.457903 containerd[1466]: 2025-05-10 00:07:04.411 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali437b71f0b5f ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Namespace="kube-system" Pod="coredns-6f6b679f8f-bzj9v" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:04.457903 containerd[1466]: 2025-05-10 00:07:04.421 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Namespace="kube-system" Pod="coredns-6f6b679f8f-bzj9v" 
WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:04.457903 containerd[1466]: 2025-05-10 00:07:04.424 [INFO][4437] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Namespace="kube-system" Pod="coredns-6f6b679f8f-bzj9v" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"24f009df-8c19-4b55-8fef-4f458dd12e97", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1", Pod:"coredns-6f6b679f8f-bzj9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali437b71f0b5f", MAC:"fe:83:43:a3:e9:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:04.457903 containerd[1466]: 2025-05-10 00:07:04.454 [INFO][4437] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1" Namespace="kube-system" Pod="coredns-6f6b679f8f-bzj9v" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:04.501967 containerd[1466]: time="2025-05-10T00:07:04.501170712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:04.501967 containerd[1466]: time="2025-05-10T00:07:04.501229233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:04.501967 containerd[1466]: time="2025-05-10T00:07:04.501256193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:04.503016 containerd[1466]: time="2025-05-10T00:07:04.501963801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:04.522185 systemd-networkd[1375]: cali6c811702ee6: Link UP May 10 00:07:04.523981 systemd-networkd[1375]: cali6c811702ee6: Gained carrier May 10 00:07:04.560550 systemd[1]: Started cri-containerd-b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1.scope - libcontainer container b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1. May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.045 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0 calico-apiserver-784576f4d- calico-apiserver 64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30 788 0 2025-05-10 00:06:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784576f4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-7b3972f1ed calico-apiserver-784576f4d-st7c7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6c811702ee6 [] []}} ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-st7c7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.045 [INFO][4439] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-st7c7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.141 [INFO][4479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" HandleID="k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.226 [INFO][4479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" HandleID="k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-7b3972f1ed", "pod":"calico-apiserver-784576f4d-st7c7", "timestamp":"2025-05-10 00:07:04.141573929 +0000 UTC"}, Hostname:"ci-4081-3-3-n-7b3972f1ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.226 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.401 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.401 [INFO][4479] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-7b3972f1ed' May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.426 [INFO][4479] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.445 [INFO][4479] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.456 [INFO][4479] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.460 [INFO][4479] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.466 [INFO][4479] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.467 [INFO][4479] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.470 [INFO][4479] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25 May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.478 [INFO][4479] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.497 [INFO][4479] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.6/26] block=192.168.54.0/26 handle="k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.497 [INFO][4479] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.6/26] handle="k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" host="ci-4081-3-3-n-7b3972f1ed" May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.497 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
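The [4479] trace above walks one complete block-affinity IPAM assignment: acquire the host-wide lock, look up the host's block affinities, try the affined block 192.168.54.0/26, load and confirm it, claim one free address under a new handle, write the block back ("Writing block in order to claim IPs"), and release the lock. Below is a minimal Go sketch of that sequence over a simplified in-memory block — an illustration of the traced steps, not Calico's actual implementation (the types and the seeding of earlier addresses are invented for the example):

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a simplified stand-in for a Calico IPAM block: a /26 CIDR plus
// a record of which addresses are already claimed and by which handle.
type block struct {
	cidr     netip.Prefix
	assigned map[netip.Addr]string // address -> handle ID
}

var ipamLock sync.Mutex // stand-in for the host-wide IPAM lock in the trace

// autoAssign mirrors the logged sequence: take the lock, scan the affined
// block for a free address, claim it under the handle, release the lock.
func autoAssign(b *block, handle string) (netip.Addr, error) {
	ipamLock.Lock()         // "Acquired host-wide IPAM lock."
	defer ipamLock.Unlock() // "Released host-wide IPAM lock."

	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.assigned[a]; !taken {
			b.assigned[a] = handle
			return a, nil // "Successfully claimed IPs"
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr:     netip.MustParsePrefix("192.168.54.0/26"),
		assigned: map[netip.Addr]string{},
	}
	// Seed .0 through .5 as taken, matching the state in the log, where
	// the next claim for the apiserver pod came back as 192.168.54.6/26.
	for a, i := b.cidr.Addr(), 0; i < 6; a, i = a.Next(), i+1 {
		b.assigned[a] = "earlier-handle"
	}
	ip, err := autoAssign(b, "k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25")
	fmt.Println(ip, err) // 192.168.54.6 <nil>
}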
May 10 00:07:04.601901 containerd[1466]: 2025-05-10 00:07:04.497 [INFO][4479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.6/26] IPv6=[] ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" HandleID="k8s-pod-network.fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:04.602645 containerd[1466]: 2025-05-10 00:07:04.504 [INFO][4439] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-st7c7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0", GenerateName:"calico-apiserver-784576f4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784576f4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"", Pod:"calico-apiserver-784576f4d-st7c7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c811702ee6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:04.602645 containerd[1466]: 2025-05-10 00:07:04.504 [INFO][4439] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.6/32] ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-st7c7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:04.602645 containerd[1466]: 2025-05-10 00:07:04.504 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c811702ee6 ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-st7c7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:04.602645 containerd[1466]: 2025-05-10 00:07:04.526 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-st7c7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:04.602645 containerd[1466]: 2025-05-10 00:07:04.529 [INFO][4439] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-st7c7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0", GenerateName:"calico-apiserver-784576f4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784576f4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25", Pod:"calico-apiserver-784576f4d-st7c7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c811702ee6", MAC:"52:22:d3:8d:78:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:04.602645 containerd[1466]: 2025-05-10 00:07:04.599 [INFO][4439] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25" Namespace="calico-apiserver" Pod="calico-apiserver-784576f4d-st7c7" WorkloadEndpoint="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:04.612133 containerd[1466]: time="2025-05-10T00:07:04.611938290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784576f4d-7h6d7,Uid:82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303\"" May 10 00:07:04.646974 containerd[1466]: time="2025-05-10T00:07:04.646922502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bzj9v,Uid:24f009df-8c19-4b55-8fef-4f458dd12e97,Namespace:kube-system,Attempt:1,} returns sandbox id \"b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1\"" May 10 00:07:04.653217 containerd[1466]: time="2025-05-10T00:07:04.652155078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:04.653217 containerd[1466]: time="2025-05-10T00:07:04.652209918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:04.653217 containerd[1466]: time="2025-05-10T00:07:04.652221398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:04.653217 containerd[1466]: time="2025-05-10T00:07:04.652298719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:04.658365 containerd[1466]: time="2025-05-10T00:07:04.658184182Z" level=info msg="CreateContainer within sandbox \"b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:07:04.682007 systemd[1]: Started cri-containerd-fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25.scope - libcontainer container fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25. May 10 00:07:04.689219 containerd[1466]: time="2025-05-10T00:07:04.689082630Z" level=info msg="CreateContainer within sandbox \"b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f326902d2009ca72b889c951e48710fb779e7530f5ba77c3c1885a945874f2f\"" May 10 00:07:04.690496 containerd[1466]: time="2025-05-10T00:07:04.690375844Z" level=info msg="StartContainer for \"5f326902d2009ca72b889c951e48710fb779e7530f5ba77c3c1885a945874f2f\"" May 10 00:07:04.738954 systemd[1]: Started cri-containerd-5f326902d2009ca72b889c951e48710fb779e7530f5ba77c3c1885a945874f2f.scope - libcontainer container 5f326902d2009ca72b889c951e48710fb779e7530f5ba77c3c1885a945874f2f. May 10 00:07:04.775127 systemd[1]: run-netns-cni\x2d508e19e6\x2d547e\x2dcbcb\x2daa10\x2dc74a8e12a943.mount: Deactivated successfully. May 10 00:07:04.823160 containerd[1466]: time="2025-05-10T00:07:04.823103055Z" level=info msg="StartContainer for \"5f326902d2009ca72b889c951e48710fb779e7530f5ba77c3c1885a945874f2f\" returns successfully" May 10 00:07:04.847988 kubelet[2684]: I0510 00:07:04.847923 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fgtct" podStartSLOduration=33.847902799 podStartE2EDuration="33.847902799s" podCreationTimestamp="2025-05-10 00:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:07:04.824003305 +0000 UTC m=+40.379489729" watchObservedRunningTime="2025-05-10 00:07:04.847902799 +0000 UTC m=+40.403389183" May 10 00:07:04.925458 containerd[1466]: time="2025-05-10T00:07:04.924623454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784576f4d-st7c7,Uid:64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25\"" May 10 00:07:05.121535 systemd-networkd[1375]: cali2166f89971f: Gained IPv6LL May 10 00:07:05.185135 systemd-networkd[1375]: cali9918d5dd095: Gained IPv6LL May 10 00:07:05.313074 systemd-networkd[1375]: calicc54fc59012: Gained IPv6LL May 10 00:07:05.841595 kubelet[2684]: I0510 00:07:05.841113 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bzj9v" podStartSLOduration=34.841091104 podStartE2EDuration="34.841091104s" podCreationTimestamp="2025-05-10 00:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:07:05.834480075 +0000 UTC m=+41.389966499" watchObservedRunningTime="2025-05-10 00:07:05.841091104 +0000 UTC m=+41.396577528" May 10 00:07:06.145086 
systemd-networkd[1375]: cali6c811702ee6: Gained IPv6LL May 10 00:07:06.341527 containerd[1466]: time="2025-05-10T00:07:06.340670363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:06.341527 containerd[1466]: time="2025-05-10T00:07:06.341454291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 10 00:07:06.342295 containerd[1466]: time="2025-05-10T00:07:06.342245940Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:06.344872 containerd[1466]: time="2025-05-10T00:07:06.344776006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:06.346284 containerd[1466]: time="2025-05-10T00:07:06.345832577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 3.175380348s" May 10 00:07:06.346284 containerd[1466]: time="2025-05-10T00:07:06.345875857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 10 00:07:06.357097 containerd[1466]: time="2025-05-10T00:07:06.356924252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 00:07:06.378721 containerd[1466]: time="2025-05-10T00:07:06.377177423Z" level=info msg="CreateContainer within sandbox \"c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 00:07:06.394522 containerd[1466]: time="2025-05-10T00:07:06.394472923Z" level=info msg="CreateContainer within sandbox \"c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"26ca02a30a25919e9087c4ac342aed4e22d8bc5a12d49d6e6352a62f9f1fe1fa\"" May 10 00:07:06.397629 containerd[1466]: time="2025-05-10T00:07:06.396073420Z" level=info msg="StartContainer for \"26ca02a30a25919e9087c4ac342aed4e22d8bc5a12d49d6e6352a62f9f1fe1fa\"" May 10 00:07:06.401283 systemd-networkd[1375]: cali437b71f0b5f: Gained IPv6LL May 10 00:07:06.430896 systemd[1]: Started cri-containerd-26ca02a30a25919e9087c4ac342aed4e22d8bc5a12d49d6e6352a62f9f1fe1fa.scope - libcontainer container 26ca02a30a25919e9087c4ac342aed4e22d8bc5a12d49d6e6352a62f9f1fe1fa. 
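The CreateContainer/StartContainer pairs in these entries are kubelet's CRI calls, each ending in a cri-containerd-*.scope unit wrapping a runc shim. They map roughly onto containerd's Go client API as in the sketch below — an analogue only, with an illustrative container ID and the assumption that the image is already unpacked in the k8s.io namespace's store, as it is above:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Conventional containerd socket; the CRI plugin in these logs works
	// in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.29.3")
	if err != nil {
		panic(err)
	}

	// Analogue of "CreateContainer within sandbox ... returns container id".
	ctr, err := client.NewContainer(ctx, "example-kube-controllers",
		containerd.WithNewSnapshot("example-kube-controllers-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		panic(err)
	}

	// Analogue of "StartContainer ... returns successfully": create the
	// task (the runc shim that the .scope units above wrap) and start it.
	task, err := ctr.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		panic(err)
	}
	if err := task.Start(ctx); err != nil {
		panic(err)
	}
	fmt.Println("started pid", task.Pid()) // cleanup (Kill/Delete) omitted
}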
May 10 00:07:06.473966 containerd[1466]: time="2025-05-10T00:07:06.472415295Z" level=info msg="StartContainer for \"26ca02a30a25919e9087c4ac342aed4e22d8bc5a12d49d6e6352a62f9f1fe1fa\" returns successfully" May 10 00:07:06.902624 kubelet[2684]: I0510 00:07:06.902406 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66fcf49574-dh78g" podStartSLOduration=24.721697568 podStartE2EDuration="27.902384012s" podCreationTimestamp="2025-05-10 00:06:39 +0000 UTC" firstStartedPulling="2025-05-10 00:07:03.167930842 +0000 UTC m=+38.723417266" lastFinishedPulling="2025-05-10 00:07:06.348617326 +0000 UTC m=+41.904103710" observedRunningTime="2025-05-10 00:07:06.868316177 +0000 UTC m=+42.423802601" watchObservedRunningTime="2025-05-10 00:07:06.902384012 +0000 UTC m=+42.457870436" May 10 00:07:08.018667 containerd[1466]: time="2025-05-10T00:07:08.018619606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:08.020198 containerd[1466]: time="2025-05-10T00:07:08.020006740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 10 00:07:08.021347 containerd[1466]: time="2025-05-10T00:07:08.021081631Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:08.023286 containerd[1466]: time="2025-05-10T00:07:08.023245333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:08.024182 containerd[1466]: time="2025-05-10T00:07:08.024149862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.667183289s" May 10 00:07:08.024182 containerd[1466]: time="2025-05-10T00:07:08.024181343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 10 00:07:08.037386 containerd[1466]: time="2025-05-10T00:07:08.037310197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:07:08.047141 containerd[1466]: time="2025-05-10T00:07:08.046952495Z" level=info msg="CreateContainer within sandbox \"e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 10 00:07:08.066272 containerd[1466]: time="2025-05-10T00:07:08.065877928Z" level=info msg="CreateContainer within sandbox \"e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ee0593ec9dc4a38017f984dd230274cb34e950772988204aff132cbc7d523a4\"" May 10 00:07:08.068888 containerd[1466]: time="2025-05-10T00:07:08.068848919Z" level=info msg="StartContainer for \"3ee0593ec9dc4a38017f984dd230274cb34e950772988204aff132cbc7d523a4\"" May 10 00:07:08.113037 systemd[1]: Started cri-containerd-3ee0593ec9dc4a38017f984dd230274cb34e950772988204aff132cbc7d523a4.scope - libcontainer container 
3ee0593ec9dc4a38017f984dd230274cb34e950772988204aff132cbc7d523a4. May 10 00:07:08.149985 containerd[1466]: time="2025-05-10T00:07:08.149418821Z" level=info msg="StartContainer for \"3ee0593ec9dc4a38017f984dd230274cb34e950772988204aff132cbc7d523a4\" returns successfully" May 10 00:07:08.369018 systemd[1]: run-containerd-runc-k8s.io-3ee0593ec9dc4a38017f984dd230274cb34e950772988204aff132cbc7d523a4-runc.QjoCMW.mount: Deactivated successfully. May 10 00:07:12.160662 containerd[1466]: time="2025-05-10T00:07:12.160587682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:12.162347 containerd[1466]: time="2025-05-10T00:07:12.162266579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 10 00:07:12.164099 containerd[1466]: time="2025-05-10T00:07:12.164055156Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:12.168388 containerd[1466]: time="2025-05-10T00:07:12.168353599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:12.170358 containerd[1466]: time="2025-05-10T00:07:12.169990535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 4.132365095s" May 10 00:07:12.170358 containerd[1466]: time="2025-05-10T00:07:12.170023335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:07:12.171835 containerd[1466]: time="2025-05-10T00:07:12.171255667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:07:12.173546 containerd[1466]: time="2025-05-10T00:07:12.173369088Z" level=info msg="CreateContainer within sandbox \"43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:07:12.193032 containerd[1466]: time="2025-05-10T00:07:12.192985521Z" level=info msg="CreateContainer within sandbox \"43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d95781c6b82720f02378124e0265b78602c54a28362d5b4bb963e77cd540d9ea\"" May 10 00:07:12.194332 containerd[1466]: time="2025-05-10T00:07:12.194306294Z" level=info msg="StartContainer for \"d95781c6b82720f02378124e0265b78602c54a28362d5b4bb963e77cd540d9ea\"" May 10 00:07:12.244960 systemd[1]: Started cri-containerd-d95781c6b82720f02378124e0265b78602c54a28362d5b4bb963e77cd540d9ea.scope - libcontainer container d95781c6b82720f02378124e0265b78602c54a28362d5b4bb963e77cd540d9ea. 
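The kubelet pod_startup_latency_tracker entries scattered through this window fit a simple relation, inferable from the logged values: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); pods that never pulled, like the two coredns ones, carry the Go zero-time sentinel 0001-01-01 00:00:00 +0000 UTC in both pull fields. Re-deriving the calico-kube-controllers numbers from the entry above — the results agree with the logged 27.902384012s and 24.721697568s to within tens of nanoseconds, i.e. separate clock reads:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the calico-kube-controllers entry above;
	// the layout is Go's default time.Time formatting, as kubelet logs it.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-05-10 00:06:39 +0000 UTC")
	observed := parse("2025-05-10 00:07:06.902384012 +0000 UTC")
	firstPull := parse("2025-05-10 00:07:03.167930842 +0000 UTC")
	lastPull := parse("2025-05-10 00:07:06.348617326 +0000 UTC")

	e2e := observed.Sub(created)         // 27.902384012s
	slo := e2e - lastPull.Sub(firstPull) // ≈24.721697528s
	fmt.Println(e2e, slo)
}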
May 10 00:07:12.284568 containerd[1466]: time="2025-05-10T00:07:12.284520661Z" level=info msg="StartContainer for \"d95781c6b82720f02378124e0265b78602c54a28362d5b4bb963e77cd540d9ea\" returns successfully" May 10 00:07:12.592558 containerd[1466]: time="2025-05-10T00:07:12.591741561Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:12.592959 containerd[1466]: time="2025-05-10T00:07:12.592920453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 10 00:07:12.596443 containerd[1466]: time="2025-05-10T00:07:12.596400327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 425.1157ms" May 10 00:07:12.596443 containerd[1466]: time="2025-05-10T00:07:12.596440287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:07:12.599053 containerd[1466]: time="2025-05-10T00:07:12.599016873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 10 00:07:12.600973 containerd[1466]: time="2025-05-10T00:07:12.600932372Z" level=info msg="CreateContainer within sandbox \"fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:07:12.617349 containerd[1466]: time="2025-05-10T00:07:12.617278692Z" level=info msg="CreateContainer within sandbox \"fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4bdc21365358b98d9f1ac46b6d987abb318af58da338e586bb6e2ffdf2f04f02\"" May 10 00:07:12.618143 containerd[1466]: time="2025-05-10T00:07:12.617982979Z" level=info msg="StartContainer for \"4bdc21365358b98d9f1ac46b6d987abb318af58da338e586bb6e2ffdf2f04f02\"" May 10 00:07:12.656925 systemd[1]: Started cri-containerd-4bdc21365358b98d9f1ac46b6d987abb318af58da338e586bb6e2ffdf2f04f02.scope - libcontainer container 4bdc21365358b98d9f1ac46b6d987abb318af58da338e586bb6e2ffdf2f04f02. 
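One detail worth reading out of the two apiserver pulls above: the first reported 40247603 bytes read over 4.132365095s, while the second request for the same tag logged ImageUpdate rather than ImageCreate, read only 77 bytes, and returned in 425.1157ms — the layers were already in the content store, so only the remote manifest had to be re-resolved. The same effect can be reproduced against containerd's Go client; a sketch, assuming the conventional socket path and the k8s.io namespace used here:

package main

import (
	"context"
	"fmt"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull the same ref twice. The second pass finds every layer already
	// in the content store, so it only re-resolves the manifest — the
	// 77-byte, 425ms case in the log, versus ~40 MB and 4.13s cold.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.29.3"
	for i := 1; i <= 2; i++ {
		start := time.Now()
		if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
			panic(err)
		}
		fmt.Printf("pull %d: %s\n", i, time.Since(start))
	}
}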
May 10 00:07:12.710388 containerd[1466]: time="2025-05-10T00:07:12.710338607Z" level=info msg="StartContainer for \"4bdc21365358b98d9f1ac46b6d987abb318af58da338e586bb6e2ffdf2f04f02\" returns successfully" May 10 00:07:12.870011 kubelet[2684]: I0510 00:07:12.869461 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-784576f4d-st7c7" podStartSLOduration=26.200406488 podStartE2EDuration="33.869445731s" podCreationTimestamp="2025-05-10 00:06:39 +0000 UTC" firstStartedPulling="2025-05-10 00:07:04.928092411 +0000 UTC m=+40.483578835" lastFinishedPulling="2025-05-10 00:07:12.597131654 +0000 UTC m=+48.152618078" observedRunningTime="2025-05-10 00:07:12.868411481 +0000 UTC m=+48.423897985" watchObservedRunningTime="2025-05-10 00:07:12.869445731 +0000 UTC m=+48.424932155" May 10 00:07:12.886725 kubelet[2684]: I0510 00:07:12.886174 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-784576f4d-7h6d7" podStartSLOduration=26.32984711 podStartE2EDuration="33.886154256s" podCreationTimestamp="2025-05-10 00:06:39 +0000 UTC" firstStartedPulling="2025-05-10 00:07:04.614597518 +0000 UTC m=+40.170083942" lastFinishedPulling="2025-05-10 00:07:12.170904664 +0000 UTC m=+47.726391088" observedRunningTime="2025-05-10 00:07:12.885951094 +0000 UTC m=+48.441437558" watchObservedRunningTime="2025-05-10 00:07:12.886154256 +0000 UTC m=+48.441640680" May 10 00:07:13.871390 kubelet[2684]: I0510 00:07:13.870821 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:13.871390 kubelet[2684]: I0510 00:07:13.871066 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:14.140486 containerd[1466]: time="2025-05-10T00:07:14.140082393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:14.143287 containerd[1466]: time="2025-05-10T00:07:14.142482256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 10 00:07:14.144585 containerd[1466]: time="2025-05-10T00:07:14.144252873Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:14.147475 containerd[1466]: time="2025-05-10T00:07:14.147424384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:14.148777 containerd[1466]: time="2025-05-10T00:07:14.148726077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.549667403s" May 10 00:07:14.148859 containerd[1466]: time="2025-05-10T00:07:14.148817037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 10 00:07:14.152578 containerd[1466]: 
time="2025-05-10T00:07:14.152452273Z" level=info msg="CreateContainer within sandbox \"e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 10 00:07:14.177344 containerd[1466]: time="2025-05-10T00:07:14.177296033Z" level=info msg="CreateContainer within sandbox \"e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5874bdb6a83a66fdac5d42c963db0077f8c0d955ce5d86f6cefb4298b3a889dc\"" May 10 00:07:14.179717 containerd[1466]: time="2025-05-10T00:07:14.179393933Z" level=info msg="StartContainer for \"5874bdb6a83a66fdac5d42c963db0077f8c0d955ce5d86f6cefb4298b3a889dc\"" May 10 00:07:14.238797 systemd[1]: run-containerd-runc-k8s.io-5874bdb6a83a66fdac5d42c963db0077f8c0d955ce5d86f6cefb4298b3a889dc-runc.tLO8zL.mount: Deactivated successfully. May 10 00:07:14.250923 systemd[1]: Started cri-containerd-5874bdb6a83a66fdac5d42c963db0077f8c0d955ce5d86f6cefb4298b3a889dc.scope - libcontainer container 5874bdb6a83a66fdac5d42c963db0077f8c0d955ce5d86f6cefb4298b3a889dc. May 10 00:07:14.294984 containerd[1466]: time="2025-05-10T00:07:14.294488325Z" level=info msg="StartContainer for \"5874bdb6a83a66fdac5d42c963db0077f8c0d955ce5d86f6cefb4298b3a889dc\" returns successfully" May 10 00:07:14.714628 kubelet[2684]: I0510 00:07:14.714491 2684 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 10 00:07:14.721532 kubelet[2684]: I0510 00:07:14.720513 2684 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 10 00:07:14.894812 kubelet[2684]: I0510 00:07:14.893939 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kkznf" podStartSLOduration=25.006471687 podStartE2EDuration="35.893920236s" podCreationTimestamp="2025-05-10 00:06:39 +0000 UTC" firstStartedPulling="2025-05-10 00:07:03.263121985 +0000 UTC m=+38.818608409" lastFinishedPulling="2025-05-10 00:07:14.150570534 +0000 UTC m=+49.706056958" observedRunningTime="2025-05-10 00:07:14.892182499 +0000 UTC m=+50.447668923" watchObservedRunningTime="2025-05-10 00:07:14.893920236 +0000 UTC m=+50.449406660" May 10 00:07:18.120164 kubelet[2684]: I0510 00:07:18.119667 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:18.374878 kubelet[2684]: I0510 00:07:18.374526 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:24.590354 containerd[1466]: time="2025-05-10T00:07:24.590199810Z" level=info msg="StopPodSandbox for \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\"" May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.650 [WARNING][5082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0", GenerateName:"calico-apiserver-784576f4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784576f4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25", Pod:"calico-apiserver-784576f4d-st7c7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c811702ee6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.651 [INFO][5082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.651 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" iface="eth0" netns="" May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.651 [INFO][5082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.651 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.683 [INFO][5089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.683 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.683 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.696 [WARNING][5089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.696 [INFO][5089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.699 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:24.702611 containerd[1466]: 2025-05-10 00:07:24.700 [INFO][5082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:24.702611 containerd[1466]: time="2025-05-10T00:07:24.702459135Z" level=info msg="TearDown network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\" successfully" May 10 00:07:24.702611 containerd[1466]: time="2025-05-10T00:07:24.702486255Z" level=info msg="StopPodSandbox for \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\" returns successfully" May 10 00:07:24.705852 containerd[1466]: time="2025-05-10T00:07:24.703326223Z" level=info msg="RemovePodSandbox for \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\"" May 10 00:07:24.716860 containerd[1466]: time="2025-05-10T00:07:24.716807104Z" level=info msg="Forcibly stopping sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\"" May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.773 [WARNING][5107] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0", GenerateName:"calico-apiserver-784576f4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"64bfd5f9-fcc7-46ea-a3e0-fb75396a3f30", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784576f4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"fe3ff86236155116c16205c2f405e8204c6a8a2e96d4889627193bdbbaeeab25", Pod:"calico-apiserver-784576f4d-st7c7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c811702ee6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.773 [INFO][5107] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.773 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" iface="eth0" netns="" May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.773 [INFO][5107] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.773 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.796 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.797 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.797 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.809 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.809 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" HandleID="k8s-pod-network.3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--st7c7-eth0" May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.811 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:24.817032 containerd[1466]: 2025-05-10 00:07:24.814 [INFO][5107] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99" May 10 00:07:24.820204 containerd[1466]: time="2025-05-10T00:07:24.817073041Z" level=info msg="TearDown network for sandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\" successfully" May 10 00:07:24.821047 containerd[1466]: time="2025-05-10T00:07:24.820936396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:24.823080 containerd[1466]: time="2025-05-10T00:07:24.821075557Z" level=info msg="RemovePodSandbox \"3793df354735c769c070452fca12fdc99f9a7a50d67b35a69ce989ff4ed66b99\" returns successfully" May 10 00:07:24.823930 containerd[1466]: time="2025-05-10T00:07:24.823855142Z" level=info msg="StopPodSandbox for \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\"" May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.882 [WARNING][5133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c1194d46-0896-4b5b-ac89-83be1ea63c3b", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4", Pod:"coredns-6f6b679f8f-fgtct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9918d5dd095", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.883 [INFO][5133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.883 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" iface="eth0" netns="" May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.883 [INFO][5133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.883 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.904 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.905 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.905 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.915 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.916 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.918 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:24.922496 containerd[1466]: 2025-05-10 00:07:24.921 [INFO][5133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:24.923168 containerd[1466]: time="2025-05-10T00:07:24.923020910Z" level=info msg="TearDown network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\" successfully" May 10 00:07:24.923168 containerd[1466]: time="2025-05-10T00:07:24.923056630Z" level=info msg="StopPodSandbox for \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\" returns successfully" May 10 00:07:24.923623 containerd[1466]: time="2025-05-10T00:07:24.923578035Z" level=info msg="RemovePodSandbox for \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\"" May 10 00:07:24.923687 containerd[1466]: time="2025-05-10T00:07:24.923622675Z" level=info msg="Forcibly stopping sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\"" May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:24.981 [WARNING][5158] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c1194d46-0896-4b5b-ac89-83be1ea63c3b", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"9d23672aa6405f199193251a534656e2c9d855ccab913e4f21b8314f2cda33d4", Pod:"coredns-6f6b679f8f-fgtct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9918d5dd095", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:24.982 [INFO][5158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:24.982 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" iface="eth0" netns="" May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:24.982 [INFO][5158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:24.982 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:25.016 [INFO][5167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:25.016 [INFO][5167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:25.016 [INFO][5167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:25.039 [WARNING][5167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:25.039 [INFO][5167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" HandleID="k8s-pod-network.9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--fgtct-eth0" May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:25.045 [INFO][5167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.051102 containerd[1466]: 2025-05-10 00:07:25.049 [INFO][5158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8" May 10 00:07:25.052358 containerd[1466]: time="2025-05-10T00:07:25.051125574Z" level=info msg="TearDown network for sandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\" successfully" May 10 00:07:25.061956 containerd[1466]: time="2025-05-10T00:07:25.061888750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:25.062202 containerd[1466]: time="2025-05-10T00:07:25.061969911Z" level=info msg="RemovePodSandbox \"9e7f6a301131fc054e2e3107a12bb32bc7d3ee8a0c17d6c184f390d9185779d8\" returns successfully" May 10 00:07:25.062804 containerd[1466]: time="2025-05-10T00:07:25.062743918Z" level=info msg="StopPodSandbox for \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\"" May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.111 [WARNING][5186] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"24f009df-8c19-4b55-8fef-4f458dd12e97", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1", Pod:"coredns-6f6b679f8f-bzj9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali437b71f0b5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.111 [INFO][5186] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.111 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" iface="eth0" netns="" May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.111 [INFO][5186] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.111 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.138 [INFO][5194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.138 [INFO][5194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.138 [INFO][5194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.149 [WARNING][5194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.149 [INFO][5194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.152 [INFO][5194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.155847 containerd[1466]: 2025-05-10 00:07:25.154 [INFO][5186] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:25.156649 containerd[1466]: time="2025-05-10T00:07:25.156487391Z" level=info msg="TearDown network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\" successfully" May 10 00:07:25.156649 containerd[1466]: time="2025-05-10T00:07:25.156525152Z" level=info msg="StopPodSandbox for \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\" returns successfully" May 10 00:07:25.157436 containerd[1466]: time="2025-05-10T00:07:25.157124357Z" level=info msg="RemovePodSandbox for \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\"" May 10 00:07:25.157436 containerd[1466]: time="2025-05-10T00:07:25.157159717Z" level=info msg="Forcibly stopping sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\"" May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.205 [WARNING][5213] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"24f009df-8c19-4b55-8fef-4f458dd12e97", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"b5b9db791a431347e46a3429d0ae522820e20cfc210baa4d23ec4561d5f564f1", Pod:"coredns-6f6b679f8f-bzj9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali437b71f0b5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.206 [INFO][5213] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.206 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" iface="eth0" netns="" May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.206 [INFO][5213] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.206 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.235 [INFO][5220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.235 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.235 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.247 [WARNING][5220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.247 [INFO][5220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" HandleID="k8s-pod-network.cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-coredns--6f6b679f8f--bzj9v-eth0" May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.249 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.253901 containerd[1466]: 2025-05-10 00:07:25.252 [INFO][5213] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43" May 10 00:07:25.253901 containerd[1466]: time="2025-05-10T00:07:25.253756417Z" level=info msg="TearDown network for sandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\" successfully" May 10 00:07:25.260493 containerd[1466]: time="2025-05-10T00:07:25.259729390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:25.260493 containerd[1466]: time="2025-05-10T00:07:25.259890311Z" level=info msg="RemovePodSandbox \"cb569cbf6ad55f68fe5a89a759c4a546a75d1f68d1fcf60e7de7e66c9855af43\" returns successfully" May 10 00:07:25.262059 containerd[1466]: time="2025-05-10T00:07:25.261313604Z" level=info msg="StopPodSandbox for \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\"" May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.314 [WARNING][5238] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0", GenerateName:"calico-kube-controllers-66fcf49574-", Namespace:"calico-system", SelfLink:"", UID:"a7e097d4-cfcc-4137-9338-932bad26e148", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fcf49574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8", Pod:"calico-kube-controllers-66fcf49574-dh78g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic15a4f46097", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.315 [INFO][5238] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.315 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" iface="eth0" netns="" May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.315 [INFO][5238] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.315 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.346 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.346 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.347 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.357 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.357 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.359 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.363100 containerd[1466]: 2025-05-10 00:07:25.360 [INFO][5238] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:25.363100 containerd[1466]: time="2025-05-10T00:07:25.362970468Z" level=info msg="TearDown network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\" successfully" May 10 00:07:25.363100 containerd[1466]: time="2025-05-10T00:07:25.362997308Z" level=info msg="StopPodSandbox for \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\" returns successfully" May 10 00:07:25.364267 containerd[1466]: time="2025-05-10T00:07:25.364183599Z" level=info msg="RemovePodSandbox for \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\"" May 10 00:07:25.364267 containerd[1466]: time="2025-05-10T00:07:25.364269080Z" level=info msg="Forcibly stopping sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\"" May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.409 [WARNING][5263] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0", GenerateName:"calico-kube-controllers-66fcf49574-", Namespace:"calico-system", SelfLink:"", UID:"a7e097d4-cfcc-4137-9338-932bad26e148", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fcf49574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"c8f1b2bdcf1bc56a2a64333798de1da6244a985b82b2a2dfd11754f55c0540d8", Pod:"calico-kube-controllers-66fcf49574-dh78g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic15a4f46097", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.410 [INFO][5263] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.410 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" iface="eth0" netns="" May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.410 [INFO][5263] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.410 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.435 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.435 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.435 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.447 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.447 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" HandleID="k8s-pod-network.09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--kube--controllers--66fcf49574--dh78g-eth0" May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.449 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.454016 containerd[1466]: 2025-05-10 00:07:25.451 [INFO][5263] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966" May 10 00:07:25.454016 containerd[1466]: time="2025-05-10T00:07:25.453975238Z" level=info msg="TearDown network for sandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\" successfully" May 10 00:07:25.458659 containerd[1466]: time="2025-05-10T00:07:25.458574999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:25.458812 containerd[1466]: time="2025-05-10T00:07:25.458729920Z" level=info msg="RemovePodSandbox \"09e2a0063f199a69a8277a8a73dba6a0a38e1d510153b63bda515f685cc9a966\" returns successfully" May 10 00:07:25.460012 containerd[1466]: time="2025-05-10T00:07:25.459517887Z" level=info msg="StopPodSandbox for \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\"" May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.510 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19603e28-9cab-4129-9122-e17f2cc348a8", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f", Pod:"csi-node-driver-kkznf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2166f89971f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.510 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.510 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" iface="eth0" netns="" May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.510 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.510 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.541 [INFO][5297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.541 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.542 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.559 [WARNING][5297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.559 [INFO][5297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.562 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.566225 containerd[1466]: 2025-05-10 00:07:25.564 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:25.568223 containerd[1466]: time="2025-05-10T00:07:25.566974563Z" level=info msg="TearDown network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\" successfully" May 10 00:07:25.568223 containerd[1466]: time="2025-05-10T00:07:25.567004283Z" level=info msg="StopPodSandbox for \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\" returns successfully" May 10 00:07:25.568223 containerd[1466]: time="2025-05-10T00:07:25.568140413Z" level=info msg="RemovePodSandbox for \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\"" May 10 00:07:25.568223 containerd[1466]: time="2025-05-10T00:07:25.568172373Z" level=info msg="Forcibly stopping sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\"" May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.640 [WARNING][5316] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19603e28-9cab-4129-9122-e17f2cc348a8", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"e12d3731c20f4363bf7702c9e691e362e8f1f3fee9b2bb078ed7354824e9a32f", Pod:"csi-node-driver-kkznf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2166f89971f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.641 [INFO][5316] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.641 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" iface="eth0" netns="" May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.641 [INFO][5316] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.641 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.662 [INFO][5323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.662 [INFO][5323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.662 [INFO][5323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.672 [WARNING][5323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.672 [INFO][5323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" HandleID="k8s-pod-network.7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-csi--node--driver--kkznf-eth0" May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.674 [INFO][5323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.677858 containerd[1466]: 2025-05-10 00:07:25.676 [INFO][5316] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936" May 10 00:07:25.679152 containerd[1466]: time="2025-05-10T00:07:25.678849758Z" level=info msg="TearDown network for sandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\" successfully" May 10 00:07:25.682450 containerd[1466]: time="2025-05-10T00:07:25.682389389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:25.682581 containerd[1466]: time="2025-05-10T00:07:25.682464750Z" level=info msg="RemovePodSandbox \"7a01eef1c747f7713fb9f07d5a7e9af53b03d65fc5ad05df54ef02cbb6b56936\" returns successfully" May 10 00:07:25.684145 containerd[1466]: time="2025-05-10T00:07:25.684109205Z" level=info msg="StopPodSandbox for \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\"" May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.736 [WARNING][5341] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0", GenerateName:"calico-apiserver-784576f4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784576f4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303", Pod:"calico-apiserver-784576f4d-7h6d7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc54fc59012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.736 [INFO][5341] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.736 [INFO][5341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" iface="eth0" netns="" May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.736 [INFO][5341] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.736 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.755 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.755 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.755 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.765 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.765 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.768 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.772937 containerd[1466]: 2025-05-10 00:07:25.770 [INFO][5341] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:25.773385 containerd[1466]: time="2025-05-10T00:07:25.773103556Z" level=info msg="TearDown network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\" successfully" May 10 00:07:25.773385 containerd[1466]: time="2025-05-10T00:07:25.773150197Z" level=info msg="StopPodSandbox for \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\" returns successfully" May 10 00:07:25.774872 containerd[1466]: time="2025-05-10T00:07:25.774813012Z" level=info msg="RemovePodSandbox for \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\"" May 10 00:07:25.775006 containerd[1466]: time="2025-05-10T00:07:25.774880132Z" level=info msg="Forcibly stopping sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\"" May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.823 [WARNING][5366] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0", GenerateName:"calico-apiserver-784576f4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"82dbd33f-4b0a-4dd0-8dd3-0c33f1f824b2", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784576f4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-7b3972f1ed", ContainerID:"43eeec510ecceca1c49c53abd622a08cf725a1e26b563f2e7f37b9337f936303", Pod:"calico-apiserver-784576f4d-7h6d7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc54fc59012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.823 [INFO][5366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.823 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" iface="eth0" netns="" May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.823 [INFO][5366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.823 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.845 [INFO][5373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.846 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.846 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.860 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.860 [INFO][5373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" HandleID="k8s-pod-network.94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" Workload="ci--4081--3--3--n--7b3972f1ed-k8s-calico--apiserver--784576f4d--7h6d7-eth0" May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.865 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.873201 containerd[1466]: 2025-05-10 00:07:25.869 [INFO][5366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db" May 10 00:07:25.874254 containerd[1466]: time="2025-05-10T00:07:25.873868053Z" level=info msg="TearDown network for sandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\" successfully" May 10 00:07:25.879575 containerd[1466]: time="2025-05-10T00:07:25.879500343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:25.879742 containerd[1466]: time="2025-05-10T00:07:25.879623344Z" level=info msg="RemovePodSandbox \"94da3496626f6b27cdb212bd0cef0ea14312bee753f6943ba1ed1db227f2e3db\" returns successfully" May 10 00:07:43.069675 systemd[1]: Started sshd@9-23.88.119.94:22-194.0.234.19:55518.service - OpenSSH per-connection server daemon (194.0.234.19:55518). May 10 00:07:43.806976 sshd[5452]: Invalid user Sujan from 194.0.234.19 port 55518 May 10 00:07:43.880540 sshd[5452]: Connection closed by invalid user Sujan 194.0.234.19 port 55518 [preauth] May 10 00:07:43.882619 systemd[1]: sshd@9-23.88.119.94:22-194.0.234.19:55518.service: Deactivated successfully. May 10 00:08:42.102199 systemd[1]: run-containerd-runc-k8s.io-142a952cfd79593370f0eac8b839e3fe939da101556733d72c93135142df696a-runc.bzpy10.mount: Deactivated successfully. May 10 00:09:05.544188 systemd[1]: run-containerd-runc-k8s.io-26ca02a30a25919e9087c4ac342aed4e22d8bc5a12d49d6e6352a62f9f1fe1fa-runc.S17kvv.mount: Deactivated successfully. May 10 00:09:30.471947 systemd[1]: Started sshd@10-23.88.119.94:22-41.61.20.210:53380.service - OpenSSH per-connection server daemon (41.61.20.210:53380). May 10 00:09:31.482763 sshd[5642]: Received disconnect from 41.61.20.210 port 53380:11: Bye Bye [preauth] May 10 00:09:31.482763 sshd[5642]: Disconnected from authenticating user root 41.61.20.210 port 53380 [preauth] May 10 00:09:31.486454 systemd[1]: sshd@10-23.88.119.94:22-41.61.20.210:53380.service: Deactivated successfully. May 10 00:09:42.097922 systemd[1]: run-containerd-runc-k8s.io-142a952cfd79593370f0eac8b839e3fe939da101556733d72c93135142df696a-runc.Rh0jo2.mount: Deactivated successfully. May 10 00:09:48.096426 systemd[1]: Started sshd@11-23.88.119.94:22-106.75.213.23:54566.service - OpenSSH per-connection server daemon (106.75.213.23:54566). May 10 00:09:48.995090 systemd[1]: Started sshd@12-23.88.119.94:22-51.79.250.84:40604.service - OpenSSH per-connection server daemon (51.79.250.84:40604). 
May 10 00:09:50.043087 sshd[5720]: Invalid user varsha from 51.79.250.84 port 40604 May 10 00:09:50.237053 sshd[5720]: Received disconnect from 51.79.250.84 port 40604:11: Bye Bye [preauth] May 10 00:09:50.237053 sshd[5720]: Disconnected from invalid user varsha 51.79.250.84 port 40604 [preauth] May 10 00:09:50.240446 systemd[1]: sshd@12-23.88.119.94:22-51.79.250.84:40604.service: Deactivated successfully. May 10 00:10:42.101500 systemd[1]: run-containerd-runc-k8s.io-142a952cfd79593370f0eac8b839e3fe939da101556733d72c93135142df696a-runc.OT6Bde.mount: Deactivated successfully. May 10 00:11:04.987019 update_engine[1452]: I20250510 00:11:04.986934 1452 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 10 00:11:04.987019 update_engine[1452]: I20250510 00:11:04.987017 1452 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 10 00:11:04.989552 update_engine[1452]: I20250510 00:11:04.987668 1452 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 10 00:11:04.989552 update_engine[1452]: I20250510 00:11:04.989279 1452 omaha_request_params.cc:62] Current group set to lts May 10 00:11:04.989552 update_engine[1452]: I20250510 00:11:04.989391 1452 update_attempter.cc:499] Already updated boot flags. Skipping. May 10 00:11:04.989552 update_engine[1452]: I20250510 00:11:04.989403 1452 update_attempter.cc:643] Scheduling an action processor start. May 10 00:11:04.989552 update_engine[1452]: I20250510 00:11:04.989421 1452 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 10 00:11:04.990073 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 10 00:11:04.990371 update_engine[1452]: I20250510 00:11:04.990149 1452 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 10 00:11:04.990371 update_engine[1452]: I20250510 00:11:04.990239 1452 omaha_request_action.cc:271] Posting an Omaha request to disabled May 10 00:11:04.990371 update_engine[1452]: I20250510 00:11:04.990279 1452 omaha_request_action.cc:272] Request: May 10 00:11:04.990371 update_engine[1452]: May 10 00:11:04.990371 update_engine[1452]: May 10 00:11:04.990371 update_engine[1452]: May 10 00:11:04.990371 update_engine[1452]: May 10 00:11:04.990371 update_engine[1452]: May 10 00:11:04.990371 update_engine[1452]: May 10 00:11:04.990371 update_engine[1452]: May 10 00:11:04.990371 update_engine[1452]: May 10 00:11:04.990371 update_engine[1452]: I20250510 00:11:04.990288 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:11:04.993930 update_engine[1452]: I20250510 00:11:04.993485 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:11:04.993930 update_engine[1452]: I20250510 00:11:04.993868 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:11:04.995098 update_engine[1452]: E20250510 00:11:04.994993 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:11:04.995098 update_engine[1452]: I20250510 00:11:04.995069 1452 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 10 00:11:08.034999 systemd[1]: Started sshd@13-23.88.119.94:22-147.75.109.163:44862.service - OpenSSH per-connection server daemon (147.75.109.163:44862). 
May 10 00:11:09.046851 sshd[5877]: Accepted publickey for core from 147.75.109.163 port 44862 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:09.050129 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:09.055452 systemd-logind[1451]: New session 8 of user core. May 10 00:11:09.060945 systemd[1]: Started session-8.scope - Session 8 of User core. May 10 00:11:09.834465 sshd[5877]: pam_unix(sshd:session): session closed for user core May 10 00:11:09.842449 systemd[1]: sshd@13-23.88.119.94:22-147.75.109.163:44862.service: Deactivated successfully. May 10 00:11:09.848209 systemd[1]: session-8.scope: Deactivated successfully. May 10 00:11:09.850488 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. May 10 00:11:09.852584 systemd-logind[1451]: Removed session 8. May 10 00:11:14.895561 update_engine[1452]: I20250510 00:11:14.894865 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:11:14.895561 update_engine[1452]: I20250510 00:11:14.895182 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:11:14.895561 update_engine[1452]: I20250510 00:11:14.895494 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:11:14.897042 update_engine[1452]: E20250510 00:11:14.897001 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:11:14.897224 update_engine[1452]: I20250510 00:11:14.897197 1452 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 10 00:11:15.017091 systemd[1]: Started sshd@14-23.88.119.94:22-147.75.109.163:44870.service - OpenSSH per-connection server daemon (147.75.109.163:44870). May 10 00:11:16.034877 sshd[5913]: Accepted publickey for core from 147.75.109.163 port 44870 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:16.036932 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:16.043621 systemd-logind[1451]: New session 9 of user core. May 10 00:11:16.054053 systemd[1]: Started session-9.scope - Session 9 of User core. May 10 00:11:16.813388 sshd[5913]: pam_unix(sshd:session): session closed for user core May 10 00:11:16.820124 systemd[1]: sshd@14-23.88.119.94:22-147.75.109.163:44870.service: Deactivated successfully. May 10 00:11:16.823866 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:11:16.825664 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. May 10 00:11:16.827225 systemd-logind[1451]: Removed session 9. May 10 00:11:22.000088 systemd[1]: Started sshd@15-23.88.119.94:22-147.75.109.163:38096.service - OpenSSH per-connection server daemon (147.75.109.163:38096). May 10 00:11:23.008201 sshd[5927]: Accepted publickey for core from 147.75.109.163 port 38096 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:23.011801 sshd[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:23.016767 systemd-logind[1451]: New session 10 of user core. May 10 00:11:23.026039 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 00:11:23.787588 sshd[5927]: pam_unix(sshd:session): session closed for user core May 10 00:11:23.793525 systemd[1]: sshd@15-23.88.119.94:22-147.75.109.163:38096.service: Deactivated successfully. May 10 00:11:23.797979 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:11:23.799073 systemd-logind[1451]: Session 10 logged out. 
Waiting for processes to exit. May 10 00:11:23.802958 systemd-logind[1451]: Removed session 10. May 10 00:11:23.973168 systemd[1]: Started sshd@16-23.88.119.94:22-147.75.109.163:38106.service - OpenSSH per-connection server daemon (147.75.109.163:38106). May 10 00:11:24.891327 update_engine[1452]: I20250510 00:11:24.890743 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:11:24.891327 update_engine[1452]: I20250510 00:11:24.891036 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:11:24.891327 update_engine[1452]: I20250510 00:11:24.891280 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:11:24.892239 update_engine[1452]: E20250510 00:11:24.892210 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:11:24.892357 update_engine[1452]: I20250510 00:11:24.892338 1452 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 10 00:11:24.985069 sshd[5941]: Accepted publickey for core from 147.75.109.163 port 38106 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:24.987583 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:24.993249 systemd-logind[1451]: New session 11 of user core. May 10 00:11:25.002091 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 00:11:25.827572 sshd[5941]: pam_unix(sshd:session): session closed for user core May 10 00:11:25.833688 systemd[1]: sshd@16-23.88.119.94:22-147.75.109.163:38106.service: Deactivated successfully. May 10 00:11:25.834123 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. May 10 00:11:25.835665 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:11:25.840576 systemd-logind[1451]: Removed session 11. May 10 00:11:26.001145 systemd[1]: Started sshd@17-23.88.119.94:22-147.75.109.163:38110.service - OpenSSH per-connection server daemon (147.75.109.163:38110). May 10 00:11:27.006197 sshd[5958]: Accepted publickey for core from 147.75.109.163 port 38110 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:27.008634 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:27.014238 systemd-logind[1451]: New session 12 of user core. May 10 00:11:27.022064 systemd[1]: Started session-12.scope - Session 12 of User core. May 10 00:11:27.773117 sshd[5958]: pam_unix(sshd:session): session closed for user core May 10 00:11:27.777924 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. May 10 00:11:27.778070 systemd[1]: sshd@17-23.88.119.94:22-147.75.109.163:38110.service: Deactivated successfully. May 10 00:11:27.781454 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:11:27.784509 systemd-logind[1451]: Removed session 12. May 10 00:11:32.945427 systemd[1]: Started sshd@18-23.88.119.94:22-147.75.109.163:46660.service - OpenSSH per-connection server daemon (147.75.109.163:46660). May 10 00:11:33.942994 sshd[5973]: Accepted publickey for core from 147.75.109.163 port 46660 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:33.946492 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:33.951612 systemd-logind[1451]: New session 13 of user core. May 10 00:11:33.955912 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 10 00:11:34.717222 sshd[5973]: pam_unix(sshd:session): session closed for user core May 10 00:11:34.721751 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. May 10 00:11:34.722081 systemd[1]: sshd@18-23.88.119.94:22-147.75.109.163:46660.service: Deactivated successfully. May 10 00:11:34.724146 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:11:34.725687 systemd-logind[1451]: Removed session 13. May 10 00:11:34.891437 update_engine[1452]: I20250510 00:11:34.890681 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:11:34.891437 update_engine[1452]: I20250510 00:11:34.890995 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:11:34.891437 update_engine[1452]: I20250510 00:11:34.891212 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:11:34.891079 systemd[1]: Started sshd@19-23.88.119.94:22-147.75.109.163:46674.service - OpenSSH per-connection server daemon (147.75.109.163:46674). May 10 00:11:34.894745 update_engine[1452]: E20250510 00:11:34.894035 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894093 1452 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894101 1452 omaha_request_action.cc:617] Omaha request response: May 10 00:11:34.894745 update_engine[1452]: E20250510 00:11:34.894176 1452 omaha_request_action.cc:636] Omaha request network transfer failed. May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894194 1452 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894201 1452 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894207 1452 update_attempter.cc:306] Processing Done. May 10 00:11:34.894745 update_engine[1452]: E20250510 00:11:34.894222 1452 update_attempter.cc:619] Update failed. May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894227 1452 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894233 1452 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894239 1452 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894302 1452 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894325 1452 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 10 00:11:34.894745 update_engine[1452]: I20250510 00:11:34.894330 1452 omaha_request_action.cc:272] Request:
May 10 00:11:34.894745 update_engine[1452]:
May 10 00:11:34.894745 update_engine[1452]:
May 10 00:11:34.895381 update_engine[1452]:
May 10 00:11:34.895381 update_engine[1452]:
May 10 00:11:34.895381 update_engine[1452]:
May 10 00:11:34.895381 update_engine[1452]:
May 10 00:11:34.895381 update_engine[1452]: I20250510 00:11:34.894336 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 10 00:11:34.895381 update_engine[1452]: I20250510 00:11:34.894474 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 10 00:11:34.895381 update_engine[1452]: I20250510 00:11:34.894641 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 10 00:11:34.896406 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 10 00:11:34.897251 update_engine[1452]: E20250510 00:11:34.896927 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 10 00:11:34.897251 update_engine[1452]: I20250510 00:11:34.896984 1452 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 10 00:11:34.897251 update_engine[1452]: I20250510 00:11:34.896993 1452 omaha_request_action.cc:617] Omaha request response:
May 10 00:11:34.897251 update_engine[1452]: I20250510 00:11:34.897001 1452 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 10 00:11:34.897251 update_engine[1452]: I20250510 00:11:34.897006 1452 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 10 00:11:34.897251 update_engine[1452]: I20250510 00:11:34.897055 1452 update_attempter.cc:306] Processing Done.
May 10 00:11:34.897251 update_engine[1452]: I20250510 00:11:34.897063 1452 update_attempter.cc:310] Error event sent.
May 10 00:11:34.897251 update_engine[1452]: I20250510 00:11:34.897073 1452 update_check_scheduler.cc:74] Next update check in 40m10s
May 10 00:11:34.897676 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 10 00:11:35.270150 systemd[1]: run-containerd-runc-k8s.io-26ca02a30a25919e9087c4ac342aed4e22d8bc5a12d49d6e6352a62f9f1fe1fa-runc.bs8Y6N.mount: Deactivated successfully.
May 10 00:11:35.886302 sshd[5985]: Accepted publickey for core from 147.75.109.163 port 46674 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:35.888903 sshd[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:35.893897 systemd-logind[1451]: New session 14 of user core.
May 10 00:11:35.909092 systemd[1]: Started session-14.scope - Session 14 of User core.
May 10 00:11:36.765100 sshd[5985]: pam_unix(sshd:session): session closed for user core
May 10 00:11:36.771027 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit.
May 10 00:11:36.771980 systemd[1]: sshd@19-23.88.119.94:22-147.75.109.163:46674.service: Deactivated successfully.
May 10 00:11:36.774883 systemd[1]: session-14.scope: Deactivated successfully.
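[Note: the repeated "Could not resolve host: disabled" failures above are expected, not a DNS outage. On Flatcar, automatic updates are commonly switched off by setting the Omaha update server to the literal string "disabled"; update_engine then tries to resolve "disabled" as a hostname, fails, converts the failure to error 37 (kActionCodeOmahaErrorInHTTPResponse), and backs off ("Next update check in 40m10s"). The blank update_engine lines after "Request:" are where the XML request body was in the original journal; it is not recoverable from this capture. A minimal sketch of such a configuration, assuming the stock file location from Flatcar's update documentation (the GROUP value is illustrative):

    # /etc/flatcar/update.conf
    GROUP=stable
    SERVER=disabled

On a live host, update_engine_client -status should report the same state the locksmithd lines mirror here (LAST_CHECKED_TIME, CURRENT_OP, NEW_VERSION, NEW_SIZE).]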
May 10 00:11:36.776500 systemd-logind[1451]: Removed session 14.
May 10 00:11:36.952531 systemd[1]: Started sshd@20-23.88.119.94:22-147.75.109.163:46688.service - OpenSSH per-connection server daemon (147.75.109.163:46688).
May 10 00:11:37.964979 sshd[6036]: Accepted publickey for core from 147.75.109.163 port 46688 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:37.967225 sshd[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:37.974460 systemd-logind[1451]: New session 15 of user core.
May 10 00:11:37.983238 systemd[1]: Started session-15.scope - Session 15 of User core.
May 10 00:11:40.626606 sshd[6036]: pam_unix(sshd:session): session closed for user core
May 10 00:11:40.633324 systemd[1]: sshd@20-23.88.119.94:22-147.75.109.163:46688.service: Deactivated successfully.
May 10 00:11:40.637656 systemd[1]: session-15.scope: Deactivated successfully.
May 10 00:11:40.639416 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit.
May 10 00:11:40.642470 systemd-logind[1451]: Removed session 15.
May 10 00:11:40.810305 systemd[1]: Started sshd@21-23.88.119.94:22-147.75.109.163:45346.service - OpenSSH per-connection server daemon (147.75.109.163:45346).
May 10 00:11:41.811190 sshd[6060]: Accepted publickey for core from 147.75.109.163 port 45346 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:41.813464 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:41.818148 systemd-logind[1451]: New session 16 of user core.
May 10 00:11:41.824949 systemd[1]: Started session-16.scope - Session 16 of User core.
May 10 00:11:42.764060 sshd[6060]: pam_unix(sshd:session): session closed for user core
May 10 00:11:42.771787 systemd[1]: sshd@21-23.88.119.94:22-147.75.109.163:45346.service: Deactivated successfully.
May 10 00:11:42.775013 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:11:42.778227 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit.
May 10 00:11:42.779935 systemd-logind[1451]: Removed session 16.
May 10 00:11:42.946112 systemd[1]: Started sshd@22-23.88.119.94:22-147.75.109.163:45352.service - OpenSSH per-connection server daemon (147.75.109.163:45352).
May 10 00:11:43.944299 sshd[6095]: Accepted publickey for core from 147.75.109.163 port 45352 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:43.946542 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:43.951162 systemd-logind[1451]: New session 17 of user core.
May 10 00:11:43.960769 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 00:11:44.731282 sshd[6095]: pam_unix(sshd:session): session closed for user core
May 10 00:11:44.735140 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
May 10 00:11:44.737549 systemd[1]: sshd@22-23.88.119.94:22-147.75.109.163:45352.service: Deactivated successfully.
May 10 00:11:44.739359 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:11:44.740409 systemd-logind[1451]: Removed session 17.
May 10 00:11:48.120370 systemd[1]: sshd@11-23.88.119.94:22-106.75.213.23:54566.service: Deactivated successfully.
May 10 00:11:49.914220 systemd[1]: Started sshd@23-23.88.119.94:22-147.75.109.163:37350.service - OpenSSH per-connection server daemon (147.75.109.163:37350).
May 10 00:11:50.934509 sshd[6125]: Accepted publickey for core from 147.75.109.163 port 37350 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:50.938164 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:50.944533 systemd-logind[1451]: New session 18 of user core.
May 10 00:11:50.952993 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 00:11:51.708405 sshd[6125]: pam_unix(sshd:session): session closed for user core
May 10 00:11:51.714324 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
May 10 00:11:51.714935 systemd[1]: sshd@23-23.88.119.94:22-147.75.109.163:37350.service: Deactivated successfully.
May 10 00:11:51.720656 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:11:51.723369 systemd-logind[1451]: Removed session 18.
May 10 00:11:56.894069 systemd[1]: Started sshd@24-23.88.119.94:22-147.75.109.163:37360.service - OpenSSH per-connection server daemon (147.75.109.163:37360).
May 10 00:11:57.904516 sshd[6138]: Accepted publickey for core from 147.75.109.163 port 37360 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:57.907990 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:57.916193 systemd-logind[1451]: New session 19 of user core.
May 10 00:11:57.917922 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 00:11:58.673269 sshd[6138]: pam_unix(sshd:session): session closed for user core
May 10 00:11:58.681361 systemd[1]: sshd@24-23.88.119.94:22-147.75.109.163:37360.service: Deactivated successfully.
May 10 00:11:58.684013 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:11:58.685380 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
May 10 00:11:58.687082 systemd-logind[1451]: Removed session 19.
May 10 00:12:01.687470 systemd[1]: Started sshd@25-23.88.119.94:22-180.93.172.127:36966.service - OpenSSH per-connection server daemon (180.93.172.127:36966).
May 10 00:12:02.862272 sshd[6151]: Invalid user darwin from 180.93.172.127 port 36966
May 10 00:12:03.081170 sshd[6151]: Received disconnect from 180.93.172.127 port 36966:11: Bye Bye [preauth]
May 10 00:12:03.081170 sshd[6151]: Disconnected from invalid user darwin 180.93.172.127 port 36966 [preauth]
May 10 00:12:03.083738 systemd[1]: sshd@25-23.88.119.94:22-180.93.172.127:36966.service: Deactivated successfully.
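[Note: the final exchange is a routine brute-force probe rather than a legitimate login: 180.93.172.127 connects, offers the nonexistent user "darwin", and drops the connection before authenticating (the [preauth] marker), after which systemd reaps the per-connection unit. On a journald-only host like this one, such probes can be audited by syslog identifier; a small sketch, with the time window and grep patterns being illustrative choices:

    # count invalid-user attempts seen by sshd in the last day
    journalctl -t sshd --since "1 day ago" | grep -c 'Invalid user'
    # show which usernames and source addresses were probed
    journalctl -t sshd --since "1 day ago" | grep 'Disconnected from invalid user'
]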