May 10 00:03:49.889526 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 10 00:03:49.889550 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 10 00:03:49.889560 kernel: KASLR enabled
May 10 00:03:49.889566 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 10 00:03:49.889572 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
May 10 00:03:49.889577 kernel: random: crng init done
May 10 00:03:49.889651 kernel: ACPI: Early table checksum verification disabled
May 10 00:03:49.889658 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 10 00:03:49.889665 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 10 00:03:49.889673 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889679 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889685 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889691 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889697 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889704 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889718 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889725 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889731 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:03:49.889738 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 10 00:03:49.889744 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 10 00:03:49.889750 kernel: NUMA: Failed to initialise from firmware
May 10 00:03:49.889756 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 10 00:03:49.889763 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
May 10 00:03:49.889769 kernel: Zone ranges:
May 10 00:03:49.889775 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
May 10 00:03:49.889783 kernel:   DMA32    empty
May 10 00:03:49.889790 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
May 10 00:03:49.889796 kernel: Movable zone start for each node
May 10 00:03:49.889812 kernel: Early memory node ranges
May 10 00:03:49.889818 kernel:   node   0: [mem 0x0000000040000000-0x000000013676ffff]
May 10 00:03:49.889825 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 10 00:03:49.889831 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 10 00:03:49.889837 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 10 00:03:49.889844 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 10 00:03:49.889850 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 10 00:03:49.889856 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 10 00:03:49.889862 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 10 00:03:49.889871 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 10 00:03:49.889877 kernel: psci: probing for conduit method from ACPI.
May 10 00:03:49.889883 kernel: psci: PSCIv1.1 detected in firmware.
May 10 00:03:49.889892 kernel: psci: Using standard PSCI v0.2 function IDs
May 10 00:03:49.889899 kernel: psci: Trusted OS migration not required
May 10 00:03:49.889905 kernel: psci: SMC Calling Convention v1.1
May 10 00:03:49.889914 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 10 00:03:49.889920 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 10 00:03:49.889927 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 10 00:03:49.889934 kernel: pcpu-alloc: [0] 0 [0] 1
May 10 00:03:49.889940 kernel: Detected PIPT I-cache on CPU0
May 10 00:03:49.889947 kernel: CPU features: detected: GIC system register CPU interface
May 10 00:03:49.889954 kernel: CPU features: detected: Hardware dirty bit management
May 10 00:03:49.889960 kernel: CPU features: detected: Spectre-v4
May 10 00:03:49.889967 kernel: CPU features: detected: Spectre-BHB
May 10 00:03:49.889973 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 10 00:03:49.889982 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 10 00:03:49.889988 kernel: CPU features: detected: ARM erratum 1418040
May 10 00:03:49.889995 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 10 00:03:49.890002 kernel: alternatives: applying boot alternatives
May 10 00:03:49.890010 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 10 00:03:49.890017 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 00:03:49.890024 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 10 00:03:49.890031 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 10 00:03:49.890037 kernel: Fallback order for Node 0: 0
May 10 00:03:49.890044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 10 00:03:49.890051 kernel: Policy zone: Normal
May 10 00:03:49.890059 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 00:03:49.890065 kernel: software IO TLB: area num 2.
May 10 00:03:49.890072 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 10 00:03:49.890079 kernel: Memory: 3882804K/4096000K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213196K reserved, 0K cma-reserved)
May 10 00:03:49.890086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 10 00:03:49.890093 kernel: rcu: Preemptible hierarchical RCU implementation.
May 10 00:03:49.890101 kernel: rcu: RCU event tracing is enabled.
May 10 00:03:49.890107 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 10 00:03:49.890114 kernel: Trampoline variant of Tasks RCU enabled.
May 10 00:03:49.890121 kernel: Tracing variant of Tasks RCU enabled.
May 10 00:03:49.890128 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 00:03:49.890136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 10 00:03:49.890143 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 10 00:03:49.890149 kernel: GICv3: 256 SPIs implemented
May 10 00:03:49.890156 kernel: GICv3: 0 Extended SPIs implemented
May 10 00:03:49.890162 kernel: Root IRQ handler: gic_handle_irq
May 10 00:03:49.890169 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 10 00:03:49.890176 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 10 00:03:49.890182 kernel: ITS [mem 0x08080000-0x0809ffff]
May 10 00:03:49.890189 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 10 00:03:49.890196 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 10 00:03:49.890203 kernel: GICv3: using LPI property table @0x00000001000e0000
May 10 00:03:49.890210 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 10 00:03:49.890218 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 10 00:03:49.890224 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 10 00:03:49.890231 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 10 00:03:49.890238 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 10 00:03:49.890245 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 10 00:03:49.890282 kernel: Console: colour dummy device 80x25
May 10 00:03:49.890290 kernel: ACPI: Core revision 20230628
May 10 00:03:49.890297 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 10 00:03:49.890304 kernel: pid_max: default: 32768 minimum: 301
May 10 00:03:49.890311 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 10 00:03:49.890321 kernel: landlock: Up and running.
May 10 00:03:49.890328 kernel: SELinux: Initializing.
May 10 00:03:49.890335 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 00:03:49.890342 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 00:03:49.890349 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 10 00:03:49.890356 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 10 00:03:49.890363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 10 00:03:49.890370 kernel: rcu: Hierarchical SRCU implementation.
May 10 00:03:49.890377 kernel: rcu: Max phase no-delay instances is 400.
May 10 00:03:49.890385 kernel: Platform MSI: ITS@0x8080000 domain created
May 10 00:03:49.890392 kernel: PCI/MSI: ITS@0x8080000 domain created
May 10 00:03:49.890404 kernel: Remapping and enabling EFI services.
May 10 00:03:49.890415 kernel: smp: Bringing up secondary CPUs ...
May 10 00:03:49.890424 kernel: Detected PIPT I-cache on CPU1
May 10 00:03:49.890431 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 10 00:03:49.890438 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 10 00:03:49.890445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 10 00:03:49.890452 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 10 00:03:49.890460 kernel: smp: Brought up 1 node, 2 CPUs
May 10 00:03:49.890467 kernel: SMP: Total of 2 processors activated.
May 10 00:03:49.890474 kernel: CPU features: detected: 32-bit EL0 Support
May 10 00:03:49.890487 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 10 00:03:49.890495 kernel: CPU features: detected: Common not Private translations
May 10 00:03:49.890502 kernel: CPU features: detected: CRC32 instructions
May 10 00:03:49.890509 kernel: CPU features: detected: Enhanced Virtualization Traps
May 10 00:03:49.890517 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 10 00:03:49.890524 kernel: CPU features: detected: LSE atomic instructions
May 10 00:03:49.890531 kernel: CPU features: detected: Privileged Access Never
May 10 00:03:49.890538 kernel: CPU features: detected: RAS Extension Support
May 10 00:03:49.890547 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 10 00:03:49.890554 kernel: CPU: All CPU(s) started at EL1
May 10 00:03:49.890562 kernel: alternatives: applying system-wide alternatives
May 10 00:03:49.890569 kernel: devtmpfs: initialized
May 10 00:03:49.890576 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 00:03:49.890583 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 10 00:03:49.890592 kernel: pinctrl core: initialized pinctrl subsystem
May 10 00:03:49.890599 kernel: SMBIOS 3.0.0 present.
May 10 00:03:49.890607 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 10 00:03:49.890614 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 00:03:49.890621 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 10 00:03:49.890629 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 10 00:03:49.890636 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 10 00:03:49.890643 kernel: audit: initializing netlink subsys (disabled)
May 10 00:03:49.890650 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
May 10 00:03:49.890659 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 00:03:49.890666 kernel: cpuidle: using governor menu
May 10 00:03:49.890674 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 10 00:03:49.890681 kernel: ASID allocator initialised with 32768 entries
May 10 00:03:49.890688 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 00:03:49.890695 kernel: Serial: AMBA PL011 UART driver
May 10 00:03:49.890702 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 10 00:03:49.890710 kernel: Modules: 0 pages in range for non-PLT usage
May 10 00:03:49.890717 kernel: Modules: 509008 pages in range for PLT usage
May 10 00:03:49.890725 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 10 00:03:49.890733 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 10 00:03:49.890740 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 10 00:03:49.890747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 10 00:03:49.890755 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 10 00:03:49.890762 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 10 00:03:49.890769 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 10 00:03:49.890776 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 10 00:03:49.890783 kernel: ACPI: Added _OSI(Module Device)
May 10 00:03:49.890791 kernel: ACPI: Added _OSI(Processor Device)
May 10 00:03:49.890799 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 00:03:49.890813 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 00:03:49.890821 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 10 00:03:49.890828 kernel: ACPI: Interpreter enabled
May 10 00:03:49.890835 kernel: ACPI: Using GIC for interrupt routing
May 10 00:03:49.890842 kernel: ACPI: MCFG table detected, 1 entries
May 10 00:03:49.890849 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 10 00:03:49.890857 kernel: printk: console [ttyAMA0] enabled
May 10 00:03:49.890867 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 00:03:49.891022 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 10 00:03:49.891095 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 10 00:03:49.891159 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 10 00:03:49.891223 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 10 00:03:49.891346 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 10 00:03:49.891358 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 10 00:03:49.891369 kernel: PCI host bridge to bus 0000:00
May 10 00:03:49.891439 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 10 00:03:49.891499 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 10 00:03:49.891557 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 10 00:03:49.891615 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 00:03:49.891694 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 10 00:03:49.891778 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 10 00:03:49.891890 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 10 00:03:49.891964 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 10 00:03:49.892044 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.892112 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 10 00:03:49.892183 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.894403 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 10 00:03:49.894504 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.894571 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 10 00:03:49.894646 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.894713 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 10 00:03:49.894785 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.894875 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 10 00:03:49.894957 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.895024 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 10 00:03:49.895096 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.895162 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 10 00:03:49.895233 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.896118 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 10 00:03:49.896214 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 10 00:03:49.897378 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 10 00:03:49.897468 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 10 00:03:49.897534 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 10 00:03:49.897614 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 10 00:03:49.897683 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 10 00:03:49.897758 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 10 00:03:49.897875 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 10 00:03:49.897958 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 10 00:03:49.898029 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 10 00:03:49.898105 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 10 00:03:49.898174 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 10 00:03:49.898242 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 10 00:03:49.899398 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 10 00:03:49.899473 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 10 00:03:49.899558 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 10 00:03:49.899627 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 10 00:03:49.899692 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 10 00:03:49.899767 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 10 00:03:49.899856 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 10 00:03:49.899936 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 10 00:03:49.900011 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 10 00:03:49.900081 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 10 00:03:49.900148 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 10 00:03:49.902435 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 10 00:03:49.902593 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 10 00:03:49.902711 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 10 00:03:49.902840 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 10 00:03:49.902960 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 10 00:03:49.903065 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 10 00:03:49.903169 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 10 00:03:49.904635 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 10 00:03:49.904728 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 10 00:03:49.904814 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 10 00:03:49.904892 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 10 00:03:49.904958 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 10 00:03:49.905022 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 10 00:03:49.905090 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 10 00:03:49.905155 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 10 00:03:49.905231 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 10 00:03:49.905330 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 10 00:03:49.905399 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 10 00:03:49.905462 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 10 00:03:49.905531 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 10 00:03:49.905595 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 10 00:03:49.905663 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 10 00:03:49.905731 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 10 00:03:49.905798 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 10 00:03:49.905881 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 10 00:03:49.905952 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 10 00:03:49.906019 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 10 00:03:49.906084 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 10 00:03:49.906150 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 10 00:03:49.906215 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:03:49.906317 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 10 00:03:49.906389 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:03:49.906454 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 10 00:03:49.906519 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:03:49.906584 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 10 00:03:49.906649 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:03:49.906714 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 10 00:03:49.906778 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:03:49.906891 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 10 00:03:49.906961 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:03:49.907028 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 10 00:03:49.907092 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:03:49.907158 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 10 00:03:49.907223 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:03:49.907900 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 10 00:03:49.907990 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:03:49.908061 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 10 00:03:49.908125 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 10 00:03:49.908194 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 10 00:03:49.909188 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 10 00:03:49.910320 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 10 00:03:49.910398 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 10 00:03:49.910473 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 10 00:03:49.910538 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 10 00:03:49.910603 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 10 00:03:49.910667 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 10 00:03:49.910732 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 10 00:03:49.910795 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 10 00:03:49.910898 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 10 00:03:49.910964 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 10 00:03:49.911036 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 10 00:03:49.911099 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 10 00:03:49.911167 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 10 00:03:49.911230 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 10 00:03:49.912360 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 10 00:03:49.912434 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 10 00:03:49.912503 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 10 00:03:49.912576 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 10 00:03:49.912649 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 10 00:03:49.912715 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 10 00:03:49.912781 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 10 00:03:49.912866 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 10 00:03:49.912932 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 10 00:03:49.912997 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:03:49.913068 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 10 00:03:49.913140 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 10 00:03:49.913204 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 10 00:03:49.913283 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 10 00:03:49.913368 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:03:49.913442 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 10 00:03:49.913514 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 10 00:03:49.913579 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 10 00:03:49.913643 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 10 00:03:49.913706 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 10 00:03:49.913770 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:03:49.913858 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 10 00:03:49.913927 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 10 00:03:49.913991 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 10 00:03:49.914058 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 10 00:03:49.914122 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:03:49.914193 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 10 00:03:49.914308 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 10 00:03:49.914381 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 10 00:03:49.914468 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 10 00:03:49.914536 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 10 00:03:49.914601 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:03:49.914678 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 10 00:03:49.914744 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 10 00:03:49.914847 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 10 00:03:49.914925 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 10 00:03:49.914992 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 10 00:03:49.915057 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:03:49.915131 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 10 00:03:49.915199 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 10 00:03:49.915375 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 10 00:03:49.915449 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 10 00:03:49.915513 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 10 00:03:49.915575 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 10 00:03:49.915637 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:03:49.915702 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 10 00:03:49.915787 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 10 00:03:49.915872 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 10 00:03:49.915943 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:03:49.916009 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 10 00:03:49.916072 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 10 00:03:49.916135 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 10 00:03:49.916198 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:03:49.916275 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 10 00:03:49.916350 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 10 00:03:49.916410 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 10 00:03:49.916486 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 10 00:03:49.916547 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 10 00:03:49.916688 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:03:49.916769 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 10 00:03:49.916846 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 10 00:03:49.916908 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:03:49.916980 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 10 00:03:49.917043 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 10 00:03:49.917112 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:03:49.917179 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 10 00:03:49.917240 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 10 00:03:49.917357 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:03:49.917425 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 10 00:03:49.917488 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 10 00:03:49.917546 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:03:49.917611 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 10 00:03:49.917671 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 10 00:03:49.917734 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:03:49.917846 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 10 00:03:49.917917 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 10 00:03:49.917975 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:03:49.918042 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
May 10 00:03:49.918103 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
May 10 00:03:49.918162 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:03:49.918234 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
May 10 00:03:49.918343 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
May 10 00:03:49.918409 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:03:49.918419 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 10 00:03:49.918427 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 10 00:03:49.918435 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 10 00:03:49.918443 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 10 00:03:49.918454 kernel: iommu: Default domain type: Translated
May 10 00:03:49.918464 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 10 00:03:49.918474 kernel: efivars: Registered efivars operations
May 10 00:03:49.918483 kernel: vgaarb: loaded
May 10 00:03:49.918492 kernel: clocksource: Switched to clocksource arch_sys_counter
May 10 00:03:49.918501 kernel: VFS: Disk quotas dquot_6.6.0
May 10 00:03:49.918510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 00:03:49.918519 kernel: pnp: PnP ACPI init
May 10 00:03:49.918609 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 10 00:03:49.918623 kernel: pnp: PnP ACPI: found 1 devices
May 10 00:03:49.918631 kernel: NET: Registered PF_INET protocol family
May 10 00:03:49.918639 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 10 00:03:49.918647 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 10 00:03:49.918655 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 00:03:49.918663 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 10 00:03:49.918671 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 10 00:03:49.918679 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 10 00:03:49.918686 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 00:03:49.918696 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 00:03:49.918704 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 00:03:49.918777 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
May 10 00:03:49.918788 kernel: PCI: CLS 0 bytes, default 64
May 10 00:03:49.918796 kernel: kvm [1]: HYP mode not available
May 10 00:03:49.918813 kernel: Initialise system trusted keyrings
May 10 00:03:49.918822 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 10 00:03:49.918829 kernel: Key type asymmetric registered
May 10 00:03:49.918837 kernel: Asymmetric key parser 'x509' registered
May 10 00:03:49.918847 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 10 00:03:49.918855 kernel: io scheduler mq-deadline registered
May 10 00:03:49.918862 kernel: io scheduler kyber registered
May 10 00:03:49.918870 kernel: io scheduler bfq registered
May 10 00:03:49.918878 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 10 00:03:49.918951 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
May 10 00:03:49.919020 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
May 10 00:03:49.919085 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:03:49.919155 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
May 10 00:03:49.919221 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
May 10 00:03:49.920539 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ May 10 00:03:49.920625 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 10 00:03:49.920690 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 10 00:03:49.920755 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.920850 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 10 00:03:49.920918 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 10 00:03:49.920993 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.921060 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 10 00:03:49.921127 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 10 00:03:49.921195 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.921276 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 10 00:03:49.921343 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 10 00:03:49.921408 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.921475 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 10 00:03:49.921538 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 10 00:03:49.921604 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.921674 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 10 00:03:49.921741 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 10 00:03:49.921859 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 
00:03:49.921874 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 10 00:03:49.921952 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 10 00:03:49.922020 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 10 00:03:49.922090 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.922101 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 10 00:03:49.922110 kernel: ACPI: button: Power Button [PWRB] May 10 00:03:49.922118 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 10 00:03:49.922188 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 10 00:03:49.924333 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 10 00:03:49.924360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:03:49.924370 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 10 00:03:49.924471 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 10 00:03:49.924483 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 10 00:03:49.924491 kernel: thunder_xcv, ver 1.0 May 10 00:03:49.924499 kernel: thunder_bgx, ver 1.0 May 10 00:03:49.924506 kernel: nicpf, ver 1.0 May 10 00:03:49.924515 kernel: nicvf, ver 1.0 May 10 00:03:49.924596 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 10 00:03:49.924660 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-10T00:03:49 UTC (1746835429) May 10 00:03:49.924673 kernel: hid: raw HID events driver (C) Jiri Kosina May 10 00:03:49.924682 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 10 00:03:49.924689 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 10 00:03:49.924697 kernel: watchdog: Hard watchdog permanently disabled May 10 00:03:49.924705 kernel: NET: Registered PF_INET6 protocol family May 10 00:03:49.924716 kernel: Segment 
Routing with IPv6 May 10 00:03:49.924725 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:03:49.924734 kernel: NET: Registered PF_PACKET protocol family May 10 00:03:49.924744 kernel: Key type dns_resolver registered May 10 00:03:49.924754 kernel: registered taskstats version 1 May 10 00:03:49.924762 kernel: Loading compiled-in X.509 certificates May 10 00:03:49.924770 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff' May 10 00:03:49.924777 kernel: Key type .fscrypt registered May 10 00:03:49.924785 kernel: Key type fscrypt-provisioning registered May 10 00:03:49.924793 kernel: ima: No TPM chip found, activating TPM-bypass! May 10 00:03:49.924843 kernel: ima: Allocated hash algorithm: sha1 May 10 00:03:49.924852 kernel: ima: No architecture policies found May 10 00:03:49.924860 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 10 00:03:49.924871 kernel: clk: Disabling unused clocks May 10 00:03:49.924879 kernel: Freeing unused kernel memory: 39424K May 10 00:03:49.924887 kernel: Run /init as init process May 10 00:03:49.924894 kernel: with arguments: May 10 00:03:49.924902 kernel: /init May 10 00:03:49.924910 kernel: with environment: May 10 00:03:49.924917 kernel: HOME=/ May 10 00:03:49.924925 kernel: TERM=linux May 10 00:03:49.924932 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:03:49.924944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 10 00:03:49.924954 systemd[1]: Detected virtualization kvm. May 10 00:03:49.924962 systemd[1]: Detected architecture arm64. May 10 00:03:49.924970 systemd[1]: Running in initrd. 
May 10 00:03:49.924978 systemd[1]: No hostname configured, using default hostname.
May 10 00:03:49.924986 systemd[1]: Hostname set to .
May 10 00:03:49.924994 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:03:49.925004 systemd[1]: Queued start job for default target initrd.target.
May 10 00:03:49.925012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 00:03:49.925020 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 00:03:49.925029 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 10 00:03:49.925039 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 00:03:49.925048 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 10 00:03:49.925056 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 10 00:03:49.925069 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 10 00:03:49.925077 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 10 00:03:49.925086 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 00:03:49.925094 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 10 00:03:49.925103 systemd[1]: Reached target paths.target - Path Units.
May 10 00:03:49.925111 systemd[1]: Reached target slices.target - Slice Units.
May 10 00:03:49.925119 systemd[1]: Reached target swap.target - Swaps.
May 10 00:03:49.925127 systemd[1]: Reached target timers.target - Timer Units.
May 10 00:03:49.925137 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 10 00:03:49.925145 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 10 00:03:49.925154 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 10 00:03:49.925162 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 10 00:03:49.925170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 10 00:03:49.925178 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 10 00:03:49.925186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 00:03:49.925194 systemd[1]: Reached target sockets.target - Socket Units.
May 10 00:03:49.925203 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 10 00:03:49.925213 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 10 00:03:49.925221 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 10 00:03:49.925230 systemd[1]: Starting systemd-fsck-usr.service...
May 10 00:03:49.925238 systemd[1]: Starting systemd-journald.service - Journal Service...
May 10 00:03:49.925246 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 10 00:03:49.925934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:03:49.925948 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 10 00:03:49.925957 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 00:03:49.925971 systemd[1]: Finished systemd-fsck-usr.service.
May 10 00:03:49.925981 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 10 00:03:49.926017 systemd-journald[236]: Collecting audit messages is disabled.
May 10 00:03:49.926039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:03:49.926048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:03:49.926057 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 10 00:03:49.926066 kernel: Bridge firewalling registered
May 10 00:03:49.926074 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 10 00:03:49.926084 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 10 00:03:49.926093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 00:03:49.926102 systemd-journald[236]: Journal started
May 10 00:03:49.926121 systemd-journald[236]: Runtime Journal (/run/log/journal/4af634b687294327bf1ebb5ab2278ce7) is 8.0M, max 76.6M, 68.6M free.
May 10 00:03:49.927773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 10 00:03:49.885319 systemd-modules-load[237]: Inserted module 'overlay'
May 10 00:03:49.908181 systemd-modules-load[237]: Inserted module 'br_netfilter'
May 10 00:03:49.932902 systemd[1]: Started systemd-journald.service - Journal Service.
May 10 00:03:49.935119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 10 00:03:49.945375 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 00:03:49.949736 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:03:49.951054 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 00:03:49.957472 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 10 00:03:49.964910 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 00:03:49.975220 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 10 00:03:49.978674 dracut-cmdline[272]: dracut-dracut-053
May 10 00:03:49.981424 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 10 00:03:50.008510 systemd-resolved[276]: Positive Trust Anchors:
May 10 00:03:50.008527 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:03:50.008558 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 10 00:03:50.014187 systemd-resolved[276]: Defaulting to hostname 'linux'.
May 10 00:03:50.016015 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 10 00:03:50.016701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 10 00:03:50.079315 kernel: SCSI subsystem initialized
May 10 00:03:50.083283 kernel: Loading iSCSI transport class v2.0-870.
May 10 00:03:50.091755 kernel: iscsi: registered transport (tcp)
May 10 00:03:50.105308 kernel: iscsi: registered transport (qla4xxx)
May 10 00:03:50.105413 kernel: QLogic iSCSI HBA Driver
May 10 00:03:50.159952 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 10 00:03:50.167499 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 10 00:03:50.188751 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 10 00:03:50.188824 kernel: device-mapper: uevent: version 1.0.3
May 10 00:03:50.188837 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 10 00:03:50.238308 kernel: raid6: neonx8 gen() 15701 MB/s
May 10 00:03:50.255328 kernel: raid6: neonx4 gen() 15528 MB/s
May 10 00:03:50.272352 kernel: raid6: neonx2 gen() 13160 MB/s
May 10 00:03:50.289317 kernel: raid6: neonx1 gen() 10400 MB/s
May 10 00:03:50.306317 kernel: raid6: int64x8 gen() 6909 MB/s
May 10 00:03:50.323325 kernel: raid6: int64x4 gen() 7240 MB/s
May 10 00:03:50.340312 kernel: raid6: int64x2 gen() 6101 MB/s
May 10 00:03:50.357330 kernel: raid6: int64x1 gen() 5043 MB/s
May 10 00:03:50.357415 kernel: raid6: using algorithm neonx8 gen() 15701 MB/s
May 10 00:03:50.374316 kernel: raid6: .... xor() 11880 MB/s, rmw enabled
May 10 00:03:50.374390 kernel: raid6: using neon recovery algorithm
May 10 00:03:50.379427 kernel: xor: measuring software checksum speed
May 10 00:03:50.379500 kernel: 8regs : 19358 MB/sec
May 10 00:03:50.379519 kernel: 32regs : 19660 MB/sec
May 10 00:03:50.379535 kernel: arm64_neon : 26981 MB/sec
May 10 00:03:50.380287 kernel: xor: using function: arm64_neon (26981 MB/sec)
May 10 00:03:50.430307 kernel: Btrfs loaded, zoned=no, fsverity=no
May 10 00:03:50.445680 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 10 00:03:50.451515 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 00:03:50.465164 systemd-udevd[456]: Using default interface naming scheme 'v255'.
May 10 00:03:50.468579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 00:03:50.477210 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 10 00:03:50.492460 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
May 10 00:03:50.529469 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 10 00:03:50.535623 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 10 00:03:50.585528 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 00:03:50.594857 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 10 00:03:50.614684 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 10 00:03:50.616459 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 10 00:03:50.617111 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 00:03:50.619510 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 10 00:03:50.626469 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 10 00:03:50.642866 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 10 00:03:50.673455 kernel: scsi host0: Virtio SCSI HBA
May 10 00:03:50.684272 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
May 10 00:03:50.684571 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 10 00:03:50.701279 kernel: ACPI: bus type USB registered
May 10 00:03:50.701345 kernel: usbcore: registered new interface driver usbfs
May 10 00:03:50.706585 kernel: usbcore: registered new interface driver hub
May 10 00:03:50.706646 kernel: usbcore: registered new device driver usb
May 10 00:03:50.727445 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 10 00:03:50.727689 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 10 00:03:50.728505 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 10 00:03:50.730243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 00:03:50.732269 kernel: sr 0:0:0:0: Power-on or device reset occurred
May 10 00:03:50.731400 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:03:50.733309 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:03:50.734788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:03:50.734894 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:03:50.736903 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:03:50.740477 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
May 10 00:03:50.740647 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 10 00:03:50.740752 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 10 00:03:50.740870 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 10 00:03:50.740953 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 10 00:03:50.742427 kernel: hub 1-0:1.0: USB hub found
May 10 00:03:50.744455 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
May 10 00:03:50.744575 kernel: hub 1-0:1.0: 4 ports detected
May 10 00:03:50.742589 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:03:50.746886 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 10 00:03:50.747934 kernel: hub 2-0:1.0: USB hub found
May 10 00:03:50.748070 kernel: hub 2-0:1.0: 4 ports detected
May 10 00:03:50.750538 kernel: sd 0:0:0:1: Power-on or device reset occurred
May 10 00:03:50.752289 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 10 00:03:50.752474 kernel: sd 0:0:0:1: [sda] Write Protect is off
May 10 00:03:50.752560 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
May 10 00:03:50.753867 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 10 00:03:50.758468 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 10 00:03:50.758518 kernel: GPT:17805311 != 80003071
May 10 00:03:50.758529 kernel: GPT:Alternate GPT header not at the end of the disk.
May 10 00:03:50.758538 kernel: GPT:17805311 != 80003071
May 10 00:03:50.758555 kernel: GPT: Use GNU Parted to correct GPT errors.
May 10 00:03:50.759448 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:03:50.761302 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
May 10 00:03:50.764240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:03:50.770558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:03:50.790401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:03:50.816279 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (525)
May 10 00:03:50.818960 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (513)
May 10 00:03:50.821890 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 10 00:03:50.835536 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 10 00:03:50.840864 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 10 00:03:50.848518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 10 00:03:50.849170 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 10 00:03:50.853511 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 10 00:03:50.867241 disk-uuid[574]: Primary Header is updated.
May 10 00:03:50.867241 disk-uuid[574]: Secondary Entries is updated.
May 10 00:03:50.867241 disk-uuid[574]: Secondary Header is updated.
May 10 00:03:50.874290 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:03:50.878267 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:03:50.983283 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
May 10 00:03:51.117284 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
May 10 00:03:51.119436 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
May 10 00:03:51.119727 kernel: usbcore: registered new interface driver usbhid
May 10 00:03:51.119750 kernel: usbhid: USB HID core driver
May 10 00:03:51.226345 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
May 10 00:03:51.357321 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
May 10 00:03:51.411334 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
May 10 00:03:51.885134 disk-uuid[575]: The operation has completed successfully.
May 10 00:03:51.885839 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:03:51.942535 systemd[1]: disk-uuid.service: Deactivated successfully.
May 10 00:03:51.942655 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 10 00:03:51.962607 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 10 00:03:51.967759 sh[590]: Success
May 10 00:03:51.978275 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 10 00:03:52.036368 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 10 00:03:52.047449 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 10 00:03:52.049664 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 10 00:03:52.075535 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354
May 10 00:03:52.075617 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 10 00:03:52.075637 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 10 00:03:52.075655 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 10 00:03:52.076331 kernel: BTRFS info (device dm-0): using free space tree
May 10 00:03:52.084281 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 10 00:03:52.085488 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 10 00:03:52.086635 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 10 00:03:52.091589 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 10 00:03:52.094455 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 10 00:03:52.106588 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 10 00:03:52.106639 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 10 00:03:52.106650 kernel: BTRFS info (device sda6): using free space tree
May 10 00:03:52.110266 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 10 00:03:52.110316 kernel: BTRFS info (device sda6): auto enabling async discard
May 10 00:03:52.122825 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 10 00:03:52.122152 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 10 00:03:52.131433 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 10 00:03:52.138497 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 10 00:03:52.236743 ignition[673]: Ignition 2.19.0
May 10 00:03:52.236754 ignition[673]: Stage: fetch-offline
May 10 00:03:52.236804 ignition[673]: no configs at "/usr/lib/ignition/base.d"
May 10 00:03:52.236814 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:52.236972 ignition[673]: parsed url from cmdline: ""
May 10 00:03:52.236976 ignition[673]: no config URL provided
May 10 00:03:52.236980 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:03:52.236987 ignition[673]: no config at "/usr/lib/ignition/user.ign"
May 10 00:03:52.240938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 10 00:03:52.236992 ignition[673]: failed to fetch config: resource requires networking
May 10 00:03:52.237199 ignition[673]: Ignition finished successfully
May 10 00:03:52.243277 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 10 00:03:52.249543 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 10 00:03:52.283950 systemd-networkd[778]: lo: Link UP
May 10 00:03:52.283960 systemd-networkd[778]: lo: Gained carrier
May 10 00:03:52.286684 systemd-networkd[778]: Enumeration completed
May 10 00:03:52.287805 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:03:52.287809 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:03:52.287862 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 10 00:03:52.288555 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:03:52.288558 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:03:52.289018 systemd-networkd[778]: eth0: Link UP
May 10 00:03:52.289021 systemd-networkd[778]: eth0: Gained carrier
May 10 00:03:52.289028 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:03:52.291201 systemd[1]: Reached target network.target - Network.
May 10 00:03:52.295143 systemd-networkd[778]: eth1: Link UP
May 10 00:03:52.295147 systemd-networkd[778]: eth1: Gained carrier
May 10 00:03:52.295156 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 00:03:52.299438 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 10 00:03:52.312632 ignition[780]: Ignition 2.19.0
May 10 00:03:52.312642 ignition[780]: Stage: fetch
May 10 00:03:52.312860 ignition[780]: no configs at "/usr/lib/ignition/base.d"
May 10 00:03:52.312870 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:52.312954 ignition[780]: parsed url from cmdline: ""
May 10 00:03:52.312957 ignition[780]: no config URL provided
May 10 00:03:52.312961 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:03:52.312968 ignition[780]: no config at "/usr/lib/ignition/user.ign"
May 10 00:03:52.312986 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 10 00:03:52.313640 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 10 00:03:52.331350 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 10 00:03:52.352388 systemd-networkd[778]: eth0: DHCPv4 address 88.99.34.22/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 10 00:03:52.513914 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 10 00:03:52.521464 ignition[780]: GET result: OK
May 10 00:03:52.521601 ignition[780]: parsing config with SHA512: aa43e116ae609bb390b8db971cd795e6842a20108d19758c6a706cdfc7344a2583773cab823bc37feb011dd3139f503b030810ed6a26c7c982e55638c6510261
May 10 00:03:52.526709 unknown[780]: fetched base config from "system"
May 10 00:03:52.526725 unknown[780]: fetched base config from "system"
May 10 00:03:52.527181 ignition[780]: fetch: fetch complete
May 10 00:03:52.526730 unknown[780]: fetched user config from "hetzner"
May 10 00:03:52.527187 ignition[780]: fetch: fetch passed
May 10 00:03:52.529571 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 10 00:03:52.527235 ignition[780]: Ignition finished successfully
May 10 00:03:52.536486 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 10 00:03:52.549203 ignition[787]: Ignition 2.19.0
May 10 00:03:52.549216 ignition[787]: Stage: kargs
May 10 00:03:52.549427 ignition[787]: no configs at "/usr/lib/ignition/base.d"
May 10 00:03:52.549437 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:52.550428 ignition[787]: kargs: kargs passed
May 10 00:03:52.552243 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 10 00:03:52.550483 ignition[787]: Ignition finished successfully
May 10 00:03:52.557659 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 10 00:03:52.573656 ignition[793]: Ignition 2.19.0
May 10 00:03:52.573673 ignition[793]: Stage: disks
May 10 00:03:52.573992 ignition[793]: no configs at "/usr/lib/ignition/base.d"
May 10 00:03:52.574012 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 10 00:03:52.578081 ignition[793]: disks: disks passed
May 10 00:03:52.578532 ignition[793]: Ignition finished successfully
May 10 00:03:52.580736 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 10 00:03:52.582378 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 10 00:03:52.583398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 10 00:03:52.584529 systemd[1]: Reached target local-fs.target - Local File Systems.
May 10 00:03:52.585935 systemd[1]: Reached target sysinit.target - System Initialization.
May 10 00:03:52.587231 systemd[1]: Reached target basic.target - Basic System.
May 10 00:03:52.593488 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 10 00:03:52.608406 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 10 00:03:52.612170 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 10 00:03:52.618403 systemd[1]: Mounting sysroot.mount - /sysroot...
May 10 00:03:52.670289 kernel: EXT4-fs (sda9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 10 00:03:52.672117 systemd[1]: Mounted sysroot.mount - /sysroot.
May 10 00:03:52.674866 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 10 00:03:52.687474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 10 00:03:52.691753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 10 00:03:52.694404 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 10 00:03:52.695178 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 10 00:03:52.695211 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 10 00:03:52.705740 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (810) May 10 00:03:52.705825 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:03:52.706393 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:03:52.707273 kernel: BTRFS info (device sda6): using free space tree May 10 00:03:52.710479 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:03:52.710522 kernel: BTRFS info (device sda6): auto enabling async discard May 10 00:03:52.714056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 10 00:03:52.718185 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 10 00:03:52.728730 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 10 00:03:52.767159 coreos-metadata[812]: May 10 00:03:52.766 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 10 00:03:52.768765 coreos-metadata[812]: May 10 00:03:52.768 INFO Fetch successful May 10 00:03:52.770191 coreos-metadata[812]: May 10 00:03:52.770 INFO wrote hostname ci-4081-3-3-n-60bc3761e6 to /sysroot/etc/hostname May 10 00:03:52.773116 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:03:52.774570 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 10 00:03:52.780633 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory May 10 00:03:52.785630 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:03:52.790161 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:03:52.898687 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 10 00:03:52.911762 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 10 00:03:52.917227 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 10 00:03:52.923296 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:03:52.949244 ignition[928]: INFO : Ignition 2.19.0 May 10 00:03:52.949244 ignition[928]: INFO : Stage: mount May 10 00:03:52.950200 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:03:52.950200 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:52.952284 ignition[928]: INFO : mount: mount passed May 10 00:03:52.952284 ignition[928]: INFO : Ignition finished successfully May 10 00:03:52.954172 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 10 00:03:52.961438 systemd[1]: Starting ignition-files.service - Ignition (files)... May 10 00:03:52.964076 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 10 00:03:53.075512 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 10 00:03:53.084605 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 00:03:53.095283 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939) May 10 00:03:53.097073 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:03:53.097112 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:03:53.097134 kernel: BTRFS info (device sda6): using free space tree May 10 00:03:53.101524 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:03:53.101582 kernel: BTRFS info (device sda6): auto enabling async discard May 10 00:03:53.105091 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 00:03:53.125772 ignition[956]: INFO : Ignition 2.19.0 May 10 00:03:53.125772 ignition[956]: INFO : Stage: files May 10 00:03:53.126794 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:03:53.126794 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:53.128042 ignition[956]: DEBUG : files: compiled without relabeling support, skipping May 10 00:03:53.128658 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:03:53.128658 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:03:53.132088 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:03:53.133091 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:03:53.133091 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:03:53.132570 unknown[956]: wrote ssh authorized keys file for user: core May 10 00:03:53.136118 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 10 00:03:53.136118 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 10 00:03:53.136118 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 10 00:03:53.136118 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 10 00:03:53.237857 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 00:03:53.457212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 10 
00:03:53.457212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 10 00:03:53.459677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:03:53.459677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 00:03:53.459677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:03:53.459677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:03:53.459677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:03:53.459677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:03:53.459677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:03:53.466968 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:03:53.466968 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:03:53.466968 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:03:53.466968 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:03:53.466968 
ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:03:53.466968 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 10 00:03:54.108352 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 10 00:03:54.117409 systemd-networkd[778]: eth0: Gained IPv6LL May 10 00:03:54.308564 systemd-networkd[778]: eth1: Gained IPv6LL May 10 00:03:56.218950 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:03:56.218950 ignition[956]: INFO : files: op(c): [started] processing unit "containerd.service" May 10 00:03:56.221471 ignition[956]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 10 00:03:56.221471 ignition[956]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 10 00:03:56.221471 ignition[956]: INFO : files: op(c): [finished] processing unit "containerd.service" May 10 00:03:56.221471 ignition[956]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 10 00:03:56.221471 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:03:56.221471 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:03:56.221471 ignition[956]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 10 00:03:56.221471 ignition[956]: 
INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 10 00:03:56.221471 ignition[956]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 10 00:03:56.221471 ignition[956]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 10 00:03:56.221471 ignition[956]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 10 00:03:56.221471 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 10 00:03:56.221471 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 10 00:03:56.235673 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:03:56.235673 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:03:56.235673 ignition[956]: INFO : files: files passed May 10 00:03:56.235673 ignition[956]: INFO : Ignition finished successfully May 10 00:03:56.224799 systemd[1]: Finished ignition-files.service - Ignition (files). May 10 00:03:56.231476 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 10 00:03:56.236454 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 10 00:03:56.242050 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:03:56.242145 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 10 00:03:56.253624 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:03:56.255860 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:03:56.257131 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 10 00:03:56.258717 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 00:03:56.259956 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 10 00:03:56.265464 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 10 00:03:56.292741 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:03:56.293485 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 10 00:03:56.295365 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 10 00:03:56.295936 systemd[1]: Reached target initrd.target - Initrd Default Target. May 10 00:03:56.296913 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 10 00:03:56.307773 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 10 00:03:56.329321 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 00:03:56.335473 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 10 00:03:56.363706 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 10 00:03:56.364377 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:03:56.365938 systemd[1]: Stopped target timers.target - Timer Units. May 10 00:03:56.366773 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 10 00:03:56.366896 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 00:03:56.368097 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 10 00:03:56.368701 systemd[1]: Stopped target basic.target - Basic System. May 10 00:03:56.369765 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 10 00:03:56.370811 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 10 00:03:56.371730 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 10 00:03:56.372763 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 10 00:03:56.373813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 10 00:03:56.374911 systemd[1]: Stopped target sysinit.target - System Initialization. May 10 00:03:56.375817 systemd[1]: Stopped target local-fs.target - Local File Systems. May 10 00:03:56.376824 systemd[1]: Stopped target swap.target - Swaps. May 10 00:03:56.377642 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:03:56.377776 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 10 00:03:56.378931 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 10 00:03:56.379519 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:03:56.380488 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 10 00:03:56.382665 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 00:03:56.383399 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:03:56.383522 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 10 00:03:56.384993 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
May 10 00:03:56.385119 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 00:03:56.386220 systemd[1]: ignition-files.service: Deactivated successfully. May 10 00:03:56.386335 systemd[1]: Stopped ignition-files.service - Ignition (files). May 10 00:03:56.387211 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 10 00:03:56.387327 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 10 00:03:56.399642 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 10 00:03:56.400867 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:03:56.401126 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:03:56.404613 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 10 00:03:56.407349 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:03:56.407511 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:03:56.410399 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:03:56.410725 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 10 00:03:56.421640 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:03:56.425327 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 10 00:03:56.427809 ignition[1009]: INFO : Ignition 2.19.0 May 10 00:03:56.428740 ignition[1009]: INFO : Stage: umount May 10 00:03:56.429531 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:03:56.431128 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:56.431128 ignition[1009]: INFO : umount: umount passed May 10 00:03:56.431128 ignition[1009]: INFO : Ignition finished successfully May 10 00:03:56.434993 systemd[1]: ignition-mount.service: Deactivated successfully. 
May 10 00:03:56.435122 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 10 00:03:56.440033 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 00:03:56.440549 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:03:56.440645 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 10 00:03:56.441987 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 00:03:56.442079 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 10 00:03:56.442673 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:03:56.442710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 10 00:03:56.443482 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 00:03:56.443514 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 10 00:03:56.444279 systemd[1]: Stopped target network.target - Network. May 10 00:03:56.445024 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:03:56.445076 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 10 00:03:56.445929 systemd[1]: Stopped target paths.target - Path Units. May 10 00:03:56.446814 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:03:56.450350 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:03:56.451524 systemd[1]: Stopped target slices.target - Slice Units. May 10 00:03:56.452886 systemd[1]: Stopped target sockets.target - Socket Units. May 10 00:03:56.453901 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:03:56.453953 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 10 00:03:56.454847 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:03:56.454886 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 00:03:56.455797 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 10 00:03:56.455878 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 10 00:03:56.456601 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 10 00:03:56.456638 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 10 00:03:56.457363 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:03:56.457398 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 10 00:03:56.458343 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 10 00:03:56.459408 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 10 00:03:56.462678 systemd-networkd[778]: eth0: DHCPv6 lease lost May 10 00:03:56.465313 systemd-networkd[778]: eth1: DHCPv6 lease lost May 10 00:03:56.466375 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:03:56.466497 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 10 00:03:56.468005 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:03:56.468132 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 10 00:03:56.470934 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:03:56.470983 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 10 00:03:56.477426 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 10 00:03:56.477874 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:03:56.477933 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 00:03:56.482106 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:03:56.482172 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 00:03:56.483069 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:03:56.483122 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 10 00:03:56.484726 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 10 00:03:56.484790 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:03:56.485874 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:03:56.501159 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:03:56.501412 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 10 00:03:56.505898 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 00:03:56.506056 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:03:56.507376 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 00:03:56.507415 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 10 00:03:56.508248 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:03:56.508312 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 10 00:03:56.509197 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:03:56.509239 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 10 00:03:56.511478 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:03:56.511527 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 10 00:03:56.513761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:03:56.513804 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:03:56.519499 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 10 00:03:56.520040 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 10 00:03:56.520093 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 10 00:03:56.522283 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:03:56.522329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:03:56.529199 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:03:56.529334 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 10 00:03:56.530960 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 10 00:03:56.536411 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 10 00:03:56.548092 systemd[1]: Switching root. May 10 00:03:56.577919 systemd-journald[236]: Journal stopped May 10 00:03:57.520131 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). May 10 00:03:57.520216 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:03:57.520229 kernel: SELinux: policy capability open_perms=1 May 10 00:03:57.520239 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:03:57.520248 kernel: SELinux: policy capability always_check_network=0 May 10 00:03:57.522851 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:03:57.522870 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:03:57.522880 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:03:57.522890 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:03:57.522905 kernel: audit: type=1403 audit(1746835436.787:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:03:57.522921 systemd[1]: Successfully loaded SELinux policy in 36.184ms. May 10 00:03:57.522938 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.991ms. 
May 10 00:03:57.522951 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 10 00:03:57.522961 systemd[1]: Detected virtualization kvm. May 10 00:03:57.522972 systemd[1]: Detected architecture arm64. May 10 00:03:57.522982 systemd[1]: Detected first boot. May 10 00:03:57.522992 systemd[1]: Hostname set to . May 10 00:03:57.523006 systemd[1]: Initializing machine ID from VM UUID. May 10 00:03:57.523018 zram_generator::config[1072]: No configuration found. May 10 00:03:57.523029 systemd[1]: Populated /etc with preset unit settings. May 10 00:03:57.523039 systemd[1]: Queued start job for default target multi-user.target. May 10 00:03:57.523054 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 10 00:03:57.523065 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 10 00:03:57.523076 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 10 00:03:57.523086 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 10 00:03:57.523096 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 10 00:03:57.523109 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 10 00:03:57.523120 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 10 00:03:57.523130 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 10 00:03:57.523142 systemd[1]: Created slice user.slice - User and Session Slice. May 10 00:03:57.523152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 10 00:03:57.523163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:03:57.523174 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 10 00:03:57.523184 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 10 00:03:57.523195 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 10 00:03:57.523207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 00:03:57.523217 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 10 00:03:57.523227 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:03:57.523238 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 10 00:03:57.523248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:03:57.523275 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 00:03:57.523290 systemd[1]: Reached target slices.target - Slice Units. May 10 00:03:57.523301 systemd[1]: Reached target swap.target - Swaps. May 10 00:03:57.523311 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 10 00:03:57.523321 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 10 00:03:57.523331 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 10 00:03:57.523342 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 10 00:03:57.523359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 00:03:57.523370 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 00:03:57.523381 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 10 00:03:57.523391 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 10 00:03:57.523404 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 10 00:03:57.523414 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 10 00:03:57.523424 systemd[1]: Mounting media.mount - External Media Directory... May 10 00:03:57.523435 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 10 00:03:57.523446 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 10 00:03:57.523459 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 10 00:03:57.523471 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 10 00:03:57.523482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:57.523493 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 00:03:57.523504 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 10 00:03:57.523514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:03:57.523529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 00:03:57.523540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:03:57.523551 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 10 00:03:57.523563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:03:57.523574 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:03:57.523585 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
May 10 00:03:57.523597 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 10 00:03:57.523607 kernel: fuse: init (API version 7.39) May 10 00:03:57.523617 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 00:03:57.523628 kernel: loop: module loaded May 10 00:03:57.523638 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 00:03:57.523648 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 10 00:03:57.523660 kernel: ACPI: bus type drm_connector registered May 10 00:03:57.523670 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 10 00:03:57.523682 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 00:03:57.523692 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 10 00:03:57.523703 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 10 00:03:57.523713 systemd[1]: Mounted media.mount - External Media Directory. May 10 00:03:57.523723 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 10 00:03:57.523734 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 10 00:03:57.523757 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 10 00:03:57.523770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:03:57.523781 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:03:57.523791 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 10 00:03:57.523802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:03:57.523814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:03:57.523825 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 10 00:03:57.523835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:03:57.523846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:03:57.523856 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:03:57.523900 systemd-journald[1150]: Collecting audit messages is disabled. May 10 00:03:57.523926 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:03:57.523937 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 10 00:03:57.523950 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:03:57.523960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:03:57.523971 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 00:03:57.523981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 10 00:03:57.523992 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 10 00:03:57.524003 systemd[1]: Reached target network-pre.target - Preparation for Network. May 10 00:03:57.524016 systemd-journald[1150]: Journal started May 10 00:03:57.524038 systemd-journald[1150]: Runtime Journal (/run/log/journal/4af634b687294327bf1ebb5ab2278ce7) is 8.0M, max 76.6M, 68.6M free. May 10 00:03:57.533112 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 10 00:03:57.540911 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 10 00:03:57.540975 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:03:57.557277 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
May 10 00:03:57.557351 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:03:57.562833 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 10 00:03:57.570910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:03:57.576351 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 00:03:57.599276 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 00:03:57.599354 systemd[1]: Started systemd-journald.service - Journal Service. May 10 00:03:57.598882 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 10 00:03:57.602564 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 10 00:03:57.603374 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 10 00:03:57.607348 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 10 00:03:57.624346 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 10 00:03:57.630583 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 10 00:03:57.639026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 00:03:57.647417 systemd-journald[1150]: Time spent on flushing to /var/log/journal/4af634b687294327bf1ebb5ab2278ce7 is 32.959ms for 1118 entries. May 10 00:03:57.647417 systemd-journald[1150]: System Journal (/var/log/journal/4af634b687294327bf1ebb5ab2278ce7) is 8.0M, max 584.8M, 576.8M free. May 10 00:03:57.704817 systemd-journald[1150]: Received client request to flush runtime journal. May 10 00:03:57.658992 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. 
May 10 00:03:57.659002 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. May 10 00:03:57.661823 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:03:57.671696 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 10 00:03:57.673885 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 00:03:57.684503 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 10 00:03:57.701047 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 10 00:03:57.710035 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 10 00:03:57.748721 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 10 00:03:57.757525 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 00:03:57.771381 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. May 10 00:03:57.771400 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. May 10 00:03:57.777855 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 00:03:58.147131 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 10 00:03:58.153598 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:03:58.184129 systemd-udevd[1234]: Using default interface naming scheme 'v255'. May 10 00:03:58.207857 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:03:58.220057 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 00:03:58.244678 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 10 00:03:58.271894 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. May 10 00:03:58.300628 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 10 00:03:58.375833 systemd-networkd[1244]: lo: Link UP May 10 00:03:58.375840 systemd-networkd[1244]: lo: Gained carrier May 10 00:03:58.377893 systemd-networkd[1244]: Enumeration completed May 10 00:03:58.378076 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:03:58.380157 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:58.380364 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:03:58.382275 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:58.382282 systemd-networkd[1244]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:03:58.383098 systemd-networkd[1244]: eth0: Link UP May 10 00:03:58.383105 systemd-networkd[1244]: eth0: Gained carrier May 10 00:03:58.383120 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:58.384490 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 10 00:03:58.387557 systemd-networkd[1244]: eth1: Link UP May 10 00:03:58.387565 systemd-networkd[1244]: eth1: Gained carrier May 10 00:03:58.387582 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:58.397900 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 10 00:03:58.415366 systemd-networkd[1244]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:03:58.430629 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1248) May 10 00:03:58.433329 systemd-networkd[1244]: eth0: DHCPv4 address 88.99.34.22/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 10 00:03:58.436387 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:03:58.459487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 10 00:03:58.492895 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 10 00:03:58.493066 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. May 10 00:03:58.493216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:58.502553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:03:58.508399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:03:58.512708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:03:58.517467 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:03:58.517524 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:03:58.517877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:03:58.518039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:03:58.533949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 10 00:03:58.534141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:03:58.537784 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 10 00:03:58.537868 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 10 00:03:58.537886 kernel: [drm] features: -context_init May 10 00:03:58.540095 kernel: [drm] number of scanouts: 1 May 10 00:03:58.540152 kernel: [drm] number of cap sets: 0 May 10 00:03:58.539941 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:03:58.543027 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:03:58.545505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:03:58.546703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:03:58.549274 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 10 00:03:58.554298 kernel: Console: switching to colour frame buffer device 160x50 May 10 00:03:58.561290 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 10 00:03:58.568664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:03:58.582569 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:03:58.582927 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:03:58.591671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:03:58.652893 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:03:58.709792 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 10 00:03:58.716432 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
May 10 00:03:58.742748 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:03:58.770173 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 10 00:03:58.772487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 00:03:58.785592 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 10 00:03:58.791814 lvm[1309]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:03:58.818072 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 10 00:03:58.820518 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 10 00:03:58.821834 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:03:58.821878 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 00:03:58.822442 systemd[1]: Reached target machines.target - Containers. May 10 00:03:58.824237 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 10 00:03:58.829461 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 10 00:03:58.833414 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 10 00:03:58.834580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:03:58.836433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 10 00:03:58.842961 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 10 00:03:58.847410 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
May 10 00:03:58.848812 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 10 00:03:58.866271 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 10 00:03:58.875364 kernel: loop0: detected capacity change from 0 to 8 May 10 00:03:58.885276 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:03:58.897381 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:03:58.898139 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 10 00:03:58.904363 kernel: loop1: detected capacity change from 0 to 114432 May 10 00:03:58.941355 kernel: loop2: detected capacity change from 0 to 114328 May 10 00:03:58.967284 kernel: loop3: detected capacity change from 0 to 194096 May 10 00:03:59.013452 kernel: loop4: detected capacity change from 0 to 8 May 10 00:03:59.016674 kernel: loop5: detected capacity change from 0 to 114432 May 10 00:03:59.031297 kernel: loop6: detected capacity change from 0 to 114328 May 10 00:03:59.047307 kernel: loop7: detected capacity change from 0 to 194096 May 10 00:03:59.065752 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 10 00:03:59.066462 (sd-merge)[1330]: Merged extensions into '/usr'. May 10 00:03:59.072547 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)... May 10 00:03:59.072561 systemd[1]: Reloading... May 10 00:03:59.149287 zram_generator::config[1358]: No configuration found. May 10 00:03:59.261828 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:03:59.285885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 10 00:03:59.347061 systemd[1]: Reloading finished in 274 ms. May 10 00:03:59.367059 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 10 00:03:59.369705 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 10 00:03:59.376550 systemd[1]: Starting ensure-sysext.service... May 10 00:03:59.380553 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 00:03:59.397453 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... May 10 00:03:59.397937 systemd[1]: Reloading... May 10 00:03:59.425020 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:03:59.425372 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 10 00:03:59.426175 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:03:59.427548 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. May 10 00:03:59.427611 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. May 10 00:03:59.430424 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. May 10 00:03:59.430434 systemd-tmpfiles[1403]: Skipping /boot May 10 00:03:59.441558 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. May 10 00:03:59.441575 systemd-tmpfiles[1403]: Skipping /boot May 10 00:03:59.477325 zram_generator::config[1432]: No configuration found. May 10 00:03:59.589609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:03:59.650096 systemd[1]: Reloading finished in 251 ms. 
May 10 00:03:59.666482 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:03:59.679477 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 10 00:03:59.685504 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 10 00:03:59.690823 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 10 00:03:59.697491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 00:03:59.705423 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 10 00:03:59.717582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:59.723454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:03:59.728522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:03:59.740535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:03:59.743558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:03:59.756462 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:59.756707 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:03:59.762626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:59.769103 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 00:03:59.771989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 10 00:03:59.772441 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 10 00:03:59.780916 systemd[1]: Finished ensure-sysext.service. May 10 00:03:59.790912 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 10 00:03:59.791962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:03:59.792119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:03:59.801642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:03:59.801862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:03:59.806043 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:03:59.806215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:03:59.809581 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:03:59.809801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:03:59.812906 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 10 00:03:59.816554 augenrules[1513]: No rules May 10 00:03:59.819230 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 10 00:03:59.830207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:03:59.830330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:03:59.835596 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 10 00:03:59.837980 systemd-resolved[1481]: Positive Trust Anchors: May 10 00:03:59.838011 systemd-resolved[1481]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d 
May 10 00:03:59.838044 systemd-resolved[1481]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 00:03:59.840209 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 10 00:03:59.842102 systemd-resolved[1481]: Using system hostname 'ci-4081-3-3-n-60bc3761e6'. May 10 00:03:59.842476 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:03:59.848514 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 00:03:59.851643 systemd[1]: Reached target network.target - Network. May 10 00:03:59.852745 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 00:03:59.859853 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 10 00:03:59.876488 systemd-networkd[1244]: eth1: Gained IPv6LL May 10 00:03:59.886364 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 10 00:03:59.887184 systemd[1]: Reached target network-online.target - Network is Online. May 10 00:03:59.902647 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 10 00:03:59.903391 systemd[1]: Reached target sysinit.target - System Initialization. 
May 10 00:03:59.903987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 10 00:03:59.904628 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 10 00:03:59.905222 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 10 00:03:59.906786 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:03:59.906828 systemd[1]: Reached target paths.target - Path Units. May 10 00:03:59.907510 systemd[1]: Reached target time-set.target - System Time Set. May 10 00:03:59.908238 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 10 00:03:59.908873 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 10 00:03:59.909497 systemd[1]: Reached target timers.target - Timer Units. May 10 00:03:59.911080 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 10 00:03:59.913225 systemd[1]: Starting docker.socket - Docker Socket for the API... May 10 00:03:59.914879 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 10 00:03:59.919042 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 10 00:03:59.920008 systemd[1]: Reached target sockets.target - Socket Units. May 10 00:03:59.921315 systemd[1]: Reached target basic.target - Basic System. May 10 00:03:59.922233 systemd[1]: System is tainted: cgroupsv1 May 10 00:03:59.922433 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 10 00:03:59.922520 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 10 00:03:59.923914 systemd[1]: Starting containerd.service - containerd container runtime... 
May 10 00:03:59.927453 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 10 00:03:59.929558 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 10 00:03:59.939903 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 10 00:03:59.943094 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 10 00:03:59.946576 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 10 00:03:59.952156 jq[1540]: false May 10 00:03:59.956396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:03:59.964863 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 10 00:03:59.972529 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 10 00:03:59.978794 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 10 00:03:59.992483 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 10 00:03:59.995043 coreos-metadata[1537]: May 10 00:03:59.994 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 10 00:04:00.001303 coreos-metadata[1537]: May 10 00:03:59.998 INFO Fetch successful May 10 00:04:00.001303 coreos-metadata[1537]: May 10 00:03:59.998 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 10 00:04:00.001303 coreos-metadata[1537]: May 10 00:03:59.998 INFO Fetch successful May 10 00:03:59.996964 dbus-daemon[1538]: [system] SELinux support is enabled May 10 00:04:00.001886 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 10 00:04:00.008800 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 10 00:04:00.021492 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 10 00:04:00.029124 extend-filesystems[1541]: Found loop4 May 10 00:04:00.029124 extend-filesystems[1541]: Found loop5 May 10 00:04:00.029124 extend-filesystems[1541]: Found loop6 May 10 00:04:00.029124 extend-filesystems[1541]: Found loop7 May 10 00:04:00.029124 extend-filesystems[1541]: Found sda May 10 00:04:00.029124 extend-filesystems[1541]: Found sda1 May 10 00:04:00.029124 extend-filesystems[1541]: Found sda2 May 10 00:04:00.029124 extend-filesystems[1541]: Found sda3 May 10 00:04:00.029124 extend-filesystems[1541]: Found usr May 10 00:04:00.029124 extend-filesystems[1541]: Found sda4 May 10 00:04:00.029124 extend-filesystems[1541]: Found sda6 May 10 00:04:00.029124 extend-filesystems[1541]: Found sda7 May 10 00:04:00.029124 extend-filesystems[1541]: Found sda9 May 10 00:04:00.029124 extend-filesystems[1541]: Checking size of /dev/sda9 May 10 00:04:00.026114 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:04:00.034470 systemd[1]: Starting update-engine.service - Update Engine... May 10 00:04:00.049373 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 10 00:04:00.057064 extend-filesystems[1541]: Resized partition /dev/sda9 May 10 00:04:00.058071 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 10 00:04:00.067785 jq[1570]: true May 10 00:04:00.069710 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:04:00.070070 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 10 00:04:00.078953 extend-filesystems[1576]: resize2fs 1.47.1 (20-May-2024) May 10 00:04:00.079707 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:04:00.082528 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 10 00:04:00.091167 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 10 00:04:00.092027 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 10 00:04:00.097849 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 10 00:04:00.104007 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 10 00:04:00.119822 systemd-timesyncd[1527]: Contacted time server 51.75.67.47:123 (0.flatcar.pool.ntp.org).
May 10 00:04:00.120178 systemd-timesyncd[1527]: Initial clock synchronization to Sat 2025-05-10 00:04:00.437542 UTC.
May 10 00:04:00.123647 jq[1587]: true
May 10 00:04:00.133200 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 10 00:04:00.151274 update_engine[1563]: I20250510 00:04:00.150609 1563 main.cc:92] Flatcar Update Engine starting
May 10 00:04:00.156481 tar[1585]: linux-arm64/helm
May 10 00:04:00.157450 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 10 00:04:00.157515 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 10 00:04:00.160417 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 10 00:04:00.160451 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 10 00:04:00.177297 systemd[1]: Started update-engine.service - Update Engine.
May 10 00:04:00.178916 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 10 00:04:00.181444 update_engine[1563]: I20250510 00:04:00.181279 1563 update_check_scheduler.cc:74] Next update check in 2m51s
May 10 00:04:00.182461 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 10 00:04:00.197242 systemd-networkd[1244]: eth0: Gained IPv6LL
May 10 00:04:00.206269 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1236)
May 10 00:04:00.214711 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 10 00:04:00.216581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 10 00:04:00.321281 kernel: EXT4-fs (sda9): resized filesystem to 9393147
May 10 00:04:00.336455 bash[1632]: Updated "/home/core/.ssh/authorized_keys"
May 10 00:04:00.336619 extend-filesystems[1576]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 10 00:04:00.336619 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 5
May 10 00:04:00.336619 extend-filesystems[1576]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
May 10 00:04:00.339238 systemd-logind[1559]: New seat seat0.
May 10 00:04:00.343273 extend-filesystems[1541]: Resized filesystem in /dev/sda9
May 10 00:04:00.343273 extend-filesystems[1541]: Found sr0
May 10 00:04:00.340460 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (Power Button)
May 10 00:04:00.340474 systemd-logind[1559]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
May 10 00:04:00.341193 systemd[1]: Started systemd-logind.service - User Login Management.
May 10 00:04:00.344786 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 10 00:04:00.345046 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 10 00:04:00.349854 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 10 00:04:00.372900 systemd[1]: Starting sshkeys.service...
May 10 00:04:00.391390 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 10 00:04:00.402766 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 10 00:04:00.441782 coreos-metadata[1645]: May 10 00:04:00.441 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
May 10 00:04:00.447515 coreos-metadata[1645]: May 10 00:04:00.444 INFO Fetch successful
May 10 00:04:00.448654 unknown[1645]: wrote ssh authorized keys file for user: core
May 10 00:04:00.496612 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys"
May 10 00:04:00.505513 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 10 00:04:00.510912 systemd[1]: Finished sshkeys.service.
May 10 00:04:00.605574 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 10 00:04:00.613283 containerd[1591]: time="2025-05-10T00:04:00.610613920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 10 00:04:00.672812 containerd[1591]: time="2025-05-10T00:04:00.671510800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 10 00:04:00.673214 containerd[1591]: time="2025-05-10T00:04:00.673177200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 10 00:04:00.674195 containerd[1591]: time="2025-05-10T00:04:00.673866880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 10 00:04:00.674195 containerd[1591]: time="2025-05-10T00:04:00.673893960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 10 00:04:00.674195 containerd[1591]: time="2025-05-10T00:04:00.674048240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 10 00:04:00.674195 containerd[1591]: time="2025-05-10T00:04:00.674064760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 10 00:04:00.674195 containerd[1591]: time="2025-05-10T00:04:00.674122840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:04:00.674195 containerd[1591]: time="2025-05-10T00:04:00.674134440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 10 00:04:00.674564 containerd[1591]: time="2025-05-10T00:04:00.674542880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676271320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676297120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676308120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676418760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676612200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676818680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676836960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676916200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 10 00:04:00.677089 containerd[1591]: time="2025-05-10T00:04:00.676955960Z" level=info msg="metadata content store policy set" policy=shared
May 10 00:04:00.681304 containerd[1591]: time="2025-05-10T00:04:00.681162040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 10 00:04:00.681304 containerd[1591]: time="2025-05-10T00:04:00.681228400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 10 00:04:00.681686 containerd[1591]: time="2025-05-10T00:04:00.681245120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 10 00:04:00.681686 containerd[1591]: time="2025-05-10T00:04:00.681416000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 10 00:04:00.681686 containerd[1591]: time="2025-05-10T00:04:00.681460240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 10 00:04:00.681686 containerd[1591]: time="2025-05-10T00:04:00.681608800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.682800840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.682947320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.682965000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.682977840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.682992520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683014520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683027960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683042640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683057800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683071560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683083680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683097640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683119360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685271 containerd[1591]: time="2025-05-10T00:04:00.683134840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683148600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683162800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683174640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683187480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683199720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683213360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683227680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683242120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683283960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683298840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683312160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683332360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683353440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683366200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685635 containerd[1591]: time="2025-05-10T00:04:00.683377520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 10 00:04:00.685905 containerd[1591]: time="2025-05-10T00:04:00.683483040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 10 00:04:00.685905 containerd[1591]: time="2025-05-10T00:04:00.683499600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 10 00:04:00.685905 containerd[1591]: time="2025-05-10T00:04:00.683510440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 10 00:04:00.685905 containerd[1591]: time="2025-05-10T00:04:00.683522840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 10 00:04:00.685905 containerd[1591]: time="2025-05-10T00:04:00.683532680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 10 00:04:00.685905 containerd[1591]: time="2025-05-10T00:04:00.683545400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 10 00:04:00.685905 containerd[1591]: time="2025-05-10T00:04:00.683558240Z" level=info msg="NRI interface is disabled by configuration."
May 10 00:04:00.685905 containerd[1591]: time="2025-05-10T00:04:00.683571240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 10 00:04:00.686053 containerd[1591]: time="2025-05-10T00:04:00.683922840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 10 00:04:00.686053 containerd[1591]: time="2025-05-10T00:04:00.683988120Z" level=info msg="Connect containerd service"
May 10 00:04:00.686053 containerd[1591]: time="2025-05-10T00:04:00.684090480Z" level=info msg="using legacy CRI server"
May 10 00:04:00.686053 containerd[1591]: time="2025-05-10T00:04:00.684097400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 10 00:04:00.686053 containerd[1591]: time="2025-05-10T00:04:00.684182600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.688902200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689405320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689446240Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689593880Z" level=info msg="Start subscribing containerd event"
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689630600Z" level=info msg="Start recovering state"
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689690560Z" level=info msg="Start event monitor"
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689710360Z" level=info msg="Start snapshots syncer"
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689737320Z" level=info msg="Start cni network conf syncer for default"
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689744680Z" level=info msg="Start streaming server"
May 10 00:04:00.690240 containerd[1591]: time="2025-05-10T00:04:00.689871240Z" level=info msg="containerd successfully booted in 0.082033s"
May 10 00:04:00.690024 systemd[1]: Started containerd.service - containerd container runtime.
May 10 00:04:01.137300 tar[1585]: linux-arm64/LICENSE
May 10 00:04:01.137300 tar[1585]: linux-arm64/README.md
May 10 00:04:01.173581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:04:01.174880 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 10 00:04:01.176542 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:04:01.773582 kubelet[1673]: E0510 00:04:01.773535 1673 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:04:01.779581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:04:01.780111 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:04:01.913446 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 10 00:04:01.939867 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 10 00:04:01.947839 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 10 00:04:01.961196 systemd[1]: issuegen.service: Deactivated successfully.
May 10 00:04:01.961583 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 10 00:04:01.969685 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 10 00:04:01.984917 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 10 00:04:01.992677 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 10 00:04:02.002729 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 10 00:04:02.004003 systemd[1]: Reached target getty.target - Login Prompts.
May 10 00:04:02.005081 systemd[1]: Reached target multi-user.target - Multi-User System.
May 10 00:04:02.006084 systemd[1]: Startup finished in 7.850s (kernel) + 5.254s (userspace) = 13.104s.
May 10 00:04:12.030584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 10 00:04:12.041653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:04:12.169234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:04:12.183944 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:04:12.245502 kubelet[1720]: E0510 00:04:12.244661 1720 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:04:12.251865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:04:12.252164 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:04:22.326789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 10 00:04:22.337609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:04:22.481573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:04:22.486465 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:04:22.543283 kubelet[1741]: E0510 00:04:22.543199 1741 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:04:22.548570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:04:22.548776 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:04:32.576625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 10 00:04:32.583588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:04:32.687559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:04:32.702062 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:04:32.750208 kubelet[1761]: E0510 00:04:32.750124 1761 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:04:32.755459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:04:32.755656 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:04:42.826460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 10 00:04:42.835613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:04:42.949467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:04:42.960940 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:04:43.003667 kubelet[1782]: E0510 00:04:43.003590 1782 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:04:43.007152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:04:43.007443 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:04:45.947343 update_engine[1563]: I20250510 00:04:45.946820 1563 update_attempter.cc:509] Updating boot flags...
May 10 00:04:45.997310 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1800)
May 10 00:04:53.076576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 10 00:04:53.083613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:04:53.212517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:04:53.213581 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:04:53.261538 kubelet[1818]: E0510 00:04:53.261438 1818 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:04:53.265491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:04:53.265969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:05:03.326686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 10 00:05:03.338527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:05:03.462522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:05:03.473824 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:05:03.523788 kubelet[1838]: E0510 00:05:03.523744 1838 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:05:03.528223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:05:03.529364 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:05:13.576095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 10 00:05:13.585627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:05:13.708521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:05:13.713291 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:05:13.765047 kubelet[1859]: E0510 00:05:13.764983 1859 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:05:13.768492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:05:13.768791 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:05:23.826902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 10 00:05:23.838600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:05:23.978514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:05:23.982900 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:05:24.027651 kubelet[1880]: E0510 00:05:24.027585 1880 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:05:24.031902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:05:24.032134 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:05:34.076587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 10 00:05:34.083426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:34.207561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:34.213373 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:34.258144 kubelet[1901]: E0510 00:05:34.258069 1901 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:34.263552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:34.263783 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:43.345882 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 10 00:05:43.351664 systemd[1]: Started sshd@0-88.99.34.22:22-147.75.109.163:57046.service - OpenSSH per-connection server daemon (147.75.109.163:57046). May 10 00:05:44.326035 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 10 00:05:44.334803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:44.352355 sshd[1911]: Accepted publickey for core from 147.75.109.163 port 57046 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:44.353873 sshd[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:44.366315 systemd-logind[1559]: New session 1 of user core. May 10 00:05:44.367730 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 10 00:05:44.372711 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 10 00:05:44.396519 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 00:05:44.406783 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 00:05:44.409913 (systemd)[1921]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:05:44.494442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:44.495218 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:44.525389 systemd[1921]: Queued start job for default target default.target. May 10 00:05:44.525779 systemd[1921]: Created slice app.slice - User Application Slice. May 10 00:05:44.525803 systemd[1921]: Reached target paths.target - Paths. May 10 00:05:44.525815 systemd[1921]: Reached target timers.target - Timers. May 10 00:05:44.537483 systemd[1921]: Starting dbus.socket - D-Bus User Message Bus Socket... May 10 00:05:44.544294 kubelet[1935]: E0510 00:05:44.542704 1935 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:44.546582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:44.546735 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:44.549712 systemd[1921]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 00:05:44.549784 systemd[1921]: Reached target sockets.target - Sockets. May 10 00:05:44.549797 systemd[1921]: Reached target basic.target - Basic System. May 10 00:05:44.549842 systemd[1921]: Reached target default.target - Main User Target. May 10 00:05:44.549867 systemd[1921]: Startup finished in 131ms. 
May 10 00:05:44.549970 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 00:05:44.559909 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 00:05:45.271026 systemd[1]: Started sshd@1-88.99.34.22:22-147.75.109.163:57050.service - OpenSSH per-connection server daemon (147.75.109.163:57050). May 10 00:05:46.281689 sshd[1950]: Accepted publickey for core from 147.75.109.163 port 57050 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:46.284196 sshd[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:46.290763 systemd-logind[1559]: New session 2 of user core. May 10 00:05:46.294606 systemd[1]: Started session-2.scope - Session 2 of User core. May 10 00:05:46.988620 sshd[1950]: pam_unix(sshd:session): session closed for user core May 10 00:05:46.993617 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. May 10 00:05:46.993969 systemd[1]: sshd@1-88.99.34.22:22-147.75.109.163:57050.service: Deactivated successfully. May 10 00:05:46.998575 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:05:46.999391 systemd-logind[1559]: Removed session 2. May 10 00:05:47.162978 systemd[1]: Started sshd@2-88.99.34.22:22-147.75.109.163:40452.service - OpenSSH per-connection server daemon (147.75.109.163:40452). May 10 00:05:48.171149 sshd[1958]: Accepted publickey for core from 147.75.109.163 port 40452 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:48.173451 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:48.178413 systemd-logind[1559]: New session 3 of user core. May 10 00:05:48.188743 systemd[1]: Started session-3.scope - Session 3 of User core. May 10 00:05:48.871031 sshd[1958]: pam_unix(sshd:session): session closed for user core May 10 00:05:48.877504 systemd[1]: sshd@2-88.99.34.22:22-147.75.109.163:40452.service: Deactivated successfully. 
May 10 00:05:48.882147 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:05:48.882960 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. May 10 00:05:48.884194 systemd-logind[1559]: Removed session 3. May 10 00:05:49.049675 systemd[1]: Started sshd@3-88.99.34.22:22-147.75.109.163:40454.service - OpenSSH per-connection server daemon (147.75.109.163:40454). May 10 00:05:50.060685 sshd[1966]: Accepted publickey for core from 147.75.109.163 port 40454 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:50.063017 sshd[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:50.068215 systemd-logind[1559]: New session 4 of user core. May 10 00:05:50.076901 systemd[1]: Started session-4.scope - Session 4 of User core. May 10 00:05:50.768660 sshd[1966]: pam_unix(sshd:session): session closed for user core May 10 00:05:50.773140 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. May 10 00:05:50.774664 systemd[1]: sshd@3-88.99.34.22:22-147.75.109.163:40454.service: Deactivated successfully. May 10 00:05:50.778986 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:05:50.780507 systemd-logind[1559]: Removed session 4. May 10 00:05:50.934634 systemd[1]: Started sshd@4-88.99.34.22:22-147.75.109.163:40468.service - OpenSSH per-connection server daemon (147.75.109.163:40468). May 10 00:05:51.928725 sshd[1974]: Accepted publickey for core from 147.75.109.163 port 40468 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:51.930654 sshd[1974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:51.936972 systemd-logind[1559]: New session 5 of user core. May 10 00:05:51.943807 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 10 00:05:52.467534 sudo[1978]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 00:05:52.467832 sudo[1978]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:52.485115 sudo[1978]: pam_unix(sudo:session): session closed for user root May 10 00:05:52.648488 sshd[1974]: pam_unix(sshd:session): session closed for user core May 10 00:05:52.654841 systemd[1]: sshd@4-88.99.34.22:22-147.75.109.163:40468.service: Deactivated successfully. May 10 00:05:52.658686 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:05:52.659688 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. May 10 00:05:52.660886 systemd-logind[1559]: Removed session 5. May 10 00:05:52.819752 systemd[1]: Started sshd@5-88.99.34.22:22-147.75.109.163:40482.service - OpenSSH per-connection server daemon (147.75.109.163:40482). May 10 00:05:53.813013 sshd[1983]: Accepted publickey for core from 147.75.109.163 port 40482 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:53.815326 sshd[1983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:53.821341 systemd-logind[1559]: New session 6 of user core. May 10 00:05:53.833985 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 10 00:05:54.348778 sudo[1988]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 00:05:54.349111 sudo[1988]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:54.355299 sudo[1988]: pam_unix(sudo:session): session closed for user root May 10 00:05:54.361890 sudo[1987]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 10 00:05:54.362198 sudo[1987]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:54.377540 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 10 00:05:54.397227 auditctl[1991]: No rules May 10 00:05:54.397906 systemd[1]: audit-rules.service: Deactivated successfully. May 10 00:05:54.398212 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 10 00:05:54.408643 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 10 00:05:54.434975 augenrules[2010]: No rules May 10 00:05:54.436604 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 10 00:05:54.440502 sudo[1987]: pam_unix(sudo:session): session closed for user root May 10 00:05:54.576466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 10 00:05:54.587681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:54.602936 sshd[1983]: pam_unix(sshd:session): session closed for user core May 10 00:05:54.611416 systemd[1]: sshd@5-88.99.34.22:22-147.75.109.163:40482.service: Deactivated successfully. May 10 00:05:54.615840 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:05:54.616721 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. May 10 00:05:54.617921 systemd-logind[1559]: Removed session 6. May 10 00:05:54.718535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 00:05:54.723599 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:54.777832 kubelet[2031]: E0510 00:05:54.777780 2031 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:54.778552 systemd[1]: Started sshd@6-88.99.34.22:22-147.75.109.163:40494.service - OpenSSH per-connection server daemon (147.75.109.163:40494). May 10 00:05:54.781625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:54.781796 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:55.789730 sshd[2038]: Accepted publickey for core from 147.75.109.163 port 40494 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:55.792005 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:55.797557 systemd-logind[1559]: New session 7 of user core. May 10 00:05:55.810901 systemd[1]: Started session-7.scope - Session 7 of User core. May 10 00:05:56.328083 sudo[2044]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:05:56.328839 sudo[2044]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:56.641026 (dockerd)[2059]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 10 00:05:56.641762 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 10 00:05:56.879891 dockerd[2059]: time="2025-05-10T00:05:56.879838715Z" level=info msg="Starting up" May 10 00:05:56.970450 systemd[1]: var-lib-docker-metacopy\x2dcheck4261900261-merged.mount: Deactivated successfully. May 10 00:05:56.979965 dockerd[2059]: time="2025-05-10T00:05:56.979679624Z" level=info msg="Loading containers: start." May 10 00:05:57.086301 kernel: Initializing XFRM netlink socket May 10 00:05:57.168090 systemd-networkd[1244]: docker0: Link UP May 10 00:05:57.182350 dockerd[2059]: time="2025-05-10T00:05:57.182096349Z" level=info msg="Loading containers: done." May 10 00:05:57.202099 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1793489846-merged.mount: Deactivated successfully. May 10 00:05:57.203877 dockerd[2059]: time="2025-05-10T00:05:57.203455759Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:05:57.203877 dockerd[2059]: time="2025-05-10T00:05:57.203563674Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 10 00:05:57.203877 dockerd[2059]: time="2025-05-10T00:05:57.203674189Z" level=info msg="Daemon has completed initialization" May 10 00:05:57.236537 dockerd[2059]: time="2025-05-10T00:05:57.236342174Z" level=info msg="API listen on /run/docker.sock" May 10 00:05:57.236652 systemd[1]: Started docker.service - Docker Application Container Engine. May 10 00:05:58.380337 containerd[1591]: time="2025-05-10T00:05:58.380292535Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 10 00:05:59.061950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572028294.mount: Deactivated successfully. 
May 10 00:06:00.198505 containerd[1591]: time="2025-05-10T00:06:00.198457424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.200853 containerd[1591]: time="2025-05-10T00:06:00.200808854Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794242" May 10 00:06:00.202340 containerd[1591]: time="2025-05-10T00:06:00.202025728Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.204807 containerd[1591]: time="2025-05-10T00:06:00.204744584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.206415 containerd[1591]: time="2025-05-10T00:06:00.206006256Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.825667084s" May 10 00:06:00.206415 containerd[1591]: time="2025-05-10T00:06:00.206052855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 10 00:06:00.227213 containerd[1591]: time="2025-05-10T00:06:00.227168690Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 10 00:06:02.089506 containerd[1591]: time="2025-05-10T00:06:02.089405113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.091275 containerd[1591]: time="2025-05-10T00:06:02.091000262Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855570" May 10 00:06:02.092309 containerd[1591]: time="2025-05-10T00:06:02.092263622Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.095452 containerd[1591]: time="2025-05-10T00:06:02.095396522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.096760 containerd[1591]: time="2025-05-10T00:06:02.096625363Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.869418834s" May 10 00:06:02.096760 containerd[1591]: time="2025-05-10T00:06:02.096666641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 10 00:06:02.118144 containerd[1591]: time="2025-05-10T00:06:02.118100038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 10 00:06:03.134167 containerd[1591]: time="2025-05-10T00:06:03.132970768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:03.135563 containerd[1591]: 
time="2025-05-10T00:06:03.135484695Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263965" May 10 00:06:03.136663 containerd[1591]: time="2025-05-10T00:06:03.136557704Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:03.140808 containerd[1591]: time="2025-05-10T00:06:03.140723183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:03.142804 containerd[1591]: time="2025-05-10T00:06:03.142385415Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.024239978s" May 10 00:06:03.142804 containerd[1591]: time="2025-05-10T00:06:03.142440973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 10 00:06:03.166764 containerd[1591]: time="2025-05-10T00:06:03.166716151Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 00:06:04.114450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200302181.mount: Deactivated successfully. 
May 10 00:06:04.440580 containerd[1591]: time="2025-05-10T00:06:04.440451543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:04.442585 containerd[1591]: time="2025-05-10T00:06:04.442534689Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775731" May 10 00:06:04.443633 containerd[1591]: time="2025-05-10T00:06:04.443572902Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:04.446929 containerd[1591]: time="2025-05-10T00:06:04.446865296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:04.448300 containerd[1591]: time="2025-05-10T00:06:04.448112463Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.281346834s" May 10 00:06:04.448300 containerd[1591]: time="2025-05-10T00:06:04.448162622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 10 00:06:04.470651 containerd[1591]: time="2025-05-10T00:06:04.470588517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:06:04.825887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 10 00:06:04.837026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 10 00:06:04.966534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:04.973179 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:06:05.026065 kubelet[2304]: E0510 00:06:05.026024 2304 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:06:05.029646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:06:05.030016 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:06:05.040534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207354144.mount: Deactivated successfully. May 10 00:06:05.631505 containerd[1591]: time="2025-05-10T00:06:05.631443321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.632849 containerd[1591]: time="2025-05-10T00:06:05.632802009Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" May 10 00:06:05.633737 containerd[1591]: time="2025-05-10T00:06:05.633668789Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.636832 containerd[1591]: time="2025-05-10T00:06:05.636763317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.638379 containerd[1591]: 
time="2025-05-10T00:06:05.638057367Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.167422411s" May 10 00:06:05.638379 containerd[1591]: time="2025-05-10T00:06:05.638100566Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 10 00:06:05.658681 containerd[1591]: time="2025-05-10T00:06:05.658576649Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 10 00:06:06.209815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164277801.mount: Deactivated successfully. May 10 00:06:06.216293 containerd[1591]: time="2025-05-10T00:06:06.216188833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:06.217679 containerd[1591]: time="2025-05-10T00:06:06.217638243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" May 10 00:06:06.218696 containerd[1591]: time="2025-05-10T00:06:06.218651902Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:06.220966 containerd[1591]: time="2025-05-10T00:06:06.220894496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:06.223074 containerd[1591]: time="2025-05-10T00:06:06.222891855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id 
\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 564.267247ms" May 10 00:06:06.223074 containerd[1591]: time="2025-05-10T00:06:06.222967494Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 10 00:06:06.244624 containerd[1591]: time="2025-05-10T00:06:06.244589768Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 00:06:06.856946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354574834.mount: Deactivated successfully. May 10 00:06:11.107920 containerd[1591]: time="2025-05-10T00:06:11.107756490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:11.109559 containerd[1591]: time="2025-05-10T00:06:11.109518636Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" May 10 00:06:11.110920 containerd[1591]: time="2025-05-10T00:06:11.110754425Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:11.114245 containerd[1591]: time="2025-05-10T00:06:11.114192117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:11.115667 containerd[1591]: time="2025-05-10T00:06:11.115527345Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest 
\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.870743582s" May 10 00:06:11.115667 containerd[1591]: time="2025-05-10T00:06:11.115563505Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 10 00:06:15.076811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. May 10 00:06:15.085505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:15.219528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:15.220337 (kubelet)[2486]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:06:15.269308 kubelet[2486]: E0510 00:06:15.269249 2486 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:06:15.272197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:06:15.272519 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:06:15.701612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:15.709484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:15.736703 systemd[1]: Reloading requested from client PID 2503 ('systemctl') (unit session-7.scope)... May 10 00:06:15.736724 systemd[1]: Reloading... May 10 00:06:15.852281 zram_generator::config[2548]: No configuration found. 
May 10 00:06:15.956459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:16.024878 systemd[1]: Reloading finished in 287 ms. May 10 00:06:16.081486 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 00:06:16.081573 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 00:06:16.081876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:16.091162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:16.203543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:16.204092 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:06:16.254862 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:16.254862 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:06:16.254862 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 00:06:16.254862 kubelet[2603]: I0510 00:06:16.253825 2603 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:06:17.756855 kubelet[2603]: I0510 00:06:17.756820 2603 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:06:17.757419 kubelet[2603]: I0510 00:06:17.757400 2603 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:06:17.757814 kubelet[2603]: I0510 00:06:17.757793 2603 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:06:17.783595 kubelet[2603]: I0510 00:06:17.783539 2603 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:17.784094 kubelet[2603]: E0510 00:06:17.784055 2603 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://88.99.34.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.793436 kubelet[2603]: I0510 00:06:17.793407 2603 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:06:17.796159 kubelet[2603]: I0510 00:06:17.795383 2603 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:06:17.796159 kubelet[2603]: I0510 00:06:17.795427 2603 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-60bc3761e6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:06:17.796159 kubelet[2603]: I0510 00:06:17.795671 2603 topology_manager.go:138] "Creating topology manager with none policy" 
May 10 00:06:17.796159 kubelet[2603]: I0510 00:06:17.795681 2603 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:06:17.796486 kubelet[2603]: I0510 00:06:17.795900 2603 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:17.797131 kubelet[2603]: I0510 00:06:17.797113 2603 kubelet.go:400] "Attempting to sync node with API server" May 10 00:06:17.797762 kubelet[2603]: I0510 00:06:17.797733 2603 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:06:17.798017 kubelet[2603]: I0510 00:06:17.798005 2603 kubelet.go:312] "Adding apiserver pod source" May 10 00:06:17.798142 kubelet[2603]: I0510 00:06:17.798134 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:06:17.798595 kubelet[2603]: W0510 00:06:17.798516 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://88.99.34.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-60bc3761e6&limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.798647 kubelet[2603]: E0510 00:06:17.798611 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://88.99.34.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-60bc3761e6&limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.799400 kubelet[2603]: W0510 00:06:17.799365 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://88.99.34.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.799727 kubelet[2603]: E0510 00:06:17.799712 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://88.99.34.22:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.800323 kubelet[2603]: I0510 00:06:17.800027 2603 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:06:17.800589 kubelet[2603]: I0510 00:06:17.800574 2603 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:06:17.800759 kubelet[2603]: W0510 00:06:17.800734 2603 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:06:17.801654 kubelet[2603]: I0510 00:06:17.801634 2603 server.go:1264] "Started kubelet" May 10 00:06:17.806330 kubelet[2603]: I0510 00:06:17.806306 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:06:17.809941 kubelet[2603]: E0510 00:06:17.809706 2603 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://88.99.34.22:6443/api/v1/namespaces/default/events\": dial tcp 88.99.34.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-60bc3761e6.183e01b3efe119e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-60bc3761e6,UID:ci-4081-3-3-n-60bc3761e6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-60bc3761e6,},FirstTimestamp:2025-05-10 00:06:17.801611753 +0000 UTC m=+1.591732138,LastTimestamp:2025-05-10 00:06:17.801611753 +0000 UTC m=+1.591732138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-60bc3761e6,}" May 10 00:06:17.812453 kubelet[2603]: I0510 00:06:17.812397 2603 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:06:17.813553 kubelet[2603]: I0510 00:06:17.813248 2603 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:06:17.815289 kubelet[2603]: I0510 00:06:17.813715 2603 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:06:17.815289 kubelet[2603]: I0510 00:06:17.813955 2603 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:06:17.815289 kubelet[2603]: I0510 00:06:17.813608 2603 server.go:455] "Adding debug handlers to kubelet server" May 10 00:06:17.817356 kubelet[2603]: E0510 00:06:17.817311 2603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.34.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-60bc3761e6?timeout=10s\": dial tcp 88.99.34.22:6443: connect: connection refused" interval="200ms" May 10 00:06:17.817822 kubelet[2603]: I0510 00:06:17.817796 2603 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:06:17.818004 kubelet[2603]: E0510 00:06:17.817986 2603 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:06:17.818335 kubelet[2603]: I0510 00:06:17.818320 2603 factory.go:221] Registration of the systemd container factory successfully May 10 00:06:17.818492 kubelet[2603]: I0510 00:06:17.818476 2603 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:06:17.821534 kubelet[2603]: I0510 00:06:17.821510 2603 reconciler.go:26] "Reconciler: start to sync state" May 10 00:06:17.822719 kubelet[2603]: I0510 00:06:17.822699 2603 factory.go:221] Registration of the containerd container factory successfully May 10 00:06:17.842800 kubelet[2603]: I0510 00:06:17.842734 2603 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 10 00:06:17.844391 kubelet[2603]: I0510 00:06:17.844362 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:06:17.844470 kubelet[2603]: I0510 00:06:17.844407 2603 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:06:17.844470 kubelet[2603]: I0510 00:06:17.844434 2603 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:06:17.844514 kubelet[2603]: E0510 00:06:17.844489 2603 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:06:17.847792 kubelet[2603]: W0510 00:06:17.847721 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://88.99.34.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.847929 kubelet[2603]: E0510 00:06:17.847915 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://88.99.34.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.850639 kubelet[2603]: W0510 00:06:17.850594 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.99.34.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.850826 kubelet[2603]: E0510 00:06:17.850810 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://88.99.34.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:17.851237 kubelet[2603]: I0510 00:06:17.851126 2603 
cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:06:17.851355 kubelet[2603]: I0510 00:06:17.851343 2603 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:06:17.851413 kubelet[2603]: I0510 00:06:17.851405 2603 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:17.853860 kubelet[2603]: I0510 00:06:17.853824 2603 policy_none.go:49] "None policy: Start" May 10 00:06:17.855410 kubelet[2603]: I0510 00:06:17.855371 2603 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:06:17.855560 kubelet[2603]: I0510 00:06:17.855428 2603 state_mem.go:35] "Initializing new in-memory state store" May 10 00:06:17.863283 kubelet[2603]: I0510 00:06:17.861637 2603 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:06:17.863283 kubelet[2603]: I0510 00:06:17.861861 2603 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:06:17.863283 kubelet[2603]: I0510 00:06:17.861960 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:06:17.867623 kubelet[2603]: E0510 00:06:17.867598 2603 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-60bc3761e6\" not found" May 10 00:06:17.917479 kubelet[2603]: I0510 00:06:17.917432 2603 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:17.918188 kubelet[2603]: E0510 00:06:17.918162 2603 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://88.99.34.22:6443/api/v1/nodes\": dial tcp 88.99.34.22:6443: connect: connection refused" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:17.944808 kubelet[2603]: I0510 00:06:17.944712 2603 topology_manager.go:215] "Topology Admit Handler" podUID="52469b807676a11b636599956e9fbac6" podNamespace="kube-system" 
podName="kube-apiserver-ci-4081-3-3-n-60bc3761e6" May 10 00:06:17.948043 kubelet[2603]: I0510 00:06:17.947994 2603 topology_manager.go:215] "Topology Admit Handler" podUID="6089fabeb6788baf8816c701e44ab527" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:17.951414 kubelet[2603]: I0510 00:06:17.951371 2603 topology_manager.go:215] "Topology Admit Handler" podUID="e979dace742fdc7983e38c4bfcd10b4a" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.019165 kubelet[2603]: E0510 00:06:18.019017 2603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.34.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-60bc3761e6?timeout=10s\": dial tcp 88.99.34.22:6443: connect: connection refused" interval="400ms" May 10 00:06:18.023061 kubelet[2603]: I0510 00:06:18.023020 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52469b807676a11b636599956e9fbac6-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-60bc3761e6\" (UID: \"52469b807676a11b636599956e9fbac6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.023432 kubelet[2603]: I0510 00:06:18.023374 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.023681 kubelet[2603]: I0510 00:06:18.023618 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-kubeconfig\") pod 
\"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.024033 kubelet[2603]: I0510 00:06:18.023893 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.024033 kubelet[2603]: I0510 00:06:18.023979 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.024374 kubelet[2603]: I0510 00:06:18.024195 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.024374 kubelet[2603]: I0510 00:06:18.024316 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e979dace742fdc7983e38c4bfcd10b4a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-60bc3761e6\" (UID: \"e979dace742fdc7983e38c4bfcd10b4a\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.024723 kubelet[2603]: I0510 00:06:18.024355 2603 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52469b807676a11b636599956e9fbac6-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-60bc3761e6\" (UID: \"52469b807676a11b636599956e9fbac6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.024723 kubelet[2603]: I0510 00:06:18.024665 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52469b807676a11b636599956e9fbac6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-60bc3761e6\" (UID: \"52469b807676a11b636599956e9fbac6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.121321 kubelet[2603]: I0510 00:06:18.121141 2603 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.121770 kubelet[2603]: E0510 00:06:18.121655 2603 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://88.99.34.22:6443/api/v1/nodes\": dial tcp 88.99.34.22:6443: connect: connection refused" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.257561 containerd[1591]: time="2025-05-10T00:06:18.257358056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-60bc3761e6,Uid:52469b807676a11b636599956e9fbac6,Namespace:kube-system,Attempt:0,}" May 10 00:06:18.258166 containerd[1591]: time="2025-05-10T00:06:18.257958899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-60bc3761e6,Uid:6089fabeb6788baf8816c701e44ab527,Namespace:kube-system,Attempt:0,}" May 10 00:06:18.261436 containerd[1591]: time="2025-05-10T00:06:18.261383879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-60bc3761e6,Uid:e979dace742fdc7983e38c4bfcd10b4a,Namespace:kube-system,Attempt:0,}" May 10 00:06:18.420235 kubelet[2603]: E0510 
00:06:18.420169 2603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.34.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-60bc3761e6?timeout=10s\": dial tcp 88.99.34.22:6443: connect: connection refused" interval="800ms" May 10 00:06:18.525117 kubelet[2603]: I0510 00:06:18.525054 2603 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.525853 kubelet[2603]: E0510 00:06:18.525797 2603 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://88.99.34.22:6443/api/v1/nodes\": dial tcp 88.99.34.22:6443: connect: connection refused" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:18.799059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148531597.mount: Deactivated successfully. May 10 00:06:18.805985 containerd[1591]: time="2025-05-10T00:06:18.805892416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.807678 containerd[1591]: time="2025-05-10T00:06:18.807607306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 10 00:06:18.811539 containerd[1591]: time="2025-05-10T00:06:18.811482088Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.812766 containerd[1591]: time="2025-05-10T00:06:18.812701935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.813577 containerd[1591]: time="2025-05-10T00:06:18.813538420Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:06:18.815067 containerd[1591]: time="2025-05-10T00:06:18.814996189Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.816518 containerd[1591]: time="2025-05-10T00:06:18.816461437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:06:18.817772 containerd[1591]: time="2025-05-10T00:06:18.817590644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:18.822716 containerd[1591]: time="2025-05-10T00:06:18.822632353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.154456ms" May 10 00:06:18.825353 containerd[1591]: time="2025-05-10T00:06:18.825209208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.751968ms" May 10 00:06:18.827276 containerd[1591]: time="2025-05-10T00:06:18.827126699Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.094239ms" May 10 00:06:18.846476 kubelet[2603]: W0510 00:06:18.845823 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://88.99.34.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:18.846476 kubelet[2603]: E0510 00:06:18.845885 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://88.99.34.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:18.969284 containerd[1591]: time="2025-05-10T00:06:18.968688405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:18.969284 containerd[1591]: time="2025-05-10T00:06:18.969034287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:18.969284 containerd[1591]: time="2025-05-10T00:06:18.969079167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:18.969284 containerd[1591]: time="2025-05-10T00:06:18.969113728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:18.969284 containerd[1591]: time="2025-05-10T00:06:18.969147608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.969284 containerd[1591]: time="2025-05-10T00:06:18.969090288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.969284 containerd[1591]: time="2025-05-10T00:06:18.969174248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.969879 containerd[1591]: time="2025-05-10T00:06:18.969405529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.971929 containerd[1591]: time="2025-05-10T00:06:18.971826703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:18.972231 containerd[1591]: time="2025-05-10T00:06:18.972003625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:18.972231 containerd[1591]: time="2025-05-10T00:06:18.972020345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.973341 containerd[1591]: time="2025-05-10T00:06:18.972725589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:19.039696 containerd[1591]: time="2025-05-10T00:06:19.039574409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-60bc3761e6,Uid:6089fabeb6788baf8816c701e44ab527,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7edd37c58646d756c6926d7c993c78f6314fff1fce5f0d007ff4b6a799e5829\"" May 10 00:06:19.045814 containerd[1591]: time="2025-05-10T00:06:19.044714368Z" level=info msg="CreateContainer within sandbox \"a7edd37c58646d756c6926d7c993c78f6314fff1fce5f0d007ff4b6a799e5829\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:06:19.052383 containerd[1591]: time="2025-05-10T00:06:19.052283545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-60bc3761e6,Uid:e979dace742fdc7983e38c4bfcd10b4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2c863642668b43ee6fe2d36cf53633ec7dc667a615ef12716bd0aa1f3380d88\"" May 10 00:06:19.057129 containerd[1591]: time="2025-05-10T00:06:19.057095582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-60bc3761e6,Uid:52469b807676a11b636599956e9fbac6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b4f6c4dd8c2193c92fbece0d47deeff1e8c0030cc1fb8b50b5621e801458f3a\"" May 10 00:06:19.058357 containerd[1591]: time="2025-05-10T00:06:19.058244991Z" level=info msg="CreateContainer within sandbox \"b2c863642668b43ee6fe2d36cf53633ec7dc667a615ef12716bd0aa1f3380d88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:06:19.061444 containerd[1591]: time="2025-05-10T00:06:19.061413295Z" level=info msg="CreateContainer within sandbox \"2b4f6c4dd8c2193c92fbece0d47deeff1e8c0030cc1fb8b50b5621e801458f3a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:06:19.065186 kubelet[2603]: W0510 00:06:19.065114 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://88.99.34.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:19.065328 kubelet[2603]: E0510 00:06:19.065317 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://88.99.34.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:19.071344 containerd[1591]: time="2025-05-10T00:06:19.071301210Z" level=info msg="CreateContainer within sandbox \"a7edd37c58646d756c6926d7c993c78f6314fff1fce5f0d007ff4b6a799e5829\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"25ed6363d976b70143d0aa5da416f7bee442a216566844f1c61b0fa7bd2a4206\"" May 10 00:06:19.072702 containerd[1591]: time="2025-05-10T00:06:19.072658341Z" level=info msg="StartContainer for \"25ed6363d976b70143d0aa5da416f7bee442a216566844f1c61b0fa7bd2a4206\"" May 10 00:06:19.077345 containerd[1591]: time="2025-05-10T00:06:19.077266896Z" level=info msg="CreateContainer within sandbox \"b2c863642668b43ee6fe2d36cf53633ec7dc667a615ef12716bd0aa1f3380d88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aab1caffa1c536ea89e7da2c2c869f6d0b987466425467c05a164c79429cf773\"" May 10 00:06:19.078619 containerd[1591]: time="2025-05-10T00:06:19.078591906Z" level=info msg="StartContainer for \"aab1caffa1c536ea89e7da2c2c869f6d0b987466425467c05a164c79429cf773\"" May 10 00:06:19.087543 containerd[1591]: time="2025-05-10T00:06:19.087497294Z" level=info msg="CreateContainer within sandbox \"2b4f6c4dd8c2193c92fbece0d47deeff1e8c0030cc1fb8b50b5621e801458f3a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f5d545cfdab31e328311d7aa3c91c6a27afd49022731e955041f0069ca843b84\"" May 10 00:06:19.089568 containerd[1591]: time="2025-05-10T00:06:19.088477421Z" level=info msg="StartContainer for 
\"f5d545cfdab31e328311d7aa3c91c6a27afd49022731e955041f0069ca843b84\"" May 10 00:06:19.145914 kubelet[2603]: W0510 00:06:19.145733 2603 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://88.99.34.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-60bc3761e6&limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:19.146045 kubelet[2603]: E0510 00:06:19.146007 2603 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://88.99.34.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-60bc3761e6&limit=500&resourceVersion=0": dial tcp 88.99.34.22:6443: connect: connection refused May 10 00:06:19.177126 containerd[1591]: time="2025-05-10T00:06:19.176918655Z" level=info msg="StartContainer for \"25ed6363d976b70143d0aa5da416f7bee442a216566844f1c61b0fa7bd2a4206\" returns successfully" May 10 00:06:19.189350 containerd[1591]: time="2025-05-10T00:06:19.188756145Z" level=info msg="StartContainer for \"aab1caffa1c536ea89e7da2c2c869f6d0b987466425467c05a164c79429cf773\" returns successfully" May 10 00:06:19.204634 containerd[1591]: time="2025-05-10T00:06:19.204587745Z" level=info msg="StartContainer for \"f5d545cfdab31e328311d7aa3c91c6a27afd49022731e955041f0069ca843b84\" returns successfully" May 10 00:06:19.222325 kubelet[2603]: E0510 00:06:19.220913 2603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.34.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-60bc3761e6?timeout=10s\": dial tcp 88.99.34.22:6443: connect: connection refused" interval="1.6s" May 10 00:06:19.331803 kubelet[2603]: I0510 00:06:19.331735 2603 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:21.654463 kubelet[2603]: E0510 00:06:21.654400 2603 nodelease.go:49] "Failed to get node when trying to set owner 
ref to the node lease" err="nodes \"ci-4081-3-3-n-60bc3761e6\" not found" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:21.735049 kubelet[2603]: I0510 00:06:21.734990 2603 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:21.757406 kubelet[2603]: E0510 00:06:21.757325 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-60bc3761e6\" not found" May 10 00:06:21.858112 kubelet[2603]: E0510 00:06:21.858057 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-60bc3761e6\" not found" May 10 00:06:21.958677 kubelet[2603]: E0510 00:06:21.958539 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-60bc3761e6\" not found" May 10 00:06:22.059352 kubelet[2603]: E0510 00:06:22.059194 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-60bc3761e6\" not found" May 10 00:06:22.160130 kubelet[2603]: E0510 00:06:22.160075 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-60bc3761e6\" not found" May 10 00:06:22.260980 kubelet[2603]: E0510 00:06:22.260825 2603 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-60bc3761e6\" not found" May 10 00:06:22.805620 kubelet[2603]: I0510 00:06:22.805572 2603 apiserver.go:52] "Watching apiserver" May 10 00:06:22.820452 kubelet[2603]: I0510 00:06:22.820394 2603 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:06:24.155309 systemd[1]: Reloading requested from client PID 2875 ('systemctl') (unit session-7.scope)... May 10 00:06:24.155329 systemd[1]: Reloading... May 10 00:06:24.249323 zram_generator::config[2918]: No configuration found. 
May 10 00:06:24.358573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:24.448832 systemd[1]: Reloading finished in 292 ms. May 10 00:06:24.490990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:24.509042 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:06:24.510060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:24.518070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:24.637516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:24.644579 (kubelet)[2970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:06:24.701584 kubelet[2970]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:24.701584 kubelet[2970]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:06:24.701584 kubelet[2970]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 00:06:24.701584 kubelet[2970]: I0510 00:06:24.701024 2970 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:06:24.711376 kubelet[2970]: I0510 00:06:24.707348 2970 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:06:24.711376 kubelet[2970]: I0510 00:06:24.707377 2970 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:06:24.711376 kubelet[2970]: I0510 00:06:24.708358 2970 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:06:24.713746 kubelet[2970]: I0510 00:06:24.713715 2970 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:06:24.716488 kubelet[2970]: I0510 00:06:24.716458 2970 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:24.722389 kubelet[2970]: I0510 00:06:24.722356 2970 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:06:24.723102 kubelet[2970]: I0510 00:06:24.723070 2970 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:06:24.723799 kubelet[2970]: I0510 00:06:24.723102 2970 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-60bc3761e6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:06:24.723928 kubelet[2970]: I0510 00:06:24.723803 2970 topology_manager.go:138] "Creating topology manager with none policy" 
May 10 00:06:24.723928 kubelet[2970]: I0510 00:06:24.723815 2970 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:06:24.723928 kubelet[2970]: I0510 00:06:24.723852 2970 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:24.723999 kubelet[2970]: I0510 00:06:24.723958 2970 kubelet.go:400] "Attempting to sync node with API server" May 10 00:06:24.723999 kubelet[2970]: I0510 00:06:24.723970 2970 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:06:24.726374 kubelet[2970]: I0510 00:06:24.726340 2970 kubelet.go:312] "Adding apiserver pod source" May 10 00:06:24.726374 kubelet[2970]: I0510 00:06:24.726374 2970 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:06:24.729643 kubelet[2970]: I0510 00:06:24.729622 2970 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:06:24.729843 kubelet[2970]: I0510 00:06:24.729828 2970 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:06:24.730203 kubelet[2970]: I0510 00:06:24.730187 2970 server.go:1264] "Started kubelet" May 10 00:06:24.733559 kubelet[2970]: I0510 00:06:24.731870 2970 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:06:24.733559 kubelet[2970]: I0510 00:06:24.732125 2970 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:06:24.733559 kubelet[2970]: I0510 00:06:24.732162 2970 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:06:24.733559 kubelet[2970]: I0510 00:06:24.733433 2970 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:06:24.734064 kubelet[2970]: I0510 00:06:24.733837 2970 server.go:455] "Adding debug handlers to kubelet server" May 10 00:06:24.751510 kubelet[2970]: I0510 00:06:24.750048 2970 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:06:24.760016 kubelet[2970]: I0510 00:06:24.759988 2970 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:06:24.760162 kubelet[2970]: I0510 00:06:24.760147 2970 reconciler.go:26] "Reconciler: start to sync state" May 10 00:06:24.765287 kubelet[2970]: I0510 00:06:24.763740 2970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:06:24.771952 kubelet[2970]: I0510 00:06:24.771918 2970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:06:24.772047 kubelet[2970]: I0510 00:06:24.771964 2970 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:06:24.772047 kubelet[2970]: I0510 00:06:24.771986 2970 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:06:24.772047 kubelet[2970]: E0510 00:06:24.772023 2970 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:06:24.772802 kubelet[2970]: I0510 00:06:24.772759 2970 factory.go:221] Registration of the systemd container factory successfully May 10 00:06:24.773026 kubelet[2970]: I0510 00:06:24.772862 2970 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:06:24.775510 kubelet[2970]: I0510 00:06:24.775479 2970 factory.go:221] Registration of the containerd container factory successfully May 10 00:06:24.778411 kubelet[2970]: E0510 00:06:24.778385 2970 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:06:24.831018 kubelet[2970]: I0510 00:06:24.830966 2970 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:06:24.831018 kubelet[2970]: I0510 00:06:24.831002 2970 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:06:24.831018 kubelet[2970]: I0510 00:06:24.831027 2970 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:24.831204 kubelet[2970]: I0510 00:06:24.831186 2970 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:06:24.831229 kubelet[2970]: I0510 00:06:24.831203 2970 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:06:24.831229 kubelet[2970]: I0510 00:06:24.831221 2970 policy_none.go:49] "None policy: Start" May 10 00:06:24.832083 kubelet[2970]: I0510 00:06:24.832024 2970 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:06:24.832157 kubelet[2970]: I0510 00:06:24.832099 2970 state_mem.go:35] "Initializing new in-memory state store" May 10 00:06:24.832452 kubelet[2970]: I0510 00:06:24.832431 2970 state_mem.go:75] "Updated machine memory state" May 10 00:06:24.835476 kubelet[2970]: I0510 00:06:24.834448 2970 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:06:24.835476 kubelet[2970]: I0510 00:06:24.834743 2970 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:06:24.835476 kubelet[2970]: I0510 00:06:24.834905 2970 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:06:24.856094 kubelet[2970]: I0510 00:06:24.856063 2970 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.867044 kubelet[2970]: I0510 00:06:24.867011 2970 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.867187 
kubelet[2970]: I0510 00:06:24.867106 2970 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.873018 kubelet[2970]: I0510 00:06:24.872414 2970 topology_manager.go:215] "Topology Admit Handler" podUID="6089fabeb6788baf8816c701e44ab527" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.873018 kubelet[2970]: I0510 00:06:24.872536 2970 topology_manager.go:215] "Topology Admit Handler" podUID="e979dace742fdc7983e38c4bfcd10b4a" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.873018 kubelet[2970]: I0510 00:06:24.872590 2970 topology_manager.go:215] "Topology Admit Handler" podUID="52469b807676a11b636599956e9fbac6" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961225 kubelet[2970]: I0510 00:06:24.961079 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961225 kubelet[2970]: I0510 00:06:24.961150 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52469b807676a11b636599956e9fbac6-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-60bc3761e6\" (UID: \"52469b807676a11b636599956e9fbac6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961225 kubelet[2970]: I0510 00:06:24.961192 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/52469b807676a11b636599956e9fbac6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-60bc3761e6\" (UID: \"52469b807676a11b636599956e9fbac6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961504 kubelet[2970]: I0510 00:06:24.961238 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961504 kubelet[2970]: I0510 00:06:24.961305 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961504 kubelet[2970]: I0510 00:06:24.961340 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961504 kubelet[2970]: I0510 00:06:24.961370 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6089fabeb6788baf8816c701e44ab527-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-60bc3761e6\" (UID: \"6089fabeb6788baf8816c701e44ab527\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961504 
kubelet[2970]: I0510 00:06:24.961413 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e979dace742fdc7983e38c4bfcd10b4a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-60bc3761e6\" (UID: \"e979dace742fdc7983e38c4bfcd10b4a\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-60bc3761e6" May 10 00:06:24.961743 kubelet[2970]: I0510 00:06:24.961450 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52469b807676a11b636599956e9fbac6-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-60bc3761e6\" (UID: \"52469b807676a11b636599956e9fbac6\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-60bc3761e6" May 10 00:06:25.729307 kubelet[2970]: I0510 00:06:25.727449 2970 apiserver.go:52] "Watching apiserver" May 10 00:06:25.760469 kubelet[2970]: I0510 00:06:25.760427 2970 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:06:25.894232 kubelet[2970]: I0510 00:06:25.892755 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-60bc3761e6" podStartSLOduration=1.892736337 podStartE2EDuration="1.892736337s" podCreationTimestamp="2025-05-10 00:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:25.861391998 +0000 UTC m=+1.212505491" watchObservedRunningTime="2025-05-10 00:06:25.892736337 +0000 UTC m=+1.243849870" May 10 00:06:25.909655 kubelet[2970]: I0510 00:06:25.909593 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-60bc3761e6" podStartSLOduration=1.9095739470000002 podStartE2EDuration="1.909573947s" podCreationTimestamp="2025-05-10 00:06:24 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:25.894724372 +0000 UTC m=+1.245837905" watchObservedRunningTime="2025-05-10 00:06:25.909573947 +0000 UTC m=+1.260687480" May 10 00:06:29.876318 sudo[2044]: pam_unix(sudo:session): session closed for user root May 10 00:06:30.042445 sshd[2038]: pam_unix(sshd:session): session closed for user core May 10 00:06:30.050895 systemd[1]: sshd@6-88.99.34.22:22-147.75.109.163:40494.service: Deactivated successfully. May 10 00:06:30.054677 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:06:30.055589 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. May 10 00:06:30.056679 systemd-logind[1559]: Removed session 7. May 10 00:06:32.268846 kubelet[2970]: I0510 00:06:32.268774 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-60bc3761e6" podStartSLOduration=8.268758087 podStartE2EDuration="8.268758087s" podCreationTimestamp="2025-05-10 00:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:25.912371595 +0000 UTC m=+1.263485128" watchObservedRunningTime="2025-05-10 00:06:32.268758087 +0000 UTC m=+7.619871620" May 10 00:06:38.242604 kubelet[2970]: I0510 00:06:38.242565 2970 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:06:38.243100 containerd[1591]: time="2025-05-10T00:06:38.243025321Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 10 00:06:38.245492 kubelet[2970]: I0510 00:06:38.243563 2970 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:06:38.657913 kubelet[2970]: I0510 00:06:38.654909 2970 topology_manager.go:215] "Topology Admit Handler" podUID="fadfc4d3-6c77-4d78-97be-1343540c1c6b" podNamespace="kube-system" podName="kube-proxy-pjxh8" May 10 00:06:38.659140 kubelet[2970]: W0510 00:06:38.658805 2970 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-3-n-60bc3761e6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-60bc3761e6' and this object May 10 00:06:38.659384 kubelet[2970]: E0510 00:06:38.659356 2970 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-3-n-60bc3761e6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-60bc3761e6' and this object May 10 00:06:38.760557 kubelet[2970]: I0510 00:06:38.760500 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxlkb\" (UniqueName: \"kubernetes.io/projected/fadfc4d3-6c77-4d78-97be-1343540c1c6b-kube-api-access-hxlkb\") pod \"kube-proxy-pjxh8\" (UID: \"fadfc4d3-6c77-4d78-97be-1343540c1c6b\") " pod="kube-system/kube-proxy-pjxh8" May 10 00:06:38.760860 kubelet[2970]: I0510 00:06:38.760835 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fadfc4d3-6c77-4d78-97be-1343540c1c6b-xtables-lock\") pod \"kube-proxy-pjxh8\" (UID: \"fadfc4d3-6c77-4d78-97be-1343540c1c6b\") " pod="kube-system/kube-proxy-pjxh8" May 10 00:06:38.761188 kubelet[2970]: 
I0510 00:06:38.761054 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fadfc4d3-6c77-4d78-97be-1343540c1c6b-lib-modules\") pod \"kube-proxy-pjxh8\" (UID: \"fadfc4d3-6c77-4d78-97be-1343540c1c6b\") " pod="kube-system/kube-proxy-pjxh8" May 10 00:06:38.761188 kubelet[2970]: I0510 00:06:38.761129 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fadfc4d3-6c77-4d78-97be-1343540c1c6b-kube-proxy\") pod \"kube-proxy-pjxh8\" (UID: \"fadfc4d3-6c77-4d78-97be-1343540c1c6b\") " pod="kube-system/kube-proxy-pjxh8" May 10 00:06:38.838949 kubelet[2970]: I0510 00:06:38.838892 2970 topology_manager.go:215] "Topology Admit Handler" podUID="caed5c12-766c-4667-9634-21b7e7a71252" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-2b2n5" May 10 00:06:38.862068 kubelet[2970]: I0510 00:06:38.861361 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/caed5c12-766c-4667-9634-21b7e7a71252-var-lib-calico\") pod \"tigera-operator-797db67f8-2b2n5\" (UID: \"caed5c12-766c-4667-9634-21b7e7a71252\") " pod="tigera-operator/tigera-operator-797db67f8-2b2n5" May 10 00:06:38.862068 kubelet[2970]: I0510 00:06:38.861431 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2nk\" (UniqueName: \"kubernetes.io/projected/caed5c12-766c-4667-9634-21b7e7a71252-kube-api-access-kd2nk\") pod \"tigera-operator-797db67f8-2b2n5\" (UID: \"caed5c12-766c-4667-9634-21b7e7a71252\") " pod="tigera-operator/tigera-operator-797db67f8-2b2n5" May 10 00:06:39.146760 containerd[1591]: time="2025-05-10T00:06:39.146388095Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-797db67f8-2b2n5,Uid:caed5c12-766c-4667-9634-21b7e7a71252,Namespace:tigera-operator,Attempt:0,}" May 10 00:06:39.169972 containerd[1591]: time="2025-05-10T00:06:39.169863923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:39.169972 containerd[1591]: time="2025-05-10T00:06:39.169929485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:39.170927 containerd[1591]: time="2025-05-10T00:06:39.169947046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:39.171593 containerd[1591]: time="2025-05-10T00:06:39.171504738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:39.219451 containerd[1591]: time="2025-05-10T00:06:39.219334184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2b2n5,Uid:caed5c12-766c-4667-9634-21b7e7a71252,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"588d3493a6f8076dc8363e4653f8a646c35075d9fc5b3884147392602f156cdd\"" May 10 00:06:39.224578 containerd[1591]: time="2025-05-10T00:06:39.224377034Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 10 00:06:39.862950 kubelet[2970]: E0510 00:06:39.862900 2970 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 10 00:06:39.863511 kubelet[2970]: E0510 00:06:39.862995 2970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fadfc4d3-6c77-4d78-97be-1343540c1c6b-kube-proxy podName:fadfc4d3-6c77-4d78-97be-1343540c1c6b nodeName:}" failed. 
No retries permitted until 2025-05-10 00:06:40.362974114 +0000 UTC m=+15.714087647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fadfc4d3-6c77-4d78-97be-1343540c1c6b-kube-proxy") pod "kube-proxy-pjxh8" (UID: "fadfc4d3-6c77-4d78-97be-1343540c1c6b") : failed to sync configmap cache: timed out waiting for the condition May 10 00:06:39.876732 systemd[1]: run-containerd-runc-k8s.io-588d3493a6f8076dc8363e4653f8a646c35075d9fc5b3884147392602f156cdd-runc.LrM5db.mount: Deactivated successfully. May 10 00:06:40.465316 containerd[1591]: time="2025-05-10T00:06:40.465005471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjxh8,Uid:fadfc4d3-6c77-4d78-97be-1343540c1c6b,Namespace:kube-system,Attempt:0,}" May 10 00:06:40.498227 containerd[1591]: time="2025-05-10T00:06:40.497741040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:40.498227 containerd[1591]: time="2025-05-10T00:06:40.497842763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:40.498227 containerd[1591]: time="2025-05-10T00:06:40.497870204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:40.498227 containerd[1591]: time="2025-05-10T00:06:40.497998849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:40.547098 containerd[1591]: time="2025-05-10T00:06:40.546281914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjxh8,Uid:fadfc4d3-6c77-4d78-97be-1343540c1c6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"22c67f1152897e7fbfc508d4c1407fd178347bfe09cdda16476ecf42f5bbd459\"" May 10 00:06:40.554526 containerd[1591]: time="2025-05-10T00:06:40.554213108Z" level=info msg="CreateContainer within sandbox \"22c67f1152897e7fbfc508d4c1407fd178347bfe09cdda16476ecf42f5bbd459\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:06:40.570564 containerd[1591]: time="2025-05-10T00:06:40.570517590Z" level=info msg="CreateContainer within sandbox \"22c67f1152897e7fbfc508d4c1407fd178347bfe09cdda16476ecf42f5bbd459\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"66ed9082dfe7f672c7533b8accea49f7dc64903bac7514ff9775435d2dc74b67\"" May 10 00:06:40.571388 containerd[1591]: time="2025-05-10T00:06:40.571355859Z" level=info msg="StartContainer for \"66ed9082dfe7f672c7533b8accea49f7dc64903bac7514ff9775435d2dc74b67\"" May 10 00:06:40.629445 containerd[1591]: time="2025-05-10T00:06:40.629380340Z" level=info msg="StartContainer for \"66ed9082dfe7f672c7533b8accea49f7dc64903bac7514ff9775435d2dc74b67\" returns successfully" May 10 00:06:40.848926 kubelet[2970]: I0510 00:06:40.847774 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pjxh8" podStartSLOduration=2.8477249110000002 podStartE2EDuration="2.847724911s" podCreationTimestamp="2025-05-10 00:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:40.847532184 +0000 UTC m=+16.198645717" watchObservedRunningTime="2025-05-10 00:06:40.847724911 +0000 UTC m=+16.198838404" May 10 00:06:41.031799 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1426229395.mount: Deactivated successfully. May 10 00:06:41.422389 containerd[1591]: time="2025-05-10T00:06:41.422327021Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:41.423868 containerd[1591]: time="2025-05-10T00:06:41.423739031Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 10 00:06:41.424618 containerd[1591]: time="2025-05-10T00:06:41.424580101Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:41.427895 containerd[1591]: time="2025-05-10T00:06:41.427528365Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:41.428578 containerd[1591]: time="2025-05-10T00:06:41.428535201Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.204101285s" May 10 00:06:41.428578 containerd[1591]: time="2025-05-10T00:06:41.428574362Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 10 00:06:41.432741 containerd[1591]: time="2025-05-10T00:06:41.432706388Z" level=info msg="CreateContainer within sandbox \"588d3493a6f8076dc8363e4653f8a646c35075d9fc5b3884147392602f156cdd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 10 00:06:41.455022 containerd[1591]: 
time="2025-05-10T00:06:41.454971576Z" level=info msg="CreateContainer within sandbox \"588d3493a6f8076dc8363e4653f8a646c35075d9fc5b3884147392602f156cdd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f\""
May 10 00:06:41.457001 containerd[1591]: time="2025-05-10T00:06:41.456850322Z" level=info msg="StartContainer for \"945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f\""
May 10 00:06:41.511142 containerd[1591]: time="2025-05-10T00:06:41.511095721Z" level=info msg="StartContainer for \"945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f\" returns successfully"
May 10 00:06:41.853701 kubelet[2970]: I0510 00:06:41.853477 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-2b2n5" podStartSLOduration=1.6436233489999998 podStartE2EDuration="3.852229909s" podCreationTimestamp="2025-05-10 00:06:38 +0000 UTC" firstStartedPulling="2025-05-10 00:06:39.221632221 +0000 UTC m=+14.572745754" lastFinishedPulling="2025-05-10 00:06:41.430238821 +0000 UTC m=+16.781352314" observedRunningTime="2025-05-10 00:06:41.851956659 +0000 UTC m=+17.203070272" watchObservedRunningTime="2025-05-10 00:06:41.852229909 +0000 UTC m=+17.203343482"
May 10 00:06:45.646084 kubelet[2970]: I0510 00:06:45.646031 2970 topology_manager.go:215] "Topology Admit Handler" podUID="b98b64fd-b29f-4751-937a-aac0d91bd1ea" podNamespace="calico-system" podName="calico-typha-5d69f494c-pkx7x"
May 10 00:06:45.703310 kubelet[2970]: I0510 00:06:45.703072 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b98b64fd-b29f-4751-937a-aac0d91bd1ea-tigera-ca-bundle\") pod \"calico-typha-5d69f494c-pkx7x\" (UID: \"b98b64fd-b29f-4751-937a-aac0d91bd1ea\") " pod="calico-system/calico-typha-5d69f494c-pkx7x"
May 10 00:06:45.703310 kubelet[2970]: I0510 00:06:45.703131 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b98b64fd-b29f-4751-937a-aac0d91bd1ea-typha-certs\") pod \"calico-typha-5d69f494c-pkx7x\" (UID: \"b98b64fd-b29f-4751-937a-aac0d91bd1ea\") " pod="calico-system/calico-typha-5d69f494c-pkx7x"
May 10 00:06:45.703310 kubelet[2970]: I0510 00:06:45.703151 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26k7\" (UniqueName: \"kubernetes.io/projected/b98b64fd-b29f-4751-937a-aac0d91bd1ea-kube-api-access-z26k7\") pod \"calico-typha-5d69f494c-pkx7x\" (UID: \"b98b64fd-b29f-4751-937a-aac0d91bd1ea\") " pod="calico-system/calico-typha-5d69f494c-pkx7x"
May 10 00:06:45.848190 kubelet[2970]: I0510 00:06:45.848137 2970 topology_manager.go:215] "Topology Admit Handler" podUID="de29be5b-cfcb-464c-b84a-f7a5c33bd1f7" podNamespace="calico-system" podName="calico-node-v2mgl"
May 10 00:06:45.905404 kubelet[2970]: I0510 00:06:45.905055 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-xtables-lock\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905404 kubelet[2970]: I0510 00:06:45.905101 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-node-certs\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905404 kubelet[2970]: I0510 00:06:45.905119 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-cni-log-dir\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905404 kubelet[2970]: I0510 00:06:45.905142 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-var-lib-calico\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905404 kubelet[2970]: I0510 00:06:45.905243 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-tigera-ca-bundle\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905619 kubelet[2970]: I0510 00:06:45.905287 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-flexvol-driver-host\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905619 kubelet[2970]: I0510 00:06:45.905317 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-var-run-calico\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905619 kubelet[2970]: I0510 00:06:45.905336 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-cni-bin-dir\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905619 kubelet[2970]: I0510 00:06:45.905355 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-cni-net-dir\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905619 kubelet[2970]: I0510 00:06:45.905397 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-lib-modules\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905769 kubelet[2970]: I0510 00:06:45.905416 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-policysync\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.905769 kubelet[2970]: I0510 00:06:45.905438 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7sx2\" (UniqueName: \"kubernetes.io/projected/de29be5b-cfcb-464c-b84a-f7a5c33bd1f7-kube-api-access-x7sx2\") pod \"calico-node-v2mgl\" (UID: \"de29be5b-cfcb-464c-b84a-f7a5c33bd1f7\") " pod="calico-system/calico-node-v2mgl"
May 10 00:06:45.953463 containerd[1591]: time="2025-05-10T00:06:45.953183823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d69f494c-pkx7x,Uid:b98b64fd-b29f-4751-937a-aac0d91bd1ea,Namespace:calico-system,Attempt:0,}"
May 10 00:06:45.984182 kubelet[2970]: I0510 00:06:45.982783 2970 topology_manager.go:215] "Topology Admit Handler" podUID="0766211e-6e96-4ed6-b977-d34cdc94d220" podNamespace="calico-system" podName="csi-node-driver-4hjhz"
May 10 00:06:45.987547 kubelet[2970]: E0510 00:06:45.987511 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjhz" podUID="0766211e-6e96-4ed6-b977-d34cdc94d220"
May 10 00:06:45.998452 containerd[1591]: time="2025-05-10T00:06:45.997796027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:06:45.998452 containerd[1591]: time="2025-05-10T00:06:45.997851269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:06:45.998452 containerd[1591]: time="2025-05-10T00:06:45.997869430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:06:46.000464 containerd[1591]: time="2025-05-10T00:06:46.000048394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:06:46.005644 kubelet[2970]: I0510 00:06:46.005600 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0766211e-6e96-4ed6-b977-d34cdc94d220-varrun\") pod \"csi-node-driver-4hjhz\" (UID: \"0766211e-6e96-4ed6-b977-d34cdc94d220\") " pod="calico-system/csi-node-driver-4hjhz"
May 10 00:06:46.005644 kubelet[2970]: I0510 00:06:46.005645 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0766211e-6e96-4ed6-b977-d34cdc94d220-kubelet-dir\") pod \"csi-node-driver-4hjhz\" (UID: \"0766211e-6e96-4ed6-b977-d34cdc94d220\") " pod="calico-system/csi-node-driver-4hjhz"
May 10 00:06:46.005776 kubelet[2970]: I0510 00:06:46.005739 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0766211e-6e96-4ed6-b977-d34cdc94d220-registration-dir\") pod \"csi-node-driver-4hjhz\" (UID: \"0766211e-6e96-4ed6-b977-d34cdc94d220\") " pod="calico-system/csi-node-driver-4hjhz"
May 10 00:06:46.005776 kubelet[2970]: I0510 00:06:46.005766 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmj9w\" (UniqueName: \"kubernetes.io/projected/0766211e-6e96-4ed6-b977-d34cdc94d220-kube-api-access-jmj9w\") pod \"csi-node-driver-4hjhz\" (UID: \"0766211e-6e96-4ed6-b977-d34cdc94d220\") " pod="calico-system/csi-node-driver-4hjhz"
May 10 00:06:46.005825 kubelet[2970]: I0510 00:06:46.005793 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0766211e-6e96-4ed6-b977-d34cdc94d220-socket-dir\") pod \"csi-node-driver-4hjhz\" (UID: \"0766211e-6e96-4ed6-b977-d34cdc94d220\") " pod="calico-system/csi-node-driver-4hjhz"
May 10 00:06:46.017222 kubelet[2970]: E0510 00:06:46.013442 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.017222 kubelet[2970]: W0510 00:06:46.013569 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.017222 kubelet[2970]: E0510 00:06:46.013589 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.019373 kubelet[2970]: E0510 00:06:46.019332 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.019373 kubelet[2970]: W0510 00:06:46.019364 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.019579 kubelet[2970]: E0510 00:06:46.019386 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.021147 kubelet[2970]: E0510 00:06:46.019765 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.021147 kubelet[2970]: W0510 00:06:46.019779 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.021147 kubelet[2970]: E0510 00:06:46.019790 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.021721 kubelet[2970]: E0510 00:06:46.021529 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.021721 kubelet[2970]: W0510 00:06:46.021640 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.021721 kubelet[2970]: E0510 00:06:46.021655 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.045071 kubelet[2970]: E0510 00:06:46.045029 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.045071 kubelet[2970]: W0510 00:06:46.045061 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.045311 kubelet[2970]: E0510 00:06:46.045185 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.047906 kubelet[2970]: E0510 00:06:46.047867 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.047906 kubelet[2970]: W0510 00:06:46.047892 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.048044 kubelet[2970]: E0510 00:06:46.047913 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.107905 kubelet[2970]: E0510 00:06:46.107642 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.107905 kubelet[2970]: W0510 00:06:46.107691 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.107905 kubelet[2970]: E0510 00:06:46.107712 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.107905 kubelet[2970]: E0510 00:06:46.107908 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.107905 kubelet[2970]: W0510 00:06:46.107920 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.107905 kubelet[2970]: E0510 00:06:46.107929 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.110599 kubelet[2970]: E0510 00:06:46.108495 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.110599 kubelet[2970]: W0510 00:06:46.108528 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.110599 kubelet[2970]: E0510 00:06:46.108546 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.110599 kubelet[2970]: E0510 00:06:46.110423 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.110599 kubelet[2970]: W0510 00:06:46.110436 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.110599 kubelet[2970]: E0510 00:06:46.110472 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.111559 kubelet[2970]: E0510 00:06:46.110693 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.111559 kubelet[2970]: W0510 00:06:46.110702 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.111559 kubelet[2970]: E0510 00:06:46.110712 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.111559 kubelet[2970]: E0510 00:06:46.110934 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.111559 kubelet[2970]: W0510 00:06:46.110941 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.111559 kubelet[2970]: E0510 00:06:46.110955 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.111559 kubelet[2970]: E0510 00:06:46.111118 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.111559 kubelet[2970]: W0510 00:06:46.111126 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.111559 kubelet[2970]: E0510 00:06:46.111158 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.111559 kubelet[2970]: E0510 00:06:46.111477 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.111767 kubelet[2970]: W0510 00:06:46.111487 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.111767 kubelet[2970]: E0510 00:06:46.111499 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.111767 kubelet[2970]: E0510 00:06:46.111690 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.111767 kubelet[2970]: W0510 00:06:46.111706 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.111767 kubelet[2970]: E0510 00:06:46.111719 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.112533 kubelet[2970]: E0510 00:06:46.111869 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.112533 kubelet[2970]: W0510 00:06:46.111882 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.112533 kubelet[2970]: E0510 00:06:46.111898 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.112533 kubelet[2970]: E0510 00:06:46.112026 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.112533 kubelet[2970]: W0510 00:06:46.112034 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.112533 kubelet[2970]: E0510 00:06:46.112042 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.112835 kubelet[2970]: E0510 00:06:46.112817 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.113080 kubelet[2970]: W0510 00:06:46.113058 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.113196 kubelet[2970]: E0510 00:06:46.113088 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.114438 kubelet[2970]: E0510 00:06:46.114415 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.114438 kubelet[2970]: W0510 00:06:46.114437 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.114547 kubelet[2970]: E0510 00:06:46.114457 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.115064 kubelet[2970]: E0510 00:06:46.115044 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.115064 kubelet[2970]: W0510 00:06:46.115063 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.116065 kubelet[2970]: E0510 00:06:46.115815 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.116737 kubelet[2970]: E0510 00:06:46.116274 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.116737 kubelet[2970]: W0510 00:06:46.116291 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.116737 kubelet[2970]: E0510 00:06:46.116640 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.118062 kubelet[2970]: E0510 00:06:46.116750 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.118062 kubelet[2970]: W0510 00:06:46.116759 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.118062 kubelet[2970]: E0510 00:06:46.116813 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.118062 kubelet[2970]: E0510 00:06:46.117006 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.118062 kubelet[2970]: W0510 00:06:46.117015 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.118062 kubelet[2970]: E0510 00:06:46.117047 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.118062 kubelet[2970]: E0510 00:06:46.117444 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.118062 kubelet[2970]: W0510 00:06:46.117457 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.118062 kubelet[2970]: E0510 00:06:46.117497 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.118062 kubelet[2970]: E0510 00:06:46.117772 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.118668 kubelet[2970]: W0510 00:06:46.117782 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.118668 kubelet[2970]: E0510 00:06:46.117795 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.118668 kubelet[2970]: E0510 00:06:46.118050 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.118668 kubelet[2970]: W0510 00:06:46.118062 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.118668 kubelet[2970]: E0510 00:06:46.118081 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.118668 kubelet[2970]: E0510 00:06:46.118344 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.118668 kubelet[2970]: W0510 00:06:46.118354 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.118668 kubelet[2970]: E0510 00:06:46.118438 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.118668 kubelet[2970]: E0510 00:06:46.118597 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.118668 kubelet[2970]: W0510 00:06:46.118606 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.118880 kubelet[2970]: E0510 00:06:46.118635 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.118905 kubelet[2970]: E0510 00:06:46.118888 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.118905 kubelet[2970]: W0510 00:06:46.118898 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.118951 kubelet[2970]: E0510 00:06:46.118914 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.119781 kubelet[2970]: E0510 00:06:46.119145 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.119781 kubelet[2970]: W0510 00:06:46.119166 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.119781 kubelet[2970]: E0510 00:06:46.119185 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.119781 kubelet[2970]: E0510 00:06:46.119482 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.119781 kubelet[2970]: W0510 00:06:46.119492 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.119781 kubelet[2970]: E0510 00:06:46.119513 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.120180 containerd[1591]: time="2025-05-10T00:06:46.120085243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d69f494c-pkx7x,Uid:b98b64fd-b29f-4751-937a-aac0d91bd1ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed197422ec38fb86e35d6cad85ed489a90c765af95ac1abcd74d0289f1616bde\""
May 10 00:06:46.129563 containerd[1591]: time="2025-05-10T00:06:46.128588458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 10 00:06:46.142428 kubelet[2970]: E0510 00:06:46.142149 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:46.142428 kubelet[2970]: W0510 00:06:46.142172 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:06:46.142428 kubelet[2970]: E0510 00:06:46.142190 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:06:46.158382 containerd[1591]: time="2025-05-10T00:06:46.158271108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v2mgl,Uid:de29be5b-cfcb-464c-b84a-f7a5c33bd1f7,Namespace:calico-system,Attempt:0,}"
May 10 00:06:46.203534 containerd[1591]: time="2025-05-10T00:06:46.203366125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:06:46.203534 containerd[1591]: time="2025-05-10T00:06:46.203438848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:06:46.203534 containerd[1591]: time="2025-05-10T00:06:46.203457089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:06:46.203722 containerd[1591]: time="2025-05-10T00:06:46.203664617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:06:46.268011 containerd[1591]: time="2025-05-10T00:06:46.267974671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v2mgl,Uid:de29be5b-cfcb-464c-b84a-f7a5c33bd1f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"81d25a0509d818a0fe1c96421ed49d42f238900995a37d131c8a6ca877ff5089\""
May 10 00:06:47.773305 kubelet[2970]: E0510 00:06:47.773218 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjhz" podUID="0766211e-6e96-4ed6-b977-d34cdc94d220"
May 10 00:06:47.822005 containerd[1591]: time="2025-05-10T00:06:47.821921828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:47.823586 containerd[1591]: time="2025-05-10T00:06:47.822747981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571"
May 10 00:06:47.824233 containerd[1591]: time="2025-05-10T00:06:47.824198159Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:47.827295 containerd[1591]: time="2025-05-10T00:06:47.827226201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:47.828216 containerd[1591]: time="2025-05-10T00:06:47.828181639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.699535259s"
May 10 00:06:47.828297 containerd[1591]: time="2025-05-10T00:06:47.828218800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\""
May 10 00:06:47.830635 containerd[1591]: time="2025-05-10T00:06:47.830584895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 10 00:06:47.849209 containerd[1591]: time="2025-05-10T00:06:47.848983994Z" level=info msg="CreateContainer within sandbox \"ed197422ec38fb86e35d6cad85ed489a90c765af95ac1abcd74d0289f1616bde\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 10 00:06:47.865349 containerd[1591]: time="2025-05-10T00:06:47.865214365Z" level=info msg="CreateContainer within sandbox \"ed197422ec38fb86e35d6cad85ed489a90c765af95ac1abcd74d0289f1616bde\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3aa4cd907134a1de66ee9dd98ef7d349d9124ba057c69d0c9525a8260cdfa66d\""
May 10 00:06:47.870198 containerd[1591]: time="2025-05-10T00:06:47.868560340Z" level=info msg="StartContainer for \"3aa4cd907134a1de66ee9dd98ef7d349d9124ba057c69d0c9525a8260cdfa66d\""
May 10 00:06:47.935230 containerd[1591]: time="2025-05-10T00:06:47.935171493Z" level=info msg="StartContainer for \"3aa4cd907134a1de66ee9dd98ef7d349d9124ba057c69d0c9525a8260cdfa66d\" returns successfully"
May 10 00:06:48.919636 kubelet[2970]: E0510 00:06:48.919576 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:06:48.919636 kubelet[2970]: W0510 00:06:48.919597 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable
file not found in $PATH, output: "" May 10 00:06:48.920680 kubelet[2970]: E0510 00:06:48.919615 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:48.920680 kubelet[2970]: E0510 00:06:48.920570 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:48.920680 kubelet[2970]: W0510 00:06:48.920583 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:48.920680 kubelet[2970]: E0510 00:06:48.920614 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:06:48.920999 kubelet[2970]: E0510 00:06:48.920798 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:06:48.920999 kubelet[2970]: W0510 00:06:48.920814 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:06:48.920999 kubelet[2970]: E0510 00:06:48.920824 2970 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:06:49.117283 containerd[1591]: time="2025-05-10T00:06:49.117203290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:49.118759 containerd[1591]: time="2025-05-10T00:06:49.118691152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 10 00:06:49.119526 containerd[1591]: time="2025-05-10T00:06:49.119198893Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:49.121854 containerd[1591]: time="2025-05-10T00:06:49.121804042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:49.122785 containerd[1591]: time="2025-05-10T00:06:49.122743721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.291981858s" May 10 00:06:49.122785 containerd[1591]: time="2025-05-10T00:06:49.122782362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 10 00:06:49.130624 containerd[1591]: time="2025-05-10T00:06:49.130580206Z" level=info msg="CreateContainer within sandbox \"81d25a0509d818a0fe1c96421ed49d42f238900995a37d131c8a6ca877ff5089\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 00:06:49.145558 containerd[1591]: time="2025-05-10T00:06:49.145508066Z" level=info msg="CreateContainer within sandbox \"81d25a0509d818a0fe1c96421ed49d42f238900995a37d131c8a6ca877ff5089\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5393af60b9ad5544c83b78c74bffb3a72f697ca738d92ffc06e6a70738b373f2\"" May 10 00:06:49.146350 containerd[1591]: time="2025-05-10T00:06:49.146284258Z" level=info msg="StartContainer for \"5393af60b9ad5544c83b78c74bffb3a72f697ca738d92ffc06e6a70738b373f2\"" May 10 00:06:49.216685 containerd[1591]: time="2025-05-10T00:06:49.214949790Z" level=info msg="StartContainer for \"5393af60b9ad5544c83b78c74bffb3a72f697ca738d92ffc06e6a70738b373f2\" returns successfully" May 10 00:06:49.368569 containerd[1591]: time="2025-05-10T00:06:49.368450806Z" level=info msg="shim disconnected" id=5393af60b9ad5544c83b78c74bffb3a72f697ca738d92ffc06e6a70738b373f2 namespace=k8s.io May 10 00:06:49.368569 containerd[1591]: time="2025-05-10T00:06:49.368555971Z" level=warning msg="cleaning up after shim disconnected" id=5393af60b9ad5544c83b78c74bffb3a72f697ca738d92ffc06e6a70738b373f2 namespace=k8s.io May 10 00:06:49.368569 containerd[1591]: time="2025-05-10T00:06:49.368570331Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:06:49.773429 kubelet[2970]: E0510 00:06:49.773208 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjhz" podUID="0766211e-6e96-4ed6-b977-d34cdc94d220" May 10 00:06:49.838541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5393af60b9ad5544c83b78c74bffb3a72f697ca738d92ffc06e6a70738b373f2-rootfs.mount: Deactivated successfully. 
May 10 00:06:49.872471 kubelet[2970]: I0510 00:06:49.872437 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:06:49.875738 containerd[1591]: time="2025-05-10T00:06:49.875048008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 00:06:49.910156 kubelet[2970]: I0510 00:06:49.910086 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d69f494c-pkx7x" podStartSLOduration=3.2079218210000002 podStartE2EDuration="4.910067303s" podCreationTimestamp="2025-05-10 00:06:45 +0000 UTC" firstStartedPulling="2025-05-10 00:06:46.127486935 +0000 UTC m=+21.478600468" lastFinishedPulling="2025-05-10 00:06:47.829632417 +0000 UTC m=+23.180745950" observedRunningTime="2025-05-10 00:06:48.887619193 +0000 UTC m=+24.238732726" watchObservedRunningTime="2025-05-10 00:06:49.910067303 +0000 UTC m=+25.261180836" May 10 00:06:51.772733 kubelet[2970]: E0510 00:06:51.772504 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjhz" podUID="0766211e-6e96-4ed6-b977-d34cdc94d220" May 10 00:06:51.909077 update_engine[1563]: I20250510 00:06:51.908509 1563 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 10 00:06:51.909077 update_engine[1563]: I20250510 00:06:51.908556 1563 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 10 00:06:51.909077 update_engine[1563]: I20250510 00:06:51.908774 1563 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 10 00:06:51.910283 update_engine[1563]: I20250510 00:06:51.909686 1563 omaha_request_params.cc:62] Current group set to lts May 10 00:06:51.910283 update_engine[1563]: I20250510 00:06:51.909780 1563 update_attempter.cc:499] 
Already updated boot flags. Skipping. May 10 00:06:51.910283 update_engine[1563]: I20250510 00:06:51.909788 1563 update_attempter.cc:643] Scheduling an action processor start. May 10 00:06:51.910283 update_engine[1563]: I20250510 00:06:51.909803 1563 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 10 00:06:51.910283 update_engine[1563]: I20250510 00:06:51.909832 1563 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 10 00:06:51.910283 update_engine[1563]: I20250510 00:06:51.909883 1563 omaha_request_action.cc:271] Posting an Omaha request to disabled May 10 00:06:51.910283 update_engine[1563]: I20250510 00:06:51.909891 1563 omaha_request_action.cc:272] Request: May 10 00:06:51.910283 update_engine[1563]: May 10 00:06:51.910283 update_engine[1563]: May 10 00:06:51.910283 update_engine[1563]: May 10 00:06:51.910283 update_engine[1563]: May 10 00:06:51.910283 update_engine[1563]: May 10 00:06:51.910283 update_engine[1563]: May 10 00:06:51.910283 update_engine[1563]: May 10 00:06:51.910283 update_engine[1563]: May 10 00:06:51.910283 update_engine[1563]: I20250510 00:06:51.909896 1563 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:06:51.911816 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 10 00:06:51.912138 update_engine[1563]: I20250510 00:06:51.912101 1563 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:06:51.912587 update_engine[1563]: I20250510 00:06:51.912559 1563 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
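The update_engine records above are benign: the Omaha request is being posted to the literal string `disabled`, which curl then fails to resolve as a hostname (the retry in the next record). On Flatcar this is the documented way to switch off automatic updates; a sketch of the corresponding configuration, with `GROUP` matching the "Current group set to lts" line above:

```ini
# /etc/flatcar/update.conf (illustrative)
GROUP=lts
SERVER=disabled
```

With `SERVER=disabled`, update_engine still schedules its action processor but every update check fails at name resolution, exactly as logged.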
May 10 00:06:51.913676 update_engine[1563]: E20250510 00:06:51.913640 1563 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 10 00:06:51.913742 update_engine[1563]: I20250510 00:06:51.913709 1563 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 10 00:06:52.456109 containerd[1591]: time="2025-05-10T00:06:52.455290877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:52.457572 containerd[1591]: time="2025-05-10T00:06:52.457535575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270"
May 10 00:06:52.458834 containerd[1591]: time="2025-05-10T00:06:52.458796629Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:52.462661 containerd[1591]: time="2025-05-10T00:06:52.462617596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:06:52.463388 containerd[1591]: time="2025-05-10T00:06:52.463208581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.587456384s"
May 10 00:06:52.463388 containerd[1591]: time="2025-05-10T00:06:52.463309066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\""
May 10 00:06:52.468268 containerd[1591]: time="2025-05-10T00:06:52.467857783Z" level=info msg="CreateContainer within sandbox \"81d25a0509d818a0fe1c96421ed49d42f238900995a37d131c8a6ca877ff5089\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 10 00:06:52.492809 containerd[1591]: time="2025-05-10T00:06:52.492747905Z" level=info msg="CreateContainer within sandbox \"81d25a0509d818a0fe1c96421ed49d42f238900995a37d131c8a6ca877ff5089\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dc9693878c7efd450e55b4cef7865e5762717a8b3b7a67c8f9500436ae6d8255\""
May 10 00:06:52.495897 containerd[1591]: time="2025-05-10T00:06:52.495849560Z" level=info msg="StartContainer for \"dc9693878c7efd450e55b4cef7865e5762717a8b3b7a67c8f9500436ae6d8255\""
May 10 00:06:52.555113 containerd[1591]: time="2025-05-10T00:06:52.555052334Z" level=info msg="StartContainer for \"dc9693878c7efd450e55b4cef7865e5762717a8b3b7a67c8f9500436ae6d8255\" returns successfully"
May 10 00:06:53.075504 containerd[1591]: time="2025-05-10T00:06:53.075434881Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:06:53.099210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc9693878c7efd450e55b4cef7865e5762717a8b3b7a67c8f9500436ae6d8255-rootfs.mount: Deactivated successfully.
May 10 00:06:53.103470 kubelet[2970]: I0510 00:06:53.102536 2970 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 10 00:06:53.148995 kubelet[2970]: I0510 00:06:53.147371 2970 topology_manager.go:215] "Topology Admit Handler" podUID="ca8a5621-5e04-45a0-a9d3-4c8113513a58" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zhcs4"
May 10 00:06:53.150816 kubelet[2970]: I0510 00:06:53.149571 2970 topology_manager.go:215] "Topology Admit Handler" podUID="871fa5f6-cf5f-424d-92ff-7537c13487e5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zk9wm"
May 10 00:06:53.158593 kubelet[2970]: I0510 00:06:53.154081 2970 topology_manager.go:215] "Topology Admit Handler" podUID="934e1bfa-39db-47d3-8258-500edca573e3" podNamespace="calico-system" podName="calico-kube-controllers-55c46f8bc8-l65kq"
May 10 00:06:53.163138 kubelet[2970]: I0510 00:06:53.163031 2970 topology_manager.go:215] "Topology Admit Handler" podUID="ca652ab2-74d7-4a4c-a866-d714bca54c18" podNamespace="calico-apiserver" podName="calico-apiserver-849f9bb4b4-rkt4m"
May 10 00:06:53.165819 kubelet[2970]: I0510 00:06:53.164186 2970 topology_manager.go:215] "Topology Admit Handler" podUID="c8370026-6156-4314-9b6f-165657c0861d" podNamespace="calico-apiserver" podName="calico-apiserver-849f9bb4b4-mnkn2"
May 10 00:06:53.167391 kubelet[2970]: I0510 00:06:53.166092 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871fa5f6-cf5f-424d-92ff-7537c13487e5-config-volume\") pod \"coredns-7db6d8ff4d-zk9wm\" (UID: \"871fa5f6-cf5f-424d-92ff-7537c13487e5\") " pod="kube-system/coredns-7db6d8ff4d-zk9wm"
May 10 00:06:53.167391 kubelet[2970]: I0510 00:06:53.166219 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca8a5621-5e04-45a0-a9d3-4c8113513a58-config-volume\") pod \"coredns-7db6d8ff4d-zhcs4\" (UID: \"ca8a5621-5e04-45a0-a9d3-4c8113513a58\") " pod="kube-system/coredns-7db6d8ff4d-zhcs4"
May 10 00:06:53.167391 kubelet[2970]: I0510 00:06:53.166388 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77zxt\" (UniqueName: \"kubernetes.io/projected/934e1bfa-39db-47d3-8258-500edca573e3-kube-api-access-77zxt\") pod \"calico-kube-controllers-55c46f8bc8-l65kq\" (UID: \"934e1bfa-39db-47d3-8258-500edca573e3\") " pod="calico-system/calico-kube-controllers-55c46f8bc8-l65kq"
May 10 00:06:53.167391 kubelet[2970]: I0510 00:06:53.166415 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frz24\" (UniqueName: \"kubernetes.io/projected/ca8a5621-5e04-45a0-a9d3-4c8113513a58-kube-api-access-frz24\") pod \"coredns-7db6d8ff4d-zhcs4\" (UID: \"ca8a5621-5e04-45a0-a9d3-4c8113513a58\") " pod="kube-system/coredns-7db6d8ff4d-zhcs4"
May 10 00:06:53.167391 kubelet[2970]: I0510 00:06:53.166435 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcnmc\" (UniqueName: \"kubernetes.io/projected/871fa5f6-cf5f-424d-92ff-7537c13487e5-kube-api-access-fcnmc\") pod \"coredns-7db6d8ff4d-zk9wm\" (UID: \"871fa5f6-cf5f-424d-92ff-7537c13487e5\") " pod="kube-system/coredns-7db6d8ff4d-zk9wm"
May 10 00:06:53.169427 kubelet[2970]: I0510 00:06:53.169366 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/934e1bfa-39db-47d3-8258-500edca573e3-tigera-ca-bundle\") pod \"calico-kube-controllers-55c46f8bc8-l65kq\" (UID: \"934e1bfa-39db-47d3-8258-500edca573e3\") " pod="calico-system/calico-kube-controllers-55c46f8bc8-l65kq"
May 10 00:06:53.222655 containerd[1591]: time="2025-05-10T00:06:53.222409279Z" level=info msg="shim disconnected" id=dc9693878c7efd450e55b4cef7865e5762717a8b3b7a67c8f9500436ae6d8255 namespace=k8s.io
May 10 00:06:53.222655 containerd[1591]: time="2025-05-10T00:06:53.222571526Z" level=warning msg="cleaning up after shim disconnected" id=dc9693878c7efd450e55b4cef7865e5762717a8b3b7a67c8f9500436ae6d8255 namespace=k8s.io
May 10 00:06:53.222655 containerd[1591]: time="2025-05-10T00:06:53.222592407Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:06:53.272295 kubelet[2970]: I0510 00:06:53.270362 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8hf\" (UniqueName: \"kubernetes.io/projected/c8370026-6156-4314-9b6f-165657c0861d-kube-api-access-ht8hf\") pod \"calico-apiserver-849f9bb4b4-mnkn2\" (UID: \"c8370026-6156-4314-9b6f-165657c0861d\") " pod="calico-apiserver/calico-apiserver-849f9bb4b4-mnkn2"
May 10 00:06:53.272295 kubelet[2970]: I0510 00:06:53.270442 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca652ab2-74d7-4a4c-a866-d714bca54c18-calico-apiserver-certs\") pod \"calico-apiserver-849f9bb4b4-rkt4m\" (UID: \"ca652ab2-74d7-4a4c-a866-d714bca54c18\") " pod="calico-apiserver/calico-apiserver-849f9bb4b4-rkt4m"
May 10 00:06:53.272295 kubelet[2970]: I0510 00:06:53.270590 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c8370026-6156-4314-9b6f-165657c0861d-calico-apiserver-certs\") pod \"calico-apiserver-849f9bb4b4-mnkn2\" (UID: \"c8370026-6156-4314-9b6f-165657c0861d\") " pod="calico-apiserver/calico-apiserver-849f9bb4b4-mnkn2"
May 10 00:06:53.272295 kubelet[2970]: I0510 00:06:53.270683 2970 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mzs\" (UniqueName: \"kubernetes.io/projected/ca652ab2-74d7-4a4c-a866-d714bca54c18-kube-api-access-b5mzs\") pod \"calico-apiserver-849f9bb4b4-rkt4m\" (UID: \"ca652ab2-74d7-4a4c-a866-d714bca54c18\") " pod="calico-apiserver/calico-apiserver-849f9bb4b4-rkt4m"
May 10 00:06:53.458959 containerd[1591]: time="2025-05-10T00:06:53.458824460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zk9wm,Uid:871fa5f6-cf5f-424d-92ff-7537c13487e5,Namespace:kube-system,Attempt:0,}"
May 10 00:06:53.484498 containerd[1591]: time="2025-05-10T00:06:53.484442789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhcs4,Uid:ca8a5621-5e04-45a0-a9d3-4c8113513a58,Namespace:kube-system,Attempt:0,}"
May 10 00:06:53.490870 containerd[1591]: time="2025-05-10T00:06:53.490095758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f9bb4b4-mnkn2,Uid:c8370026-6156-4314-9b6f-165657c0861d,Namespace:calico-apiserver,Attempt:0,}"
May 10 00:06:53.492634 containerd[1591]: time="2025-05-10T00:06:53.491302091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c46f8bc8-l65kq,Uid:934e1bfa-39db-47d3-8258-500edca573e3,Namespace:calico-system,Attempt:0,}"
May 10 00:06:53.496615 containerd[1591]: time="2025-05-10T00:06:53.495449394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f9bb4b4-rkt4m,Uid:ca652ab2-74d7-4a4c-a866-d714bca54c18,Namespace:calico-apiserver,Attempt:0,}"
May 10 00:06:53.599194 containerd[1591]: time="2025-05-10T00:06:53.599143244Z" level=error msg="Failed to destroy network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.601769 containerd[1591]: time="2025-05-10T00:06:53.601249137Z" level=error msg="encountered an error cleaning up failed sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.602521 containerd[1591]: time="2025-05-10T00:06:53.601791361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zk9wm,Uid:871fa5f6-cf5f-424d-92ff-7537c13487e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.602608 kubelet[2970]: E0510 00:06:53.602027 2970 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.602608 kubelet[2970]: E0510 00:06:53.602118 2970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zk9wm"
May 10 00:06:53.602608 kubelet[2970]: E0510 00:06:53.602137 2970 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zk9wm"
May 10 00:06:53.602720 kubelet[2970]: E0510 00:06:53.602188 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zk9wm_kube-system(871fa5f6-cf5f-424d-92ff-7537c13487e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zk9wm_kube-system(871fa5f6-cf5f-424d-92ff-7537c13487e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zk9wm" podUID="871fa5f6-cf5f-424d-92ff-7537c13487e5"
May 10 00:06:53.672881 containerd[1591]: time="2025-05-10T00:06:53.672829572Z" level=error msg="Failed to destroy network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.673782 containerd[1591]: time="2025-05-10T00:06:53.673672049Z" level=error msg="encountered an error cleaning up failed sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.673782 containerd[1591]: time="2025-05-10T00:06:53.673730092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c46f8bc8-l65kq,Uid:934e1bfa-39db-47d3-8258-500edca573e3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.674566 kubelet[2970]: E0510 00:06:53.673924 2970 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.674566 kubelet[2970]: E0510 00:06:53.673976 2970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c46f8bc8-l65kq"
May 10 00:06:53.674566 kubelet[2970]: E0510 00:06:53.673995 2970 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c46f8bc8-l65kq"
May 10 00:06:53.674668 kubelet[2970]: E0510 00:06:53.674032 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55c46f8bc8-l65kq_calico-system(934e1bfa-39db-47d3-8258-500edca573e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55c46f8bc8-l65kq_calico-system(934e1bfa-39db-47d3-8258-500edca573e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55c46f8bc8-l65kq" podUID="934e1bfa-39db-47d3-8258-500edca573e3"
May 10 00:06:53.692726 containerd[1591]: time="2025-05-10T00:06:53.692585083Z" level=error msg="Failed to destroy network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.693075 containerd[1591]: time="2025-05-10T00:06:53.692883696Z" level=error msg="encountered an error cleaning up failed sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.693075 containerd[1591]: time="2025-05-10T00:06:53.692998661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhcs4,Uid:ca8a5621-5e04-45a0-a9d3-4c8113513a58,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.693792 kubelet[2970]: E0510 00:06:53.693248 2970 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.693792 kubelet[2970]: E0510 00:06:53.693333 2970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zhcs4"
May 10 00:06:53.693792 kubelet[2970]: E0510 00:06:53.693353 2970 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zhcs4"
May 10 00:06:53.693877 kubelet[2970]: E0510 00:06:53.693400 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zhcs4_kube-system(ca8a5621-5e04-45a0-a9d3-4c8113513a58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zhcs4_kube-system(ca8a5621-5e04-45a0-a9d3-4c8113513a58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhcs4" podUID="ca8a5621-5e04-45a0-a9d3-4c8113513a58"
May 10 00:06:53.703509 containerd[1591]: time="2025-05-10T00:06:53.703283395Z" level=error msg="Failed to destroy network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.704004 containerd[1591]: time="2025-05-10T00:06:53.703842659Z" level=error msg="encountered an error cleaning up failed sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.704004 containerd[1591]: time="2025-05-10T00:06:53.703919783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f9bb4b4-mnkn2,Uid:c8370026-6156-4314-9b6f-165657c0861d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.705003 kubelet[2970]: E0510 00:06:53.704163 2970 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.705003 kubelet[2970]: E0510 00:06:53.704221 2970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849f9bb4b4-mnkn2"
May 10 00:06:53.705003 kubelet[2970]: E0510 00:06:53.704245 2970 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849f9bb4b4-mnkn2"
May 10 00:06:53.705100 kubelet[2970]: E0510 00:06:53.704623 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849f9bb4b4-mnkn2_calico-apiserver(c8370026-6156-4314-9b6f-165657c0861d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849f9bb4b4-mnkn2_calico-apiserver(c8370026-6156-4314-9b6f-165657c0861d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849f9bb4b4-mnkn2" podUID="c8370026-6156-4314-9b6f-165657c0861d"
May 10 00:06:53.705725 containerd[1591]: time="2025-05-10T00:06:53.705691621Z" level=error msg="Failed to destroy network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.706707 containerd[1591]: time="2025-05-10T00:06:53.706670464Z" level=error msg="encountered an error cleaning up failed sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.706776 containerd[1591]: time="2025-05-10T00:06:53.706732027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f9bb4b4-rkt4m,Uid:ca652ab2-74d7-4a4c-a866-d714bca54c18,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.708522 kubelet[2970]: E0510 00:06:53.708329 2970 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.708522 kubelet[2970]: E0510 00:06:53.708391 2970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849f9bb4b4-rkt4m"
May 10 00:06:53.708522 kubelet[2970]: E0510 00:06:53.708408 2970 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849f9bb4b4-rkt4m"
May 10 00:06:53.708744 kubelet[2970]: E0510 00:06:53.708448 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849f9bb4b4-rkt4m_calico-apiserver(ca652ab2-74d7-4a4c-a866-d714bca54c18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849f9bb4b4-rkt4m_calico-apiserver(ca652ab2-74d7-4a4c-a866-d714bca54c18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849f9bb4b4-rkt4m" podUID="ca652ab2-74d7-4a4c-a866-d714bca54c18"
May 10 00:06:53.782160 containerd[1591]: time="2025-05-10T00:06:53.781726732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjhz,Uid:0766211e-6e96-4ed6-b977-d34cdc94d220,Namespace:calico-system,Attempt:0,}"
May 10 00:06:53.861854 containerd[1591]: time="2025-05-10T00:06:53.861727258Z" level=error msg="Failed to destroy network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.862659 containerd[1591]: time="2025-05-10T00:06:53.862434689Z" level=error msg="encountered an error cleaning up failed sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.862659 containerd[1591]: time="2025-05-10T00:06:53.862532894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjhz,Uid:0766211e-6e96-4ed6-b977-d34cdc94d220,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.863036 kubelet[2970]: E0510 00:06:53.862966 2970 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:53.863205 kubelet[2970]: E0510 00:06:53.863110 2970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hjhz"
May 10 00:06:53.863426 kubelet[2970]: E0510 00:06:53.863242 2970 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hjhz"
May 10 00:06:53.863946 kubelet[2970]: E0510 00:06:53.863872 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4hjhz_calico-system(0766211e-6e96-4ed6-b977-d34cdc94d220)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4hjhz_calico-system(0766211e-6e96-4ed6-b977-d34cdc94d220)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4hjhz" podUID="0766211e-6e96-4ed6-b977-d34cdc94d220"
May 10 00:06:53.903849 kubelet[2970]: I0510 00:06:53.903643 2970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5"
May 10 00:06:53.904192 containerd[1591]: time="2025-05-10T00:06:53.903571023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 10 00:06:53.908037 containerd[1591]: time="2025-05-10T00:06:53.907703805Z" level=info msg="StopPodSandbox for \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\""
May 10 00:06:53.908553 containerd[1591]: time="2025-05-10T00:06:53.908374034Z" level=info msg="Ensure that sandbox 2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5 in task-service has been cleanup successfully"
May 10 00:06:53.911835 kubelet[2970]: I0510 00:06:53.911694 2970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:06:53.914168 containerd[1591]: time="2025-05-10T00:06:53.913797433Z" level=info msg="StopPodSandbox for \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\""
May 10 00:06:53.914168 containerd[1591]: time="2025-05-10T00:06:53.913956720Z" level=info msg="Ensure that sandbox 9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092 in task-service has been cleanup successfully"
May 10 00:06:53.915531 kubelet[2970]: I0510 00:06:53.914914 2970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e"
May 10 00:06:53.916432 containerd[1591]: time="2025-05-10T00:06:53.916182218Z" level=info msg="StopPodSandbox for \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\""
May 10 00:06:53.917734 containerd[1591]: time="2025-05-10T00:06:53.917464555Z" level=info msg="Ensure that sandbox 3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e in task-service has been cleanup successfully"
May 10 00:06:53.918770 kubelet[2970]: I0510 00:06:53.918741 2970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624"
May 10 00:06:53.922218 containerd[1591]: time="2025-05-10T00:06:53.921373087Z" level=info msg="StopPodSandbox for \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\""
May 10 00:06:53.923065 containerd[1591]: time="2025-05-10T00:06:53.923024600Z" level=info msg="Ensure that sandbox 1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624 in task-service has been cleanup successfully"
May 10 00:06:53.924325 kubelet[2970]: I0510 00:06:53.923892 2970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:06:53.938771 containerd[1591]: time="2025-05-10T00:06:53.938719692Z" level=info msg="StopPodSandbox for \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\""
May 10 00:06:53.939121 containerd[1591]: time="2025-05-10T00:06:53.938913780Z" level=info msg="Ensure that sandbox d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5 in task-service has been cleanup successfully"
May 10 00:06:53.954856 kubelet[2970]: I0510 00:06:53.954795 2970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:06:53.960836 containerd[1591]: time="2025-05-10T00:06:53.960798105Z" level=info msg="StopPodSandbox for \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\""
May 10 00:06:53.962338 containerd[1591]: time="2025-05-10T00:06:53.961558018Z" level=info msg="Ensure that sandbox 26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f in task-service has been cleanup successfully"
May 10 00:06:54.071021 containerd[1591]: time="2025-05-10T00:06:54.070676989Z" level=error msg="StopPodSandbox for \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\" failed" error="failed to destroy network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 10 00:06:54.071760 kubelet[2970]: E0510 00:06:54.071436 2970 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5"
May 10 00:06:54.071760 kubelet[2970]: E0510 00:06:54.071527 2970 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5"}
May 10 00:06:54.071760 kubelet[2970]: E0510 00:06:54.071587 2970 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"934e1bfa-39db-47d3-8258-500edca573e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 10 00:06:54.071760 kubelet[2970]: E0510 00:06:54.071615 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"934e1bfa-39db-47d3-8258-500edca573e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\\\": plugin type=\\\"calico\\\" failed (delete): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55c46f8bc8-l65kq" podUID="934e1bfa-39db-47d3-8258-500edca573e3" May 10 00:06:54.092249 containerd[1591]: time="2025-05-10T00:06:54.092119587Z" level=error msg="StopPodSandbox for \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\" failed" error="failed to destroy network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:54.093000 kubelet[2970]: E0510 00:06:54.092433 2970 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" May 10 00:06:54.093000 kubelet[2970]: E0510 00:06:54.092503 2970 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"} May 10 00:06:54.093000 kubelet[2970]: E0510 00:06:54.092535 2970 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca652ab2-74d7-4a4c-a866-d714bca54c18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:54.093000 kubelet[2970]: E0510 00:06:54.092557 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca652ab2-74d7-4a4c-a866-d714bca54c18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849f9bb4b4-rkt4m" podUID="ca652ab2-74d7-4a4c-a866-d714bca54c18" May 10 00:06:54.094338 containerd[1591]: time="2025-05-10T00:06:54.094240922Z" level=error msg="StopPodSandbox for \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\" failed" error="failed to destroy network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:54.094682 kubelet[2970]: E0510 00:06:54.094646 2970 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" May 10 00:06:54.094743 kubelet[2970]: E0510 00:06:54.094688 2970 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"} May 10 
00:06:54.094743 kubelet[2970]: E0510 00:06:54.094722 2970 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8370026-6156-4314-9b6f-165657c0861d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:54.094830 kubelet[2970]: E0510 00:06:54.094741 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8370026-6156-4314-9b6f-165657c0861d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849f9bb4b4-mnkn2" podUID="c8370026-6156-4314-9b6f-165657c0861d" May 10 00:06:54.097599 containerd[1591]: time="2025-05-10T00:06:54.097403103Z" level=error msg="StopPodSandbox for \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\" failed" error="failed to destroy network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:54.097681 kubelet[2970]: E0510 00:06:54.097643 2970 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" May 10 00:06:54.097724 kubelet[2970]: E0510 00:06:54.097689 2970 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"} May 10 00:06:54.097750 kubelet[2970]: E0510 00:06:54.097726 2970 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0766211e-6e96-4ed6-b977-d34cdc94d220\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:54.097800 kubelet[2970]: E0510 00:06:54.097749 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0766211e-6e96-4ed6-b977-d34cdc94d220\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4hjhz" podUID="0766211e-6e96-4ed6-b977-d34cdc94d220" May 10 00:06:54.098825 containerd[1591]: time="2025-05-10T00:06:54.098791365Z" level=error msg="StopPodSandbox for \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\" failed" error="failed to destroy network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:54.099101 kubelet[2970]: E0510 00:06:54.098947 2970 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:06:54.099101 kubelet[2970]: E0510 00:06:54.098979 2970 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624"} May 10 00:06:54.099101 kubelet[2970]: E0510 00:06:54.099004 2970 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"871fa5f6-cf5f-424d-92ff-7537c13487e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:54.099101 kubelet[2970]: E0510 00:06:54.099021 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"871fa5f6-cf5f-424d-92ff-7537c13487e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zk9wm" podUID="871fa5f6-cf5f-424d-92ff-7537c13487e5" May 10 00:06:54.101691 containerd[1591]: time="2025-05-10T00:06:54.101648733Z" level=error msg="StopPodSandbox for \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\" failed" error="failed to destroy network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:06:54.101870 kubelet[2970]: E0510 00:06:54.101833 2970 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:06:54.101921 kubelet[2970]: E0510 00:06:54.101882 2970 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e"} May 10 00:06:54.101956 kubelet[2970]: E0510 00:06:54.101946 2970 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca8a5621-5e04-45a0-a9d3-4c8113513a58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:06:54.102005 kubelet[2970]: 
E0510 00:06:54.101970 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca8a5621-5e04-45a0-a9d3-4c8113513a58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhcs4" podUID="ca8a5621-5e04-45a0-a9d3-4c8113513a58" May 10 00:06:54.484618 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f-shm.mount: Deactivated successfully. May 10 00:06:54.484764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5-shm.mount: Deactivated successfully. May 10 00:06:54.484850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e-shm.mount: Deactivated successfully. May 10 00:06:54.484983 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092-shm.mount: Deactivated successfully. May 10 00:06:54.485064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624-shm.mount: Deactivated successfully. May 10 00:06:57.999467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount444764540.mount: Deactivated successfully. 
May 10 00:06:58.037429 containerd[1591]: time="2025-05-10T00:06:58.037378092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:58.039057 containerd[1591]: time="2025-05-10T00:06:58.038997008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 10 00:06:58.040490 containerd[1591]: time="2025-05-10T00:06:58.039408667Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:58.042310 containerd[1591]: time="2025-05-10T00:06:58.041804259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:58.042877 containerd[1591]: time="2025-05-10T00:06:58.042466010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.138758742s" May 10 00:06:58.042877 containerd[1591]: time="2025-05-10T00:06:58.042511092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 10 00:06:58.057837 containerd[1591]: time="2025-05-10T00:06:58.057793328Z" level=info msg="CreateContainer within sandbox \"81d25a0509d818a0fe1c96421ed49d42f238900995a37d131c8a6ca877ff5089\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 00:06:58.077022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020445884.mount: 
Deactivated successfully. May 10 00:06:58.081846 containerd[1591]: time="2025-05-10T00:06:58.081775611Z" level=info msg="CreateContainer within sandbox \"81d25a0509d818a0fe1c96421ed49d42f238900995a37d131c8a6ca877ff5089\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1b6a1e3f212939d873d8300165b35ba1b161601c15ca9d6f67274191c3a91cc2\"" May 10 00:06:58.082825 containerd[1591]: time="2025-05-10T00:06:58.082731456Z" level=info msg="StartContainer for \"1b6a1e3f212939d873d8300165b35ba1b161601c15ca9d6f67274191c3a91cc2\"" May 10 00:06:58.146777 containerd[1591]: time="2025-05-10T00:06:58.146728813Z" level=info msg="StartContainer for \"1b6a1e3f212939d873d8300165b35ba1b161601c15ca9d6f67274191c3a91cc2\" returns successfully" May 10 00:06:58.256285 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 00:06:58.256404 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 10 00:06:59.971608 kubelet[2970]: I0510 00:06:59.971564 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:01.911407 update_engine[1563]: I20250510 00:07:01.911338 1563 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:07:01.912231 update_engine[1563]: I20250510 00:07:01.911996 1563 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:07:01.912231 update_engine[1563]: I20250510 00:07:01.912189 1563 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 10 00:07:01.913427 update_engine[1563]: E20250510 00:07:01.913322 1563 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:07:01.913427 update_engine[1563]: I20250510 00:07:01.913390 1563 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 10 00:07:02.936716 kubelet[2970]: I0510 00:07:02.936547 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:06.776038 containerd[1591]: time="2025-05-10T00:07:06.775600979Z" level=info msg="StopPodSandbox for \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\"" May 10 00:07:06.778544 containerd[1591]: time="2025-05-10T00:07:06.777958818Z" level=info msg="StopPodSandbox for \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\"" May 10 00:07:06.779849 containerd[1591]: time="2025-05-10T00:07:06.779626222Z" level=info msg="StopPodSandbox for \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\"" May 10 00:07:06.880551 kubelet[2970]: I0510 00:07:06.880454 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v2mgl" podStartSLOduration=10.109185487 podStartE2EDuration="21.880380263s" podCreationTimestamp="2025-05-10 00:06:45 +0000 UTC" firstStartedPulling="2025-05-10 00:06:46.272158756 +0000 UTC m=+21.623272289" lastFinishedPulling="2025-05-10 00:06:58.043353532 +0000 UTC m=+33.394467065" observedRunningTime="2025-05-10 00:06:58.989434678 +0000 UTC m=+34.340548211" watchObservedRunningTime="2025-05-10 00:07:06.880380263 +0000 UTC m=+42.231493796" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.878 [INFO][4348] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.880 [INFO][4348] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" iface="eth0" netns="/var/run/netns/cni-b0f99596-3de4-4e94-70cf-b7427d982e76" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.881 [INFO][4348] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" iface="eth0" netns="/var/run/netns/cni-b0f99596-3de4-4e94-70cf-b7427d982e76" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.882 [INFO][4348] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" iface="eth0" netns="/var/run/netns/cni-b0f99596-3de4-4e94-70cf-b7427d982e76" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.882 [INFO][4348] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.882 [INFO][4348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.928 [INFO][4371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.928 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.928 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.940 [WARNING][4371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.940 [INFO][4371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.941 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:06.950755 containerd[1591]: 2025-05-10 00:07:06.947 [INFO][4348] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:07:06.952241 containerd[1591]: time="2025-05-10T00:07:06.951188033Z" level=info msg="TearDown network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\" successfully" May 10 00:07:06.952241 containerd[1591]: time="2025-05-10T00:07:06.951219754Z" level=info msg="StopPodSandbox for \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\" returns successfully" May 10 00:07:06.954697 containerd[1591]: time="2025-05-10T00:07:06.954655728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zk9wm,Uid:871fa5f6-cf5f-424d-92ff-7537c13487e5,Namespace:kube-system,Attempt:1,}" May 10 00:07:06.957497 systemd[1]: run-netns-cni\x2db0f99596\x2d3de4\x2d4e94\x2d70cf\x2db7427d982e76.mount: Deactivated successfully. 
May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.869 [INFO][4352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.870 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" iface="eth0" netns="/var/run/netns/cni-183fc49f-4c50-58ac-c4a2-edeb65bb0cf9" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.871 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" iface="eth0" netns="/var/run/netns/cni-183fc49f-4c50-58ac-c4a2-edeb65bb0cf9" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.871 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" iface="eth0" netns="/var/run/netns/cni-183fc49f-4c50-58ac-c4a2-edeb65bb0cf9" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.871 [INFO][4352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.871 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.929 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.929 
[INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.942 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.956 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.956 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.959 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:06.968692 containerd[1591]: 2025-05-10 00:07:06.962 [INFO][4352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:06.969694 containerd[1591]: time="2025-05-10T00:07:06.969347068Z" level=info msg="TearDown network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\" successfully" May 10 00:07:06.969694 containerd[1591]: time="2025-05-10T00:07:06.969391471Z" level=info msg="StopPodSandbox for \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\" returns successfully" May 10 00:07:06.969949 systemd[1]: run-netns-cni\x2d183fc49f\x2d4c50\x2d58ac\x2dc4a2\x2dedeb65bb0cf9.mount: Deactivated successfully. 
May 10 00:07:06.973193 containerd[1591]: time="2025-05-10T00:07:06.973080017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhcs4,Uid:ca8a5621-5e04-45a0-a9d3-4c8113513a58,Namespace:kube-system,Attempt:1,}" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.879 [INFO][4347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.879 [INFO][4347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" iface="eth0" netns="/var/run/netns/cni-486349bf-bc44-6eb5-e9c7-3df184f377ca" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.879 [INFO][4347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" iface="eth0" netns="/var/run/netns/cni-486349bf-bc44-6eb5-e9c7-3df184f377ca" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.879 [INFO][4347] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" iface="eth0" netns="/var/run/netns/cni-486349bf-bc44-6eb5-e9c7-3df184f377ca" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.879 [INFO][4347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.879 [INFO][4347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.931 [INFO][4369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.931 [INFO][4369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.959 [INFO][4369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.976 [WARNING][4369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.977 [INFO][4369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.980 [INFO][4369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:06.985220 containerd[1591]: 2025-05-10 00:07:06.983 [INFO][4347] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" May 10 00:07:06.985648 containerd[1591]: time="2025-05-10T00:07:06.985341355Z" level=info msg="TearDown network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\" successfully" May 10 00:07:06.985648 containerd[1591]: time="2025-05-10T00:07:06.985399998Z" level=info msg="StopPodSandbox for \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\" returns successfully" May 10 00:07:06.989561 containerd[1591]: time="2025-05-10T00:07:06.989348597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f9bb4b4-rkt4m,Uid:ca652ab2-74d7-4a4c-a866-d714bca54c18,Namespace:calico-apiserver,Attempt:1,}" May 10 00:07:06.999780 systemd[1]: run-netns-cni\x2d486349bf\x2dbc44\x2d6eb5\x2de9c7\x2d3df184f377ca.mount: Deactivated successfully. 
May 10 00:07:07.255759 systemd-networkd[1244]: cali523edb034bd: Link UP May 10 00:07:07.256092 systemd-networkd[1244]: cali523edb034bd: Gained carrier May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.052 [INFO][4390] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.087 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0 coredns-7db6d8ff4d- kube-system 871fa5f6-cf5f-424d-92ff-7537c13487e5 776 0 2025-05-10 00:06:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-60bc3761e6 coredns-7db6d8ff4d-zk9wm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali523edb034bd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zk9wm" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.087 [INFO][4390] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zk9wm" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.145 [INFO][4428] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" HandleID="k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 
00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.168 [INFO][4428] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" HandleID="k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cf60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-60bc3761e6", "pod":"coredns-7db6d8ff4d-zk9wm", "timestamp":"2025-05-10 00:07:07.145690136 +0000 UTC"}, Hostname:"ci-4081-3-3-n-60bc3761e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.168 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.168 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.168 [INFO][4428] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-60bc3761e6' May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.173 [INFO][4428] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.184 [INFO][4428] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.194 [INFO][4428] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.205 [INFO][4428] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.211 [INFO][4428] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.212 [INFO][4428] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.215 [INFO][4428] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.222 [INFO][4428] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.234 [INFO][4428] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.86.129/26] block=192.168.86.128/26 handle="k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.234 [INFO][4428] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.129/26] handle="k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.234 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:07.299111 containerd[1591]: 2025-05-10 00:07:07.234 [INFO][4428] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.129/26] IPv6=[] ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" HandleID="k8s-pod-network.155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:07.300962 containerd[1591]: 2025-05-10 00:07:07.238 [INFO][4390] cni-plugin/k8s.go 386: Populated endpoint ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zk9wm" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"871fa5f6-cf5f-424d-92ff-7537c13487e5", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"", Pod:"coredns-7db6d8ff4d-zk9wm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523edb034bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:07.300962 containerd[1591]: 2025-05-10 00:07:07.238 [INFO][4390] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.129/32] ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zk9wm" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:07.300962 containerd[1591]: 2025-05-10 00:07:07.238 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali523edb034bd ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zk9wm" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:07.300962 containerd[1591]: 2025-05-10 00:07:07.256 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zk9wm" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:07.300962 containerd[1591]: 2025-05-10 00:07:07.258 [INFO][4390] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zk9wm" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"871fa5f6-cf5f-424d-92ff-7537c13487e5", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de", Pod:"coredns-7db6d8ff4d-zk9wm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523edb034bd", MAC:"42:aa:2b:54:4a:a4", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:07.300962 containerd[1591]: 2025-05-10 00:07:07.285 [INFO][4390] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zk9wm" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:07.360367 systemd-networkd[1244]: cali81b6abd688f: Link UP May 10 00:07:07.366226 systemd-networkd[1244]: cali81b6abd688f: Gained carrier May 10 00:07:07.408968 containerd[1591]: time="2025-05-10T00:07:07.407329950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:07.408968 containerd[1591]: time="2025-05-10T00:07:07.407388913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:07.408968 containerd[1591]: time="2025-05-10T00:07:07.407404634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:07.408968 containerd[1591]: time="2025-05-10T00:07:07.407493278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.058 [INFO][4406] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.098 [INFO][4406] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0 calico-apiserver-849f9bb4b4- calico-apiserver ca652ab2-74d7-4a4c-a866-d714bca54c18 777 0 2025-05-10 00:06:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849f9bb4b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-60bc3761e6 calico-apiserver-849f9bb4b4-rkt4m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali81b6abd688f [] []}} ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-rkt4m" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.100 [INFO][4406] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-rkt4m" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.173 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" HandleID="k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" 
Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.195 [INFO][4436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" HandleID="k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000282630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-60bc3761e6", "pod":"calico-apiserver-849f9bb4b4-rkt4m", "timestamp":"2025-05-10 00:07:07.173725321 +0000 UTC"}, Hostname:"ci-4081-3-3-n-60bc3761e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.195 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.234 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.235 [INFO][4436] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-60bc3761e6' May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.243 [INFO][4436] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.249 [INFO][4436] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.261 [INFO][4436] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.267 [INFO][4436] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.280 [INFO][4436] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.280 [INFO][4436] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.295 [INFO][4436] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9 May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.302 [INFO][4436] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.325 [INFO][4436] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.86.130/26] block=192.168.86.128/26 handle="k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.327 [INFO][4436] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.130/26] handle="k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.327 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:07.410378 containerd[1591]: 2025-05-10 00:07:07.327 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.130/26] IPv6=[] ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" HandleID="k8s-pod-network.bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:07.410895 containerd[1591]: 2025-05-10 00:07:07.342 [INFO][4406] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-rkt4m" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0", GenerateName:"calico-apiserver-849f9bb4b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca652ab2-74d7-4a4c-a866-d714bca54c18", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f9bb4b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"", Pod:"calico-apiserver-849f9bb4b4-rkt4m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81b6abd688f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:07.410895 containerd[1591]: 2025-05-10 00:07:07.346 [INFO][4406] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.130/32] ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-rkt4m" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:07.410895 containerd[1591]: 2025-05-10 00:07:07.346 [INFO][4406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81b6abd688f ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-rkt4m" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:07.410895 containerd[1591]: 2025-05-10 00:07:07.359 [INFO][4406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-rkt4m" 
WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:07.410895 containerd[1591]: 2025-05-10 00:07:07.363 [INFO][4406] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-rkt4m" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0", GenerateName:"calico-apiserver-849f9bb4b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca652ab2-74d7-4a4c-a866-d714bca54c18", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f9bb4b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9", Pod:"calico-apiserver-849f9bb4b4-rkt4m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81b6abd688f", MAC:"de:3a:10:8c:3b:19", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:07.410895 containerd[1591]: 2025-05-10 00:07:07.397 [INFO][4406] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-rkt4m" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0" May 10 00:07:07.462073 systemd-networkd[1244]: calie4630501ead: Link UP May 10 00:07:07.462833 systemd-networkd[1244]: calie4630501ead: Gained carrier May 10 00:07:07.485901 containerd[1591]: time="2025-05-10T00:07:07.485778936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:07.486183 containerd[1591]: time="2025-05-10T00:07:07.486079311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:07.486353 containerd[1591]: time="2025-05-10T00:07:07.486232879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:07.487056 containerd[1591]: time="2025-05-10T00:07:07.486651740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.091 [INFO][4398] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.124 [INFO][4398] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0 coredns-7db6d8ff4d- kube-system ca8a5621-5e04-45a0-a9d3-4c8113513a58 775 0 2025-05-10 00:06:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-60bc3761e6 coredns-7db6d8ff4d-zhcs4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4630501ead [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhcs4" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.124 [INFO][4398] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhcs4" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.200 [INFO][4442] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" HandleID="k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.231 [INFO][4442] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" HandleID="k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-60bc3761e6", "pod":"coredns-7db6d8ff4d-zhcs4", "timestamp":"2025-05-10 00:07:07.19753313 +0000 UTC"}, Hostname:"ci-4081-3-3-n-60bc3761e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.231 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.327 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.327 [INFO][4442] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-60bc3761e6' May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.341 [INFO][4442] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.375 [INFO][4442] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.415 [INFO][4442] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.420 [INFO][4442] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.424 [INFO][4442] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.425 [INFO][4442] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.429 [INFO][4442] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6 May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.440 [INFO][4442] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.450 [INFO][4442] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.86.131/26] block=192.168.86.128/26 handle="k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.450 [INFO][4442] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.131/26] handle="k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.450 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:07.489332 containerd[1591]: 2025-05-10 00:07:07.450 [INFO][4442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.131/26] IPv6=[] ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" HandleID="k8s-pod-network.a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:07.490112 containerd[1591]: 2025-05-10 00:07:07.457 [INFO][4398] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhcs4" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca8a5621-5e04-45a0-a9d3-4c8113513a58", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"", Pod:"coredns-7db6d8ff4d-zhcs4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4630501ead", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:07.490112 containerd[1591]: 2025-05-10 00:07:07.457 [INFO][4398] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.131/32] ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhcs4" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:07.490112 containerd[1591]: 2025-05-10 00:07:07.457 [INFO][4398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4630501ead ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhcs4" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:07.490112 containerd[1591]: 2025-05-10 00:07:07.462 [INFO][4398] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhcs4" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:07.490112 containerd[1591]: 2025-05-10 00:07:07.465 [INFO][4398] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhcs4" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca8a5621-5e04-45a0-a9d3-4c8113513a58", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6", Pod:"coredns-7db6d8ff4d-zhcs4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4630501ead", MAC:"5e:8a:ef:4d:85:a6", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:07.490112 containerd[1591]: 2025-05-10 00:07:07.486 [INFO][4398] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhcs4" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:07.536002 containerd[1591]: time="2025-05-10T00:07:07.535964926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zk9wm,Uid:871fa5f6-cf5f-424d-92ff-7537c13487e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de\"" May 10 00:07:07.543586 containerd[1591]: time="2025-05-10T00:07:07.542980202Z" level=info msg="CreateContainer within sandbox \"155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:07:07.546391 containerd[1591]: time="2025-05-10T00:07:07.545574814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:07.546897 containerd[1591]: time="2025-05-10T00:07:07.546559264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:07.547720 containerd[1591]: time="2025-05-10T00:07:07.547555315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:07.548290 containerd[1591]: time="2025-05-10T00:07:07.548149145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:07.550535 containerd[1591]: time="2025-05-10T00:07:07.550497224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f9bb4b4-rkt4m,Uid:ca652ab2-74d7-4a4c-a866-d714bca54c18,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9\"" May 10 00:07:07.555507 containerd[1591]: time="2025-05-10T00:07:07.555471117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:07:07.573562 containerd[1591]: time="2025-05-10T00:07:07.573492153Z" level=info msg="CreateContainer within sandbox \"155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73ce2265e5e01bd8de41938948051e9fa9e3592dcd895ebbc6c95f26cde2af79\"" May 10 00:07:07.574754 containerd[1591]: time="2025-05-10T00:07:07.574500844Z" level=info msg="StartContainer for \"73ce2265e5e01bd8de41938948051e9fa9e3592dcd895ebbc6c95f26cde2af79\"" May 10 00:07:07.632642 containerd[1591]: time="2025-05-10T00:07:07.632585515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhcs4,Uid:ca8a5621-5e04-45a0-a9d3-4c8113513a58,Namespace:kube-system,Attempt:1,} returns sandbox id \"a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6\"" May 10 00:07:07.639758 containerd[1591]: time="2025-05-10T00:07:07.639658435Z" level=info msg="CreateContainer within sandbox 
\"a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:07:07.642305 containerd[1591]: time="2025-05-10T00:07:07.642225045Z" level=info msg="StartContainer for \"73ce2265e5e01bd8de41938948051e9fa9e3592dcd895ebbc6c95f26cde2af79\" returns successfully" May 10 00:07:07.656829 containerd[1591]: time="2025-05-10T00:07:07.656712701Z" level=info msg="CreateContainer within sandbox \"a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbe0bcd886ab40d1fcef429098ad325f4c231620db5864c8d77d80b7c482f134\"" May 10 00:07:07.658404 containerd[1591]: time="2025-05-10T00:07:07.658364945Z" level=info msg="StartContainer for \"dbe0bcd886ab40d1fcef429098ad325f4c231620db5864c8d77d80b7c482f134\"" May 10 00:07:07.753867 containerd[1591]: time="2025-05-10T00:07:07.753720710Z" level=info msg="StartContainer for \"dbe0bcd886ab40d1fcef429098ad325f4c231620db5864c8d77d80b7c482f134\" returns successfully" May 10 00:07:07.776015 containerd[1591]: time="2025-05-10T00:07:07.775505697Z" level=info msg="StopPodSandbox for \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\"" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.850 [INFO][4705] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.851 [INFO][4705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" iface="eth0" netns="/var/run/netns/cni-577b9f94-caf5-5de5-04d3-0d17431e3216" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.852 [INFO][4705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" iface="eth0" netns="/var/run/netns/cni-577b9f94-caf5-5de5-04d3-0d17431e3216" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.853 [INFO][4705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" iface="eth0" netns="/var/run/netns/cni-577b9f94-caf5-5de5-04d3-0d17431e3216" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.853 [INFO][4705] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.853 [INFO][4705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.876 [INFO][4714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.876 [INFO][4714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.876 [INFO][4714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.888 [WARNING][4714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.889 [INFO][4714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.891 [INFO][4714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:07.895313 containerd[1591]: 2025-05-10 00:07:07.893 [INFO][4705] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" May 10 00:07:07.896008 containerd[1591]: time="2025-05-10T00:07:07.895453432Z" level=info msg="TearDown network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\" successfully" May 10 00:07:07.896008 containerd[1591]: time="2025-05-10T00:07:07.895481433Z" level=info msg="StopPodSandbox for \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\" returns successfully" May 10 00:07:07.896239 containerd[1591]: time="2025-05-10T00:07:07.896131386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjhz,Uid:0766211e-6e96-4ed6-b977-d34cdc94d220,Namespace:calico-system,Attempt:1,}" May 10 00:07:07.970107 systemd[1]: run-netns-cni\x2d577b9f94\x2dcaf5\x2d5de5\x2d04d3\x2d0d17431e3216.mount: Deactivated successfully. 
May 10 00:07:08.035713 kubelet[2970]: I0510 00:07:08.032273 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zhcs4" podStartSLOduration=30.032240674 podStartE2EDuration="30.032240674s" podCreationTimestamp="2025-05-10 00:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:07:08.030082443 +0000 UTC m=+43.381196016" watchObservedRunningTime="2025-05-10 00:07:08.032240674 +0000 UTC m=+43.383354207" May 10 00:07:08.093628 systemd-networkd[1244]: calid9adb5875ea: Link UP May 10 00:07:08.093778 systemd-networkd[1244]: calid9adb5875ea: Gained carrier May 10 00:07:08.126920 kubelet[2970]: I0510 00:07:08.126063 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zk9wm" podStartSLOduration=30.126029674 podStartE2EDuration="30.126029674s" podCreationTimestamp="2025-05-10 00:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:07:08.076663347 +0000 UTC m=+43.427776880" watchObservedRunningTime="2025-05-10 00:07:08.126029674 +0000 UTC m=+43.477143207" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:07.931 [INFO][4721] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:07.949 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0 csi-node-driver- calico-system 0766211e-6e96-4ed6-b977-d34cdc94d220 795 0 2025-05-10 00:06:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-60bc3761e6 csi-node-driver-4hjhz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid9adb5875ea [] []}} ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Namespace="calico-system" Pod="csi-node-driver-4hjhz" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:07.949 [INFO][4721] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Namespace="calico-system" Pod="csi-node-driver-4hjhz" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:07.993 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" HandleID="k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.011 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" HandleID="k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-60bc3761e6", "pod":"csi-node-driver-4hjhz", "timestamp":"2025-05-10 00:07:07.993954637 +0000 UTC"}, Hostname:"ci-4081-3-3-n-60bc3761e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.011 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.011 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.011 [INFO][4733] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-60bc3761e6' May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.017 [INFO][4733] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.036 [INFO][4733] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.043 [INFO][4733] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.048 [INFO][4733] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.053 [INFO][4733] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.054 [INFO][4733] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.063 [INFO][4733] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d May 10 00:07:08.129351 
containerd[1591]: 2025-05-10 00:07:08.071 [INFO][4733] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.080 [INFO][4733] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.132/26] block=192.168.86.128/26 handle="k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.081 [INFO][4733] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.132/26] handle="k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.081 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:08.129351 containerd[1591]: 2025-05-10 00:07:08.081 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.132/26] IPv6=[] ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" HandleID="k8s-pod-network.eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:08.129923 containerd[1591]: 2025-05-10 00:07:08.086 [INFO][4721] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Namespace="calico-system" Pod="csi-node-driver-4hjhz" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"0766211e-6e96-4ed6-b977-d34cdc94d220", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"", Pod:"csi-node-driver-4hjhz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9adb5875ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:08.129923 containerd[1591]: 2025-05-10 00:07:08.086 [INFO][4721] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.132/32] ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Namespace="calico-system" Pod="csi-node-driver-4hjhz" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:08.129923 containerd[1591]: 2025-05-10 00:07:08.086 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9adb5875ea ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Namespace="calico-system" Pod="csi-node-driver-4hjhz" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:08.129923 containerd[1591]: 2025-05-10 00:07:08.092 
[INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Namespace="calico-system" Pod="csi-node-driver-4hjhz" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:08.129923 containerd[1591]: 2025-05-10 00:07:08.094 [INFO][4721] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Namespace="calico-system" Pod="csi-node-driver-4hjhz" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0766211e-6e96-4ed6-b977-d34cdc94d220", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d", Pod:"csi-node-driver-4hjhz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9adb5875ea", MAC:"06:5c:8d:a8:48:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:08.129923 containerd[1591]: 2025-05-10 00:07:08.121 [INFO][4721] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d" Namespace="calico-system" Pod="csi-node-driver-4hjhz" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0" May 10 00:07:08.160055 containerd[1591]: time="2025-05-10T00:07:08.158456734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:08.160055 containerd[1591]: time="2025-05-10T00:07:08.158586221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:08.160055 containerd[1591]: time="2025-05-10T00:07:08.158605422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:08.160055 containerd[1591]: time="2025-05-10T00:07:08.158912077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:08.235700 containerd[1591]: time="2025-05-10T00:07:08.235660286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjhz,Uid:0766211e-6e96-4ed6-b977-d34cdc94d220,Namespace:calico-system,Attempt:1,} returns sandbox id \"eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d\"" May 10 00:07:08.421855 systemd-networkd[1244]: cali523edb034bd: Gained IPv6LL May 10 00:07:09.316411 systemd-networkd[1244]: cali81b6abd688f: Gained IPv6LL May 10 00:07:09.316994 systemd-networkd[1244]: calid9adb5875ea: Gained IPv6LL May 10 00:07:09.380515 systemd-networkd[1244]: calie4630501ead: Gained IPv6LL May 10 00:07:09.510779 kubelet[2970]: I0510 00:07:09.510436 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:09.596561 containerd[1591]: time="2025-05-10T00:07:09.596478838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:09.598681 containerd[1591]: time="2025-05-10T00:07:09.597657458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 10 00:07:09.601977 containerd[1591]: time="2025-05-10T00:07:09.599380507Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:09.610171 containerd[1591]: time="2025-05-10T00:07:09.610132901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:09.614629 containerd[1591]: time="2025-05-10T00:07:09.614586571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id 
\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.058736234s" May 10 00:07:09.615062 containerd[1591]: time="2025-05-10T00:07:09.615033674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:07:09.618743 containerd[1591]: time="2025-05-10T00:07:09.617764575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 00:07:09.625723 containerd[1591]: time="2025-05-10T00:07:09.625591058Z" level=info msg="CreateContainer within sandbox \"bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:07:09.662100 containerd[1591]: time="2025-05-10T00:07:09.662056898Z" level=info msg="CreateContainer within sandbox \"bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d6672bb2411880b4a451acb812f2462ad20b08e113f7c3fe8fbafd72e4f859eb\"" May 10 00:07:09.667590 containerd[1591]: time="2025-05-10T00:07:09.666357040Z" level=info msg="StartContainer for \"d6672bb2411880b4a451acb812f2462ad20b08e113f7c3fe8fbafd72e4f859eb\"" May 10 00:07:09.752460 containerd[1591]: time="2025-05-10T00:07:09.752417156Z" level=info msg="StartContainer for \"d6672bb2411880b4a451acb812f2462ad20b08e113f7c3fe8fbafd72e4f859eb\" returns successfully" May 10 00:07:09.776357 containerd[1591]: time="2025-05-10T00:07:09.775094645Z" level=info msg="StopPodSandbox for \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\"" May 10 00:07:09.776958 containerd[1591]: time="2025-05-10T00:07:09.775442703Z" level=info msg="StopPodSandbox for 
\"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\"" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.887 [INFO][4928] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.887 [INFO][4928] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" iface="eth0" netns="/var/run/netns/cni-46728cd7-8c93-8366-5333-46a6486de7e8" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.888 [INFO][4928] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" iface="eth0" netns="/var/run/netns/cni-46728cd7-8c93-8366-5333-46a6486de7e8" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.889 [INFO][4928] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" iface="eth0" netns="/var/run/netns/cni-46728cd7-8c93-8366-5333-46a6486de7e8" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.889 [INFO][4928] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.889 [INFO][4928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.938 [INFO][4947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.938 [INFO][4947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.938 [INFO][4947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.953 [WARNING][4947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.953 [INFO][4947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.955 [INFO][4947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:09.971666 containerd[1591]: 2025-05-10 00:07:09.960 [INFO][4928] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" May 10 00:07:09.975628 containerd[1591]: time="2025-05-10T00:07:09.971762664Z" level=info msg="TearDown network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\" successfully" May 10 00:07:09.975628 containerd[1591]: time="2025-05-10T00:07:09.971799265Z" level=info msg="StopPodSandbox for \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\" returns successfully" May 10 00:07:09.979495 systemd[1]: run-netns-cni\x2d46728cd7\x2d8c93\x2d8366\x2d5333\x2d46a6486de7e8.mount: Deactivated successfully. May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.884 [INFO][4929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.884 [INFO][4929] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" iface="eth0" netns="/var/run/netns/cni-505cf0d4-8456-0ad1-3e23-bd85c8b5bd89" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.886 [INFO][4929] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" iface="eth0" netns="/var/run/netns/cni-505cf0d4-8456-0ad1-3e23-bd85c8b5bd89" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.887 [INFO][4929] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" iface="eth0" netns="/var/run/netns/cni-505cf0d4-8456-0ad1-3e23-bd85c8b5bd89" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.887 [INFO][4929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.887 [INFO][4929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.942 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.942 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.955 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.967 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.967 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.969 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:09.981015 containerd[1591]: 2025-05-10 00:07:09.972 [INFO][4929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:09.983333 containerd[1591]: time="2025-05-10T00:07:09.982351649Z" level=info msg="TearDown network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\" successfully" May 10 00:07:09.983333 containerd[1591]: time="2025-05-10T00:07:09.982874876Z" level=info msg="StopPodSandbox for \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\" returns successfully" May 10 00:07:09.983333 containerd[1591]: time="2025-05-10T00:07:09.983066486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f9bb4b4-mnkn2,Uid:c8370026-6156-4314-9b6f-165657c0861d,Namespace:calico-apiserver,Attempt:1,}" May 10 00:07:09.986041 systemd[1]: run-netns-cni\x2d505cf0d4\x2d8456\x2d0ad1\x2d3e23\x2dbd85c8b5bd89.mount: Deactivated successfully. 
May 10 00:07:09.986859 containerd[1591]: time="2025-05-10T00:07:09.986823480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c46f8bc8-l65kq,Uid:934e1bfa-39db-47d3-8258-500edca573e3,Namespace:calico-system,Attempt:1,}" May 10 00:07:10.271301 systemd-networkd[1244]: cali29d4f34faf2: Link UP May 10 00:07:10.274486 systemd-networkd[1244]: cali29d4f34faf2: Gained carrier May 10 00:07:10.294601 kubelet[2970]: I0510 00:07:10.292027 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-849f9bb4b4-rkt4m" podStartSLOduration=23.230117399 podStartE2EDuration="25.292006595s" podCreationTimestamp="2025-05-10 00:06:45 +0000 UTC" firstStartedPulling="2025-05-10 00:07:07.554756561 +0000 UTC m=+42.905870094" lastFinishedPulling="2025-05-10 00:07:09.616645797 +0000 UTC m=+44.967759290" observedRunningTime="2025-05-10 00:07:10.130707943 +0000 UTC m=+45.481821516" watchObservedRunningTime="2025-05-10 00:07:10.292006595 +0000 UTC m=+45.643120128" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.060 [INFO][4961] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.089 [INFO][4961] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0 calico-apiserver-849f9bb4b4- calico-apiserver c8370026-6156-4314-9b6f-165657c0861d 834 0 2025-05-10 00:06:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849f9bb4b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-60bc3761e6 calico-apiserver-849f9bb4b4-mnkn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29d4f34faf2 [] []}} 
ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-mnkn2" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.090 [INFO][4961] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-mnkn2" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.198 [INFO][4987] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" HandleID="k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.217 [INFO][4987] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" HandleID="k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038cee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-60bc3761e6", "pod":"calico-apiserver-849f9bb4b4-mnkn2", "timestamp":"2025-05-10 00:07:10.198944965 +0000 UTC"}, Hostname:"ci-4081-3-3-n-60bc3761e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.218 [INFO][4987] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.218 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.218 [INFO][4987] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-60bc3761e6' May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.220 [INFO][4987] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.226 [INFO][4987] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.231 [INFO][4987] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.239 [INFO][4987] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.243 [INFO][4987] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.243 [INFO][4987] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.245 [INFO][4987] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.251 [INFO][4987] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 
handle="k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.259 [INFO][4987] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.133/26] block=192.168.86.128/26 handle="k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.259 [INFO][4987] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.133/26] handle="k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.259 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:10.300125 containerd[1591]: 2025-05-10 00:07:10.259 [INFO][4987] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.133/26] IPv6=[] ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" HandleID="k8s-pod-network.a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:10.302460 containerd[1591]: 2025-05-10 00:07:10.261 [INFO][4961] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-mnkn2" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0", GenerateName:"calico-apiserver-849f9bb4b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8370026-6156-4314-9b6f-165657c0861d", ResourceVersion:"834", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f9bb4b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"", Pod:"calico-apiserver-849f9bb4b4-mnkn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29d4f34faf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:10.302460 containerd[1591]: 2025-05-10 00:07:10.262 [INFO][4961] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.133/32] ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-mnkn2" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:10.302460 containerd[1591]: 2025-05-10 00:07:10.262 [INFO][4961] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29d4f34faf2 ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-mnkn2" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:10.302460 containerd[1591]: 2025-05-10 00:07:10.272 [INFO][4961] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-mnkn2" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:10.302460 containerd[1591]: 2025-05-10 00:07:10.273 [INFO][4961] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-mnkn2" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0", GenerateName:"calico-apiserver-849f9bb4b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8370026-6156-4314-9b6f-165657c0861d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f9bb4b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a", Pod:"calico-apiserver-849f9bb4b4-mnkn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29d4f34faf2", MAC:"ee:05:6a:9d:bd:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:10.302460 containerd[1591]: 2025-05-10 00:07:10.290 [INFO][4961] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a" Namespace="calico-apiserver" Pod="calico-apiserver-849f9bb4b4-mnkn2" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0" May 10 00:07:10.355695 containerd[1591]: time="2025-05-10T00:07:10.353602392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:10.355695 containerd[1591]: time="2025-05-10T00:07:10.353664275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:10.355695 containerd[1591]: time="2025-05-10T00:07:10.353680196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:10.355695 containerd[1591]: time="2025-05-10T00:07:10.353776241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:10.373586 systemd-networkd[1244]: cali5edd2a81f86: Link UP May 10 00:07:10.376928 systemd-networkd[1244]: cali5edd2a81f86: Gained carrier May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.071 [INFO][4971] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.121 [INFO][4971] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0 calico-kube-controllers-55c46f8bc8- calico-system 934e1bfa-39db-47d3-8258-500edca573e3 833 0 2025-05-10 00:06:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55c46f8bc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-60bc3761e6 calico-kube-controllers-55c46f8bc8-l65kq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5edd2a81f86 [] []}} ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Namespace="calico-system" Pod="calico-kube-controllers-55c46f8bc8-l65kq" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.121 [INFO][4971] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Namespace="calico-system" Pod="calico-kube-controllers-55c46f8bc8-l65kq" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.229 [INFO][4993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" HandleID="k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.244 [INFO][4993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" HandleID="k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a5850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-60bc3761e6", "pod":"calico-kube-controllers-55c46f8bc8-l65kq", "timestamp":"2025-05-10 00:07:10.229123291 +0000 UTC"}, Hostname:"ci-4081-3-3-n-60bc3761e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.244 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.259 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.259 [INFO][4993] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-60bc3761e6' May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.263 [INFO][4993] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.277 [INFO][4993] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.309 [INFO][4993] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.314 [INFO][4993] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.320 [INFO][4993] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.321 [INFO][4993] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.324 [INFO][4993] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9 May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.334 [INFO][4993] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.347 [INFO][4993] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.86.134/26] block=192.168.86.128/26 handle="k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.347 [INFO][4993] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.134/26] handle="k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" host="ci-4081-3-3-n-60bc3761e6" May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.347 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:10.413846 containerd[1591]: 2025-05-10 00:07:10.347 [INFO][4993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.134/26] IPv6=[] ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" HandleID="k8s-pod-network.f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:10.414527 containerd[1591]: 2025-05-10 00:07:10.360 [INFO][4971] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Namespace="calico-system" Pod="calico-kube-controllers-55c46f8bc8-l65kq" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0", GenerateName:"calico-kube-controllers-55c46f8bc8-", Namespace:"calico-system", SelfLink:"", UID:"934e1bfa-39db-47d3-8258-500edca573e3", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c46f8bc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"", Pod:"calico-kube-controllers-55c46f8bc8-l65kq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5edd2a81f86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:10.414527 containerd[1591]: 2025-05-10 00:07:10.360 [INFO][4971] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.134/32] ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Namespace="calico-system" Pod="calico-kube-controllers-55c46f8bc8-l65kq" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0"
May 10 00:07:10.414527 containerd[1591]: 2025-05-10 00:07:10.360 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5edd2a81f86 ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Namespace="calico-system" Pod="calico-kube-controllers-55c46f8bc8-l65kq" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0"
May 10 00:07:10.414527 containerd[1591]: 2025-05-10 00:07:10.381 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Namespace="calico-system" Pod="calico-kube-controllers-55c46f8bc8-l65kq" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0"
May 10 00:07:10.414527 containerd[1591]: 2025-05-10 00:07:10.386 [INFO][4971] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Namespace="calico-system" Pod="calico-kube-controllers-55c46f8bc8-l65kq" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0", GenerateName:"calico-kube-controllers-55c46f8bc8-", Namespace:"calico-system", SelfLink:"", UID:"934e1bfa-39db-47d3-8258-500edca573e3", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c46f8bc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9", Pod:"calico-kube-controllers-55c46f8bc8-l65kq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5edd2a81f86", MAC:"36:18:a8:8e:91:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:10.414527 containerd[1591]: 2025-05-10 00:07:10.406 [INFO][4971] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9" Namespace="calico-system" Pod="calico-kube-controllers-55c46f8bc8-l65kq" WorkloadEndpoint="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0"
May 10 00:07:10.469439 containerd[1591]: time="2025-05-10T00:07:10.463678185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:07:10.469439 containerd[1591]: time="2025-05-10T00:07:10.464304498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:07:10.469439 containerd[1591]: time="2025-05-10T00:07:10.464365701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:07:10.469439 containerd[1591]: time="2025-05-10T00:07:10.464610714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:07:10.559920 containerd[1591]: time="2025-05-10T00:07:10.559866738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f9bb4b4-mnkn2,Uid:c8370026-6156-4314-9b6f-165657c0861d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a\""
May 10 00:07:10.568078 containerd[1591]: time="2025-05-10T00:07:10.567951717Z" level=info msg="CreateContainer within sandbox \"a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 10 00:07:10.582281 kernel: bpftool[5123]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 10 00:07:10.597301 containerd[1591]: time="2025-05-10T00:07:10.596729571Z" level=info msg="CreateContainer within sandbox \"a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"48f58438205176387de0faaed425be77ec4d9097d1c64051423ed0b65a73f46c\""
May 10 00:07:10.599021 containerd[1591]: time="2025-05-10T00:07:10.598277411Z" level=info msg="StartContainer for \"48f58438205176387de0faaed425be77ec4d9097d1c64051423ed0b65a73f46c\""
May 10 00:07:10.634468 containerd[1591]: time="2025-05-10T00:07:10.634175194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c46f8bc8-l65kq,Uid:934e1bfa-39db-47d3-8258-500edca573e3,Namespace:calico-system,Attempt:1,} returns sandbox id \"f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9\""
May 10 00:07:10.759915 containerd[1591]: time="2025-05-10T00:07:10.759861558Z" level=info msg="StartContainer for \"48f58438205176387de0faaed425be77ec4d9097d1c64051423ed0b65a73f46c\" returns successfully"
May 10 00:07:11.115709 kubelet[2970]: I0510 00:07:11.115538 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-849f9bb4b4-mnkn2" podStartSLOduration=26.115519257 podStartE2EDuration="26.115519257s" podCreationTimestamp="2025-05-10 00:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:07:11.110880174 +0000 UTC m=+46.461993707" watchObservedRunningTime="2025-05-10 00:07:11.115519257 +0000 UTC m=+46.466632790"
May 10 00:07:11.182140 systemd-networkd[1244]: vxlan.calico: Link UP
May 10 00:07:11.182154 systemd-networkd[1244]: vxlan.calico: Gained carrier
May 10 00:07:11.204551 containerd[1591]: time="2025-05-10T00:07:11.204495225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:11.208207 containerd[1591]: time="2025-05-10T00:07:11.207131483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 10 00:07:11.208378 containerd[1591]: time="2025-05-10T00:07:11.208338866Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:11.212581 containerd[1591]: time="2025-05-10T00:07:11.212004418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:11.214847 containerd[1591]: time="2025-05-10T00:07:11.214124368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.596317631s"
May 10 00:07:11.214847 containerd[1591]: time="2025-05-10T00:07:11.214185012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 10 00:07:11.216434 containerd[1591]: time="2025-05-10T00:07:11.216402447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 10 00:07:11.220024 containerd[1591]: time="2025-05-10T00:07:11.219989875Z" level=info msg="CreateContainer within sandbox \"eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 10 00:07:11.328270 containerd[1591]: time="2025-05-10T00:07:11.328216529Z" level=info msg="CreateContainer within sandbox \"eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e60580301c0466fce0c8c3d3219340e4f421a87ec8bcbf2118728c788980431f\""
May 10 00:07:11.329013 containerd[1591]: time="2025-05-10T00:07:11.328982929Z" level=info msg="StartContainer for \"e60580301c0466fce0c8c3d3219340e4f421a87ec8bcbf2118728c788980431f\""
May 10 00:07:11.487843 containerd[1591]: time="2025-05-10T00:07:11.487727903Z" level=info msg="StartContainer for \"e60580301c0466fce0c8c3d3219340e4f421a87ec8bcbf2118728c788980431f\" returns successfully"
May 10 00:07:11.911346 update_engine[1563]: I20250510 00:07:11.911279 1563 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 10 00:07:11.911716 update_engine[1563]: I20250510 00:07:11.911504 1563 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 10 00:07:11.911716 update_engine[1563]: I20250510 00:07:11.911697 1563 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 10 00:07:11.912621 update_engine[1563]: E20250510 00:07:11.912592 1563 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 10 00:07:11.912673 update_engine[1563]: I20250510 00:07:11.912650 1563 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 10 00:07:12.090432 kubelet[2970]: I0510 00:07:12.089908 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 00:07:12.090432 kubelet[2970]: I0510 00:07:12.089959 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 00:07:12.261271 systemd-networkd[1244]: cali29d4f34faf2: Gained IPv6LL
May 10 00:07:12.324421 systemd-networkd[1244]: cali5edd2a81f86: Gained IPv6LL
May 10 00:07:12.967498 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL
May 10 00:07:13.151474 containerd[1591]: time="2025-05-10T00:07:13.150612689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:13.153312 containerd[1591]: time="2025-05-10T00:07:13.153215147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116"
May 10 00:07:13.154942 containerd[1591]: time="2025-05-10T00:07:13.154878915Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:13.169539 containerd[1591]: time="2025-05-10T00:07:13.169468047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:13.170082 containerd[1591]: time="2025-05-10T00:07:13.170038917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.953597827s"
May 10 00:07:13.170082 containerd[1591]: time="2025-05-10T00:07:13.170075599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\""
May 10 00:07:13.172660 containerd[1591]: time="2025-05-10T00:07:13.172576251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 10 00:07:13.182982 containerd[1591]: time="2025-05-10T00:07:13.182748669Z" level=info msg="CreateContainer within sandbox \"f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 10 00:07:13.203616 containerd[1591]: time="2025-05-10T00:07:13.203419963Z" level=info msg="CreateContainer within sandbox \"f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"84ec593fc48618e1f1a143587177708c8f376b41e8ecf5e482fb152dc160eacf\""
May 10 00:07:13.205378 containerd[1591]: time="2025-05-10T00:07:13.204396134Z" level=info msg="StartContainer for \"84ec593fc48618e1f1a143587177708c8f376b41e8ecf5e482fb152dc160eacf\""
May 10 00:07:13.282453 containerd[1591]: time="2025-05-10T00:07:13.280921942Z" level=info msg="StartContainer for \"84ec593fc48618e1f1a143587177708c8f376b41e8ecf5e482fb152dc160eacf\" returns successfully"
May 10 00:07:14.123340 kubelet[2970]: I0510 00:07:14.123235 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55c46f8bc8-l65kq" podStartSLOduration=25.592518598 podStartE2EDuration="28.123215735s" podCreationTimestamp="2025-05-10 00:06:46 +0000 UTC" firstStartedPulling="2025-05-10 00:07:10.641376928 +0000 UTC m=+45.992490461" lastFinishedPulling="2025-05-10 00:07:13.172074025 +0000 UTC m=+48.523187598" observedRunningTime="2025-05-10 00:07:14.121874024 +0000 UTC m=+49.472987557" watchObservedRunningTime="2025-05-10 00:07:14.123215735 +0000 UTC m=+49.474329268"
May 10 00:07:14.584179 containerd[1591]: time="2025-05-10T00:07:14.584123499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:14.585634 containerd[1591]: time="2025-05-10T00:07:14.585382806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 10 00:07:14.586488 containerd[1591]: time="2025-05-10T00:07:14.586395940Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:14.589082 containerd[1591]: time="2025-05-10T00:07:14.589037880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:07:14.591388 containerd[1591]: time="2025-05-10T00:07:14.590834456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.418194562s"
May 10 00:07:14.591388 containerd[1591]: time="2025-05-10T00:07:14.590899260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 10 00:07:14.595617 containerd[1591]: time="2025-05-10T00:07:14.595572788Z" level=info msg="CreateContainer within sandbox \"eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 10 00:07:14.615274 containerd[1591]: time="2025-05-10T00:07:14.615183472Z" level=info msg="CreateContainer within sandbox \"eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"12ba0027438ab6bbc2d0b00f39726c53fa2b9d3278540ba4b0191caa15a84dd1\""
May 10 00:07:14.618880 containerd[1591]: time="2025-05-10T00:07:14.618837466Z" level=info msg="StartContainer for \"12ba0027438ab6bbc2d0b00f39726c53fa2b9d3278540ba4b0191caa15a84dd1\""
May 10 00:07:14.686071 containerd[1591]: time="2025-05-10T00:07:14.685873793Z" level=info msg="StartContainer for \"12ba0027438ab6bbc2d0b00f39726c53fa2b9d3278540ba4b0191caa15a84dd1\" returns successfully"
May 10 00:07:14.888618 kubelet[2970]: I0510 00:07:14.887779 2970 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 10 00:07:14.888618 kubelet[2970]: I0510 00:07:14.887838 2970 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 10 00:07:21.909314 update_engine[1563]: I20250510 00:07:21.908875 1563 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 10 00:07:21.909314 update_engine[1563]: I20250510 00:07:21.909197 1563 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 10 00:07:21.909888 update_engine[1563]: I20250510 00:07:21.909736 1563 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 10 00:07:21.910566 update_engine[1563]: E20250510 00:07:21.910535 1563 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 10 00:07:21.910624 update_engine[1563]: I20250510 00:07:21.910589 1563 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 10 00:07:21.910624 update_engine[1563]: I20250510 00:07:21.910598 1563 omaha_request_action.cc:617] Omaha request response:
May 10 00:07:21.910743 update_engine[1563]: E20250510 00:07:21.910710 1563 omaha_request_action.cc:636] Omaha request network transfer failed.
May 10 00:07:21.910743 update_engine[1563]: I20250510 00:07:21.910738 1563 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 10 00:07:21.910817 update_engine[1563]: I20250510 00:07:21.910745 1563 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 10 00:07:21.910817 update_engine[1563]: I20250510 00:07:21.910750 1563 update_attempter.cc:306] Processing Done.
May 10 00:07:21.910817 update_engine[1563]: E20250510 00:07:21.910763 1563 update_attempter.cc:619] Update failed.
May 10 00:07:21.910817 update_engine[1563]: I20250510 00:07:21.910769 1563 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 10 00:07:21.910817 update_engine[1563]: I20250510 00:07:21.910773 1563 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 10 00:07:21.910817 update_engine[1563]: I20250510 00:07:21.910779 1563 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 10 00:07:21.910979 update_engine[1563]: I20250510 00:07:21.910844 1563 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 10 00:07:21.910979 update_engine[1563]: I20250510 00:07:21.910866 1563 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 10 00:07:21.910979 update_engine[1563]: I20250510 00:07:21.910871 1563 omaha_request_action.cc:272] Request:
May 10 00:07:21.910979 update_engine[1563]:
May 10 00:07:21.910979 update_engine[1563]:
May 10 00:07:21.910979 update_engine[1563]:
May 10 00:07:21.910979 update_engine[1563]:
May 10 00:07:21.910979 update_engine[1563]:
May 10 00:07:21.910979 update_engine[1563]:
May 10 00:07:21.910979 update_engine[1563]: I20250510 00:07:21.910876 1563 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 10 00:07:21.911356 update_engine[1563]: I20250510 00:07:21.911002 1563 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 10 00:07:21.911356 update_engine[1563]: I20250510 00:07:21.911159 1563 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 10 00:07:21.911531 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 10 00:07:21.912242 update_engine[1563]: E20250510 00:07:21.912191 1563 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 10 00:07:21.912319 update_engine[1563]: I20250510 00:07:21.912247 1563 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 10 00:07:21.912319 update_engine[1563]: I20250510 00:07:21.912281 1563 omaha_request_action.cc:617] Omaha request response:
May 10 00:07:21.912319 update_engine[1563]: I20250510 00:07:21.912287 1563 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 10 00:07:21.912319 update_engine[1563]: I20250510 00:07:21.912292 1563 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 10 00:07:21.912319 update_engine[1563]: I20250510 00:07:21.912297 1563 update_attempter.cc:306] Processing Done.
May 10 00:07:21.912319 update_engine[1563]: I20250510 00:07:21.912302 1563 update_attempter.cc:310] Error event sent.
May 10 00:07:21.912319 update_engine[1563]: I20250510 00:07:21.912310 1563 update_check_scheduler.cc:74] Next update check in 49m28s
May 10 00:07:21.912559 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 10 00:07:24.780024 containerd[1591]: time="2025-05-10T00:07:24.779713660Z" level=info msg="StopPodSandbox for \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\""
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.825 [WARNING][5444] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0", GenerateName:"calico-apiserver-849f9bb4b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca652ab2-74d7-4a4c-a866-d714bca54c18", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f9bb4b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9", Pod:"calico-apiserver-849f9bb4b4-rkt4m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81b6abd688f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.825 [INFO][5444] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.825 [INFO][5444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" iface="eth0" netns=""
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.825 [INFO][5444] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.825 [INFO][5444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.860 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0"
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.860 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.860 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.871 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0"
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.872 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0"
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.874 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:07:24.879155 containerd[1591]: 2025-05-10 00:07:24.875 [INFO][5444] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:07:24.882764 containerd[1591]: time="2025-05-10T00:07:24.880876108Z" level=info msg="TearDown network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\" successfully"
May 10 00:07:24.882764 containerd[1591]: time="2025-05-10T00:07:24.880921310Z" level=info msg="StopPodSandbox for \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\" returns successfully"
May 10 00:07:24.882764 containerd[1591]: time="2025-05-10T00:07:24.881582507Z" level=info msg="RemovePodSandbox for \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\""
May 10 00:07:24.882764 containerd[1591]: time="2025-05-10T00:07:24.881619789Z" level=info msg="Forcibly stopping sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\""
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.941 [WARNING][5470] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0", GenerateName:"calico-apiserver-849f9bb4b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca652ab2-74d7-4a4c-a866-d714bca54c18", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f9bb4b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"bb676bfd8621132b95930d86e28b39cc953c2f2f587210e38188db7f6a8641c9", Pod:"calico-apiserver-849f9bb4b4-rkt4m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81b6abd688f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.942 [INFO][5470] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.942 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" iface="eth0" netns=""
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.942 [INFO][5470] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.942 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.961 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0"
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.961 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.961 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.973 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0"
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.973 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" HandleID="k8s-pod-network.26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--rkt4m-eth0"
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.976 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:07:24.979489 containerd[1591]: 2025-05-10 00:07:24.977 [INFO][5470] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f"
May 10 00:07:24.979489 containerd[1591]: time="2025-05-10T00:07:24.979482013Z" level=info msg="TearDown network for sandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\" successfully"
May 10 00:07:24.983416 containerd[1591]: time="2025-05-10T00:07:24.983361789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:24.983566 containerd[1591]: time="2025-05-10T00:07:24.983440434Z" level=info msg="RemovePodSandbox \"26bba018d6d949d2a4fa770d8d69779d5bef50640378bc4679e53390744f5d9f\" returns successfully"
May 10 00:07:24.984753 containerd[1591]: time="2025-05-10T00:07:24.984314362Z" level=info msg="StopPodSandbox for \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\""
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.031 [WARNING][5495] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"871fa5f6-cf5f-424d-92ff-7537c13487e5", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de", Pod:"coredns-7db6d8ff4d-zk9wm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523edb034bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.031 [INFO][5495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624"
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.031 [INFO][5495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" iface="eth0" netns=""
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.031 [INFO][5495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624"
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.031 [INFO][5495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624"
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.080 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0"
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.081 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.081 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.091 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0"
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.091 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0"
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.093 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:07:25.097996 containerd[1591]: 2025-05-10 00:07:25.095 [INFO][5495] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624"
May 10 00:07:25.100009 containerd[1591]: time="2025-05-10T00:07:25.098375031Z" level=info msg="TearDown network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\" successfully"
May 10 00:07:25.100009 containerd[1591]: time="2025-05-10T00:07:25.098415954Z" level=info msg="StopPodSandbox for \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\" returns successfully"
May 10 00:07:25.100009 containerd[1591]: time="2025-05-10T00:07:25.099458972Z" level=info msg="RemovePodSandbox for \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\""
May 10 00:07:25.100009 containerd[1591]: time="2025-05-10T00:07:25.099489054Z" level=info msg="Forcibly stopping sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\""
May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.145 [WARNING][5520] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"871fa5f6-cf5f-424d-92ff-7537c13487e5", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"155f870f96c92d7ed48fa1b4ebac511e32e44ccda94990df373a7527d8e812de", Pod:"coredns-7db6d8ff4d-zk9wm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523edb034bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.146 [INFO][5520] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.146 [INFO][5520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" iface="eth0" netns="" May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.146 [INFO][5520] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.146 [INFO][5520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.165 [INFO][5527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.165 [INFO][5527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.165 [INFO][5527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.178 [WARNING][5527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.178 [INFO][5527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" HandleID="k8s-pod-network.1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zk9wm-eth0" May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.180 [INFO][5527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.183876 containerd[1591]: 2025-05-10 00:07:25.182 [INFO][5520] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624" May 10 00:07:25.184408 containerd[1591]: time="2025-05-10T00:07:25.183922386Z" level=info msg="TearDown network for sandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\" successfully" May 10 00:07:25.188154 containerd[1591]: time="2025-05-10T00:07:25.188109741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:07:25.188369 containerd[1591]: time="2025-05-10T00:07:25.188189545Z" level=info msg="RemovePodSandbox \"1fba9d0469a9f2a28309d9f284605b4a5588ec750c59d1a6c76f950fbc456624\" returns successfully" May 10 00:07:25.188954 containerd[1591]: time="2025-05-10T00:07:25.188875024Z" level=info msg="StopPodSandbox for \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\"" May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.229 [WARNING][5545] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca8a5621-5e04-45a0-a9d3-4c8113513a58", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6", Pod:"coredns-7db6d8ff4d-zhcs4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4630501ead", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.229 [INFO][5545] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.229 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" iface="eth0" netns="" May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.229 [INFO][5545] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.229 [INFO][5545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.253 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.254 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.254 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.265 [WARNING][5552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.266 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.268 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.271714 containerd[1591]: 2025-05-10 00:07:25.269 [INFO][5545] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:25.273352 containerd[1591]: time="2025-05-10T00:07:25.272660760Z" level=info msg="TearDown network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\" successfully" May 10 00:07:25.273352 containerd[1591]: time="2025-05-10T00:07:25.272705602Z" level=info msg="StopPodSandbox for \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\" returns successfully" May 10 00:07:25.273790 containerd[1591]: time="2025-05-10T00:07:25.273750941Z" level=info msg="RemovePodSandbox for \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\"" May 10 00:07:25.273882 containerd[1591]: time="2025-05-10T00:07:25.273806744Z" level=info msg="Forcibly stopping sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\"" May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.318 [WARNING][5570] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca8a5621-5e04-45a0-a9d3-4c8113513a58", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"a3c7b2f2d06016b823423ba6b6be7799627c6765a94bdde9ba6789ecbeb050a6", Pod:"coredns-7db6d8ff4d-zhcs4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4630501ead", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.319 [INFO][5570] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.319 [INFO][5570] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" iface="eth0" netns="" May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.319 [INFO][5570] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.319 [INFO][5570] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.340 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.340 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.340 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.355 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.355 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" HandleID="k8s-pod-network.3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" Workload="ci--4081--3--3--n--60bc3761e6-k8s-coredns--7db6d8ff4d--zhcs4-eth0" May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.357 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.360806 containerd[1591]: 2025-05-10 00:07:25.359 [INFO][5570] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e" May 10 00:07:25.360806 containerd[1591]: time="2025-05-10T00:07:25.360770938Z" level=info msg="TearDown network for sandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\" successfully" May 10 00:07:25.364921 containerd[1591]: time="2025-05-10T00:07:25.364721839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:07:25.364921 containerd[1591]: time="2025-05-10T00:07:25.364801484Z" level=info msg="RemovePodSandbox \"3affc32fb3d33e954ea9b2b79849791757e49aeb9866a5d0cea925c0c172110e\" returns successfully" May 10 00:07:25.365731 containerd[1591]: time="2025-05-10T00:07:25.365457601Z" level=info msg="StopPodSandbox for \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\"" May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.419 [WARNING][5595] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0", GenerateName:"calico-kube-controllers-55c46f8bc8-", Namespace:"calico-system", SelfLink:"", UID:"934e1bfa-39db-47d3-8258-500edca573e3", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c46f8bc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9", Pod:"calico-kube-controllers-55c46f8bc8-l65kq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5edd2a81f86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.419 [INFO][5595] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.420 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" iface="eth0" netns="" May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.420 [INFO][5595] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.420 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.441 [INFO][5602] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.441 [INFO][5602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.441 [INFO][5602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.454 [WARNING][5602] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.454 [INFO][5602] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.456 [INFO][5602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.459492 containerd[1591]: 2025-05-10 00:07:25.457 [INFO][5595] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:25.460521 containerd[1591]: time="2025-05-10T00:07:25.460166429Z" level=info msg="TearDown network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\" successfully" May 10 00:07:25.460521 containerd[1591]: time="2025-05-10T00:07:25.460201271Z" level=info msg="StopPodSandbox for \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\" returns successfully" May 10 00:07:25.460929 containerd[1591]: time="2025-05-10T00:07:25.460889109Z" level=info msg="RemovePodSandbox for \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\"" May 10 00:07:25.460929 containerd[1591]: time="2025-05-10T00:07:25.460932872Z" level=info msg="Forcibly stopping sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\"" May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.507 [WARNING][5621] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0", GenerateName:"calico-kube-controllers-55c46f8bc8-", Namespace:"calico-system", SelfLink:"", UID:"934e1bfa-39db-47d3-8258-500edca573e3", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c46f8bc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"f98c014fbbe4014c469777a3714661548133dd189bde031b206da531eb360da9", Pod:"calico-kube-controllers-55c46f8bc8-l65kq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5edd2a81f86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.508 [INFO][5621] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.508 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" iface="eth0" netns="" May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.508 [INFO][5621] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.508 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.534 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.534 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.534 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.545 [WARNING][5628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.545 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" HandleID="k8s-pod-network.2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--kube--controllers--55c46f8bc8--l65kq-eth0" May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.547 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:25.551041 containerd[1591]: 2025-05-10 00:07:25.549 [INFO][5621] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5" May 10 00:07:25.551976 containerd[1591]: time="2025-05-10T00:07:25.551086805Z" level=info msg="TearDown network for sandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\" successfully" May 10 00:07:25.554918 containerd[1591]: time="2025-05-10T00:07:25.554846735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:07:25.554918 containerd[1591]: time="2025-05-10T00:07:25.554919020Z" level=info msg="RemovePodSandbox \"2d3bb549488f1465603ebc4adf7861737cb05a901a7950d8e394ac141d8a68e5\" returns successfully"
May 10 00:07:25.555891 containerd[1591]: time="2025-05-10T00:07:25.555463570Z" level=info msg="StopPodSandbox for \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\""
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.596 [WARNING][5647] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0766211e-6e96-4ed6-b977-d34cdc94d220", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d", Pod:"csi-node-driver-4hjhz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9adb5875ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.597 [INFO][5647] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.597 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" iface="eth0" netns=""
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.597 [INFO][5647] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.597 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.620 [INFO][5654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0"
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.620 [INFO][5654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.620 [INFO][5654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.633 [WARNING][5654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0"
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.633 [INFO][5654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0"
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.635 [INFO][5654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:07:25.638649 containerd[1591]: 2025-05-10 00:07:25.637 [INFO][5647] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:07:25.638649 containerd[1591]: time="2025-05-10T00:07:25.638439181Z" level=info msg="TearDown network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\" successfully"
May 10 00:07:25.638649 containerd[1591]: time="2025-05-10T00:07:25.638465742Z" level=info msg="StopPodSandbox for \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\" returns successfully"
May 10 00:07:25.639993 containerd[1591]: time="2025-05-10T00:07:25.639524921Z" level=info msg="RemovePodSandbox for \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\""
May 10 00:07:25.639993 containerd[1591]: time="2025-05-10T00:07:25.639562244Z" level=info msg="Forcibly stopping sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\""
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.681 [WARNING][5672] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0766211e-6e96-4ed6-b977-d34cdc94d220", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"eedec18dba0609f7ba2f4700582fb62074e987c821263d4c2bf0f0d7ee65726d", Pod:"csi-node-driver-4hjhz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9adb5875ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.681 [INFO][5672] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.681 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" iface="eth0" netns=""
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.681 [INFO][5672] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.681 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.706 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0"
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.707 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.707 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.721 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0"
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.721 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" HandleID="k8s-pod-network.d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5" Workload="ci--4081--3--3--n--60bc3761e6-k8s-csi--node--driver--4hjhz-eth0"
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.723 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:07:25.727207 containerd[1591]: 2025-05-10 00:07:25.725 [INFO][5672] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5"
May 10 00:07:25.727207 containerd[1591]: time="2025-05-10T00:07:25.726953702Z" level=info msg="TearDown network for sandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\" successfully"
May 10 00:07:25.733780 containerd[1591]: time="2025-05-10T00:07:25.733600594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:25.733780 containerd[1591]: time="2025-05-10T00:07:25.733679599Z" level=info msg="RemovePodSandbox \"d63c2089dce95ba446be9618d4d02418a5e71a0ac71a2c014a23b09390d4cdf5\" returns successfully"
May 10 00:07:25.734601 containerd[1591]: time="2025-05-10T00:07:25.734243870Z" level=info msg="StopPodSandbox for \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\""
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.783 [WARNING][5698] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0", GenerateName:"calico-apiserver-849f9bb4b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8370026-6156-4314-9b6f-165657c0861d", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f9bb4b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a", Pod:"calico-apiserver-849f9bb4b4-mnkn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29d4f34faf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.783 [INFO][5698] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.783 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" iface="eth0" netns=""
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.783 [INFO][5698] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.783 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.805 [INFO][5706] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0"
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.805 [INFO][5706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.805 [INFO][5706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.816 [WARNING][5706] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0"
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.816 [INFO][5706] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0"
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.818 [INFO][5706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:07:25.821888 containerd[1591]: 2025-05-10 00:07:25.820 [INFO][5698] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:07:25.823055 containerd[1591]: time="2025-05-10T00:07:25.822676787Z" level=info msg="TearDown network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\" successfully"
May 10 00:07:25.823055 containerd[1591]: time="2025-05-10T00:07:25.822725709Z" level=info msg="StopPodSandbox for \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\" returns successfully"
May 10 00:07:25.823703 containerd[1591]: time="2025-05-10T00:07:25.823352064Z" level=info msg="RemovePodSandbox for \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\""
May 10 00:07:25.823703 containerd[1591]: time="2025-05-10T00:07:25.823379586Z" level=info msg="Forcibly stopping sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\""
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.863 [WARNING][5724] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0", GenerateName:"calico-apiserver-849f9bb4b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8370026-6156-4314-9b6f-165657c0861d", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f9bb4b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-60bc3761e6", ContainerID:"a5e7f3717c96b4e66795cbc98681e62d95c83a929d1f9abe8d5c3132c409bd3a", Pod:"calico-apiserver-849f9bb4b4-mnkn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29d4f34faf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.863 [INFO][5724] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.863 [INFO][5724] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" iface="eth0" netns=""
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.863 [INFO][5724] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.863 [INFO][5724] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.888 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0"
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.888 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.888 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.899 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0"
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.900 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" HandleID="k8s-pod-network.9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092" Workload="ci--4081--3--3--n--60bc3761e6-k8s-calico--apiserver--849f9bb4b4--mnkn2-eth0"
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.902 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:07:25.908470 containerd[1591]: 2025-05-10 00:07:25.905 [INFO][5724] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092"
May 10 00:07:25.908470 containerd[1591]: time="2025-05-10T00:07:25.907218165Z" level=info msg="TearDown network for sandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\" successfully"
May 10 00:07:25.912029 containerd[1591]: time="2025-05-10T00:07:25.911925069Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:25.912197 containerd[1591]: time="2025-05-10T00:07:25.912078037Z" level=info msg="RemovePodSandbox \"9653f3e50090731920b66c3558844262d4fd050f8d6d624b77c19919948fe092\" returns successfully"
May 10 00:07:25.930548 kubelet[2970]: I0510 00:07:25.930236 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 00:07:25.956989 kubelet[2970]: I0510 00:07:25.956891 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4hjhz" podStartSLOduration=34.601815342 podStartE2EDuration="40.956870388s" podCreationTimestamp="2025-05-10 00:06:45 +0000 UTC" firstStartedPulling="2025-05-10 00:07:08.237159203 +0000 UTC m=+43.588272696" lastFinishedPulling="2025-05-10 00:07:14.592214209 +0000 UTC m=+49.943327742" observedRunningTime="2025-05-10 00:07:15.139317641 +0000 UTC m=+50.490431174" watchObservedRunningTime="2025-05-10 00:07:25.956870388 +0000 UTC m=+61.307983921"
May 10 00:09:02.962313 systemd[1]: run-containerd-runc-k8s.io-1b6a1e3f212939d873d8300165b35ba1b161601c15ca9d6f67274191c3a91cc2-runc.YJiO6t.mount: Deactivated successfully.
May 10 00:10:54.207755 systemd[1]: run-containerd-runc-k8s.io-84ec593fc48618e1f1a143587177708c8f376b41e8ecf5e482fb152dc160eacf-runc.6WcbAr.mount: Deactivated successfully.
May 10 00:11:06.628637 systemd[1]: Started sshd@7-88.99.34.22:22-147.75.109.163:55706.service - OpenSSH per-connection server daemon (147.75.109.163:55706).
May 10 00:11:07.629979 sshd[6218]: Accepted publickey for core from 147.75.109.163 port 55706 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:07.632650 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:07.639774 systemd-logind[1559]: New session 8 of user core.
May 10 00:11:07.645626 systemd[1]: Started session-8.scope - Session 8 of User core.
May 10 00:11:08.428607 sshd[6218]: pam_unix(sshd:session): session closed for user core
May 10 00:11:08.434172 systemd[1]: sshd@7-88.99.34.22:22-147.75.109.163:55706.service: Deactivated successfully.
May 10 00:11:08.437740 systemd[1]: session-8.scope: Deactivated successfully.
May 10 00:11:08.438130 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit.
May 10 00:11:08.441039 systemd-logind[1559]: Removed session 8.
May 10 00:11:13.605795 systemd[1]: Started sshd@8-88.99.34.22:22-147.75.109.163:48144.service - OpenSSH per-connection server daemon (147.75.109.163:48144).
May 10 00:11:14.602304 sshd[6237]: Accepted publickey for core from 147.75.109.163 port 48144 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:14.604542 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:14.613374 systemd-logind[1559]: New session 9 of user core.
May 10 00:11:14.619796 systemd[1]: Started session-9.scope - Session 9 of User core.
May 10 00:11:15.371561 sshd[6237]: pam_unix(sshd:session): session closed for user core
May 10 00:11:15.376562 systemd[1]: sshd@8-88.99.34.22:22-147.75.109.163:48144.service: Deactivated successfully.
May 10 00:11:15.380585 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit.
May 10 00:11:15.381234 systemd[1]: session-9.scope: Deactivated successfully.
May 10 00:11:15.383478 systemd-logind[1559]: Removed session 9.
May 10 00:11:20.541537 systemd[1]: Started sshd@9-88.99.34.22:22-147.75.109.163:40570.service - OpenSSH per-connection server daemon (147.75.109.163:40570).
May 10 00:11:21.533030 sshd[6252]: Accepted publickey for core from 147.75.109.163 port 40570 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:21.535938 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:21.542107 systemd-logind[1559]: New session 10 of user core.
May 10 00:11:21.546652 systemd[1]: Started session-10.scope - Session 10 of User core.
May 10 00:11:22.302395 sshd[6252]: pam_unix(sshd:session): session closed for user core
May 10 00:11:22.309327 systemd[1]: sshd@9-88.99.34.22:22-147.75.109.163:40570.service: Deactivated successfully.
May 10 00:11:22.312975 systemd[1]: session-10.scope: Deactivated successfully.
May 10 00:11:22.314141 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit.
May 10 00:11:22.315231 systemd-logind[1559]: Removed session 10.
May 10 00:11:22.472691 systemd[1]: Started sshd@10-88.99.34.22:22-147.75.109.163:40584.service - OpenSSH per-connection server daemon (147.75.109.163:40584).
May 10 00:11:23.469128 sshd[6267]: Accepted publickey for core from 147.75.109.163 port 40584 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:23.470814 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:23.476653 systemd-logind[1559]: New session 11 of user core.
May 10 00:11:23.484888 systemd[1]: Started session-11.scope - Session 11 of User core.
May 10 00:11:24.318011 sshd[6267]: pam_unix(sshd:session): session closed for user core
May 10 00:11:24.322283 systemd[1]: sshd@10-88.99.34.22:22-147.75.109.163:40584.service: Deactivated successfully.
May 10 00:11:24.327666 systemd[1]: session-11.scope: Deactivated successfully.
May 10 00:11:24.328864 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit.
May 10 00:11:24.330870 systemd-logind[1559]: Removed session 11.
May 10 00:11:24.492560 systemd[1]: Started sshd@11-88.99.34.22:22-147.75.109.163:40588.service - OpenSSH per-connection server daemon (147.75.109.163:40588).
May 10 00:11:25.501471 sshd[6302]: Accepted publickey for core from 147.75.109.163 port 40588 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:25.503553 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:25.510413 systemd-logind[1559]: New session 12 of user core.
May 10 00:11:25.518929 systemd[1]: Started session-12.scope - Session 12 of User core.
May 10 00:11:26.282612 sshd[6302]: pam_unix(sshd:session): session closed for user core
May 10 00:11:26.288677 systemd[1]: sshd@11-88.99.34.22:22-147.75.109.163:40588.service: Deactivated successfully.
May 10 00:11:26.293834 systemd[1]: session-12.scope: Deactivated successfully.
May 10 00:11:26.295034 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit.
May 10 00:11:26.296222 systemd-logind[1559]: Removed session 12.
May 10 00:11:31.453652 systemd[1]: Started sshd@12-88.99.34.22:22-147.75.109.163:60698.service - OpenSSH per-connection server daemon (147.75.109.163:60698).
May 10 00:11:32.464032 sshd[6318]: Accepted publickey for core from 147.75.109.163 port 60698 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:32.465843 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:32.472289 systemd-logind[1559]: New session 13 of user core.
May 10 00:11:32.474642 systemd[1]: Started session-13.scope - Session 13 of User core.
May 10 00:11:33.271543 sshd[6318]: pam_unix(sshd:session): session closed for user core
May 10 00:11:33.277333 systemd[1]: sshd@12-88.99.34.22:22-147.75.109.163:60698.service: Deactivated successfully.
May 10 00:11:33.282830 systemd[1]: session-13.scope: Deactivated successfully.
May 10 00:11:33.283879 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit.
May 10 00:11:33.285025 systemd-logind[1559]: Removed session 13.
May 10 00:11:33.439798 systemd[1]: Started sshd@13-88.99.34.22:22-147.75.109.163:60700.service - OpenSSH per-connection server daemon (147.75.109.163:60700).
May 10 00:11:34.431187 sshd[6354]: Accepted publickey for core from 147.75.109.163 port 60700 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:34.433240 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:34.438307 systemd-logind[1559]: New session 14 of user core.
May 10 00:11:34.441568 systemd[1]: Started session-14.scope - Session 14 of User core.
May 10 00:11:35.312108 sshd[6354]: pam_unix(sshd:session): session closed for user core
May 10 00:11:35.316588 systemd[1]: sshd@13-88.99.34.22:22-147.75.109.163:60700.service: Deactivated successfully.
May 10 00:11:35.323026 systemd[1]: session-14.scope: Deactivated successfully.
May 10 00:11:35.325324 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit.
May 10 00:11:35.327629 systemd-logind[1559]: Removed session 14.
May 10 00:11:35.488731 systemd[1]: Started sshd@14-88.99.34.22:22-147.75.109.163:60708.service - OpenSSH per-connection server daemon (147.75.109.163:60708).
May 10 00:11:36.498917 sshd[6366]: Accepted publickey for core from 147.75.109.163 port 60708 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:36.502427 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:36.511874 systemd-logind[1559]: New session 15 of user core.
May 10 00:11:36.515730 systemd[1]: Started session-15.scope - Session 15 of User core.
May 10 00:11:39.164449 sshd[6366]: pam_unix(sshd:session): session closed for user core
May 10 00:11:39.172623 systemd[1]: sshd@14-88.99.34.22:22-147.75.109.163:60708.service: Deactivated successfully.
May 10 00:11:39.182420 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit.
May 10 00:11:39.184873 systemd[1]: session-15.scope: Deactivated successfully.
May 10 00:11:39.189818 systemd-logind[1559]: Removed session 15.
May 10 00:11:39.335769 systemd[1]: Started sshd@15-88.99.34.22:22-147.75.109.163:45958.service - OpenSSH per-connection server daemon (147.75.109.163:45958).
May 10 00:11:40.358590 sshd[6388]: Accepted publickey for core from 147.75.109.163 port 45958 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:40.360162 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:40.364995 systemd-logind[1559]: New session 16 of user core.
May 10 00:11:40.375756 systemd[1]: Started session-16.scope - Session 16 of User core.
May 10 00:11:41.279673 sshd[6388]: pam_unix(sshd:session): session closed for user core
May 10 00:11:41.284624 systemd[1]: sshd@15-88.99.34.22:22-147.75.109.163:45958.service: Deactivated successfully.
May 10 00:11:41.288768 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:11:41.289740 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit.
May 10 00:11:41.292090 systemd-logind[1559]: Removed session 16.
May 10 00:11:41.454166 systemd[1]: Started sshd@16-88.99.34.22:22-147.75.109.163:45966.service - OpenSSH per-connection server daemon (147.75.109.163:45966).
May 10 00:11:42.462412 sshd[6402]: Accepted publickey for core from 147.75.109.163 port 45966 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:42.464611 sshd[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:42.471332 systemd-logind[1559]: New session 17 of user core.
May 10 00:11:42.477857 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 00:11:43.248770 sshd[6402]: pam_unix(sshd:session): session closed for user core
May 10 00:11:43.255274 systemd[1]: sshd@16-88.99.34.22:22-147.75.109.163:45966.service: Deactivated successfully.
May 10 00:11:43.259028 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit.
May 10 00:11:43.260103 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:11:43.262128 systemd-logind[1559]: Removed session 17.
May 10 00:11:47.109412 kubelet[2970]: I0510 00:11:47.109275 2970 log.go:245] http: TLS handshake error from 66.240.219.146:56290: EOF
May 10 00:11:47.430654 kubelet[2970]: I0510 00:11:47.430437 2970 log.go:245] http: TLS handshake error from 66.240.219.146:56450: EOF
May 10 00:11:47.749808 kubelet[2970]: I0510 00:11:47.748833 2970 log.go:245] http: TLS handshake error from 66.240.219.146:56552: EOF
May 10 00:11:47.907512 kubelet[2970]: I0510 00:11:47.907368 2970 log.go:245] http: TLS handshake error from 66.240.219.146:56624: tls: client requested unsupported application protocols ([http/0.9 http/1.0 spdy/1 spdy/2 spdy/3 h2c hq])
May 10 00:11:48.221854 kubelet[2970]: I0510 00:11:48.221725 2970 log.go:245] http: TLS handshake error from 66.240.219.146:56690: tls: client requested unsupported application protocols ([hq h2c spdy/3 spdy/2 spdy/1 http/1.0 http/0.9])
May 10 00:11:48.418832 systemd[1]: Started sshd@17-88.99.34.22:22-147.75.109.163:46090.service - OpenSSH per-connection server daemon (147.75.109.163:46090).
May 10 00:11:48.562317 kubelet[2970]: I0510 00:11:48.562055 2970 log.go:245] http: TLS handshake error from 66.240.219.146:56790: tls: client offered only unsupported versions: [302 301]
May 10 00:11:49.040187 kubelet[2970]: I0510 00:11:49.040104 2970 log.go:245] http: TLS handshake error from 66.240.219.146:57010: EOF
May 10 00:11:49.362614 kubelet[2970]: I0510 00:11:49.362488 2970 log.go:245] http: TLS handshake error from 66.240.219.146:57238: EOF
May 10 00:11:49.410584 sshd[6420]: Accepted publickey for core from 147.75.109.163 port 46090 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:49.413478 sshd[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:49.418983 systemd-logind[1559]: New session 18 of user core.
May 10 00:11:49.424800 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 00:11:49.689182 kubelet[2970]: I0510 00:11:49.689022 2970 log.go:245] http: TLS handshake error from 66.240.219.146:57360: EOF
May 10 00:11:50.009527 kubelet[2970]: I0510 00:11:50.009242 2970 log.go:245] http: TLS handshake error from 66.240.219.146:57450: EOF
May 10 00:11:50.184917 sshd[6420]: pam_unix(sshd:session): session closed for user core
May 10 00:11:50.189960 systemd[1]: sshd@17-88.99.34.22:22-147.75.109.163:46090.service: Deactivated successfully.
May 10 00:11:50.193053 kubelet[2970]: I0510 00:11:50.192689 2970 log.go:245] http: TLS handshake error from 66.240.219.146:57552: tls: client offered only unsupported versions: [301]
May 10 00:11:50.194936 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:11:50.195228 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit.
May 10 00:11:50.197232 systemd-logind[1559]: Removed session 18.
May 10 00:11:50.586354 kubelet[2970]: I0510 00:11:50.586237 2970 log.go:245] http: TLS handshake error from 66.240.219.146:57612: tls: unsupported SSLv2 handshake received
May 10 00:11:50.932927 kubelet[2970]: I0510 00:11:50.932729 2970 log.go:245] http: TLS handshake error from 66.240.219.146:57674: tls: client offered only unsupported versions: []
May 10 00:11:51.284597 kubelet[2970]: I0510 00:11:51.284406 2970 log.go:245] http: TLS handshake error from 66.240.219.146:57732: tls: client offered only unsupported versions: [302 301]
May 10 00:11:54.232011 systemd[1]: run-containerd-runc-k8s.io-84ec593fc48618e1f1a143587177708c8f376b41e8ecf5e482fb152dc160eacf-runc.2T9w3B.mount: Deactivated successfully.
May 10 00:11:55.364322 systemd[1]: Started sshd@18-88.99.34.22:22-147.75.109.163:46092.service - OpenSSH per-connection server daemon (147.75.109.163:46092).
May 10 00:11:56.380586 sshd[6473]: Accepted publickey for core from 147.75.109.163 port 46092 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:11:56.382696 sshd[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:11:56.388317 systemd-logind[1559]: New session 19 of user core.
May 10 00:11:56.396633 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 00:11:57.155379 sshd[6473]: pam_unix(sshd:session): session closed for user core
May 10 00:11:57.162640 systemd[1]: sshd@18-88.99.34.22:22-147.75.109.163:46092.service: Deactivated successfully.
May 10 00:11:57.167309 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:11:57.168586 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit.
May 10 00:11:57.169765 systemd-logind[1559]: Removed session 19.
May 10 00:12:02.959866 systemd[1]: run-containerd-runc-k8s.io-1b6a1e3f212939d873d8300165b35ba1b161601c15ca9d6f67274191c3a91cc2-runc.1In54Y.mount: Deactivated successfully.
May 10 00:12:13.414342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f-rootfs.mount: Deactivated successfully.
May 10 00:12:13.416128 containerd[1591]: time="2025-05-10T00:12:13.416044989Z" level=info msg="shim disconnected" id=945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f namespace=k8s.io
May 10 00:12:13.416558 containerd[1591]: time="2025-05-10T00:12:13.416533852Z" level=warning msg="cleaning up after shim disconnected" id=945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f namespace=k8s.io
May 10 00:12:13.416639 containerd[1591]: time="2025-05-10T00:12:13.416625297Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:12:13.479143 containerd[1591]: time="2025-05-10T00:12:13.479067640Z" level=info msg="shim disconnected" id=25ed6363d976b70143d0aa5da416f7bee442a216566844f1c61b0fa7bd2a4206 namespace=k8s.io
May 10 00:12:13.479528 containerd[1591]: time="2025-05-10T00:12:13.479204407Z" level=warning msg="cleaning up after shim disconnected" id=25ed6363d976b70143d0aa5da416f7bee442a216566844f1c61b0fa7bd2a4206 namespace=k8s.io
May 10 00:12:13.479528 containerd[1591]: time="2025-05-10T00:12:13.479216567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:12:13.484442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25ed6363d976b70143d0aa5da416f7bee442a216566844f1c61b0fa7bd2a4206-rootfs.mount: Deactivated successfully.
May 10 00:12:13.493308 containerd[1591]: time="2025-05-10T00:12:13.492599678Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:12:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 10 00:12:13.637622 kubelet[2970]: E0510 00:12:13.637574 2970 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56612->10.0.0.2:2379: read: connection timed out"
May 10 00:12:13.665120 containerd[1591]: time="2025-05-10T00:12:13.665060728Z" level=info msg="shim disconnected" id=aab1caffa1c536ea89e7da2c2c869f6d0b987466425467c05a164c79429cf773 namespace=k8s.io
May 10 00:12:13.665861 containerd[1591]: time="2025-05-10T00:12:13.665293979Z" level=warning msg="cleaning up after shim disconnected" id=aab1caffa1c536ea89e7da2c2c869f6d0b987466425467c05a164c79429cf773 namespace=k8s.io
May 10 00:12:13.665861 containerd[1591]: time="2025-05-10T00:12:13.665308340Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:12:13.672392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aab1caffa1c536ea89e7da2c2c869f6d0b987466425467c05a164c79429cf773-rootfs.mount: Deactivated successfully.
May 10 00:12:13.934589 kubelet[2970]: I0510 00:12:13.934476 2970 scope.go:117] "RemoveContainer" containerID="25ed6363d976b70143d0aa5da416f7bee442a216566844f1c61b0fa7bd2a4206"
May 10 00:12:13.935325 kubelet[2970]: I0510 00:12:13.934774 2970 scope.go:117] "RemoveContainer" containerID="aab1caffa1c536ea89e7da2c2c869f6d0b987466425467c05a164c79429cf773"
May 10 00:12:13.936226 kubelet[2970]: I0510 00:12:13.936011 2970 scope.go:117] "RemoveContainer" containerID="945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f"
May 10 00:12:13.938075 containerd[1591]: time="2025-05-10T00:12:13.938039437Z" level=info msg="CreateContainer within sandbox \"b2c863642668b43ee6fe2d36cf53633ec7dc667a615ef12716bd0aa1f3380d88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 10 00:12:13.939515 containerd[1591]: time="2025-05-10T00:12:13.939120848Z" level=info msg="CreateContainer within sandbox \"a7edd37c58646d756c6926d7c993c78f6314fff1fce5f0d007ff4b6a799e5829\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 10 00:12:13.940492 containerd[1591]: time="2025-05-10T00:12:13.940462911Z" level=info msg="CreateContainer within sandbox \"588d3493a6f8076dc8363e4653f8a646c35075d9fc5b3884147392602f156cdd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 10 00:12:13.976965 containerd[1591]: time="2025-05-10T00:12:13.976916470Z" level=info msg="CreateContainer within sandbox \"b2c863642668b43ee6fe2d36cf53633ec7dc667a615ef12716bd0aa1f3380d88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4c0b2ec1bb1ee0e9720ecb70ac50682dc805f5328f75a1bf959313913f45ae39\""
May 10 00:12:13.977596 containerd[1591]: time="2025-05-10T00:12:13.977566780Z" level=info msg="StartContainer for \"4c0b2ec1bb1ee0e9720ecb70ac50682dc805f5328f75a1bf959313913f45ae39\""
May 10 00:12:13.978246 containerd[1591]: time="2025-05-10T00:12:13.978115326Z" level=info msg="CreateContainer within sandbox \"a7edd37c58646d756c6926d7c993c78f6314fff1fce5f0d007ff4b6a799e5829\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"522a8cdd895413839ddc85c028fa3e76dfc9f68e423c48d3088042e9dba42cce\""
May 10 00:12:13.978969 containerd[1591]: time="2025-05-10T00:12:13.978788918Z" level=info msg="StartContainer for \"522a8cdd895413839ddc85c028fa3e76dfc9f68e423c48d3088042e9dba42cce\""
May 10 00:12:13.980865 containerd[1591]: time="2025-05-10T00:12:13.980744530Z" level=info msg="CreateContainer within sandbox \"588d3493a6f8076dc8363e4653f8a646c35075d9fc5b3884147392602f156cdd\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1c04b9954913bb9b24ca3dc7c2bb2d945bc1d50d4551b7ae64de6331f7bfd022\""
May 10 00:12:13.983555 containerd[1591]: time="2025-05-10T00:12:13.983476779Z" level=info msg="StartContainer for \"1c04b9954913bb9b24ca3dc7c2bb2d945bc1d50d4551b7ae64de6331f7bfd022\""
May 10 00:12:14.076289 containerd[1591]: time="2025-05-10T00:12:14.075819134Z" level=info msg="StartContainer for \"4c0b2ec1bb1ee0e9720ecb70ac50682dc805f5328f75a1bf959313913f45ae39\" returns successfully"
May 10 00:12:14.081559 containerd[1591]: time="2025-05-10T00:12:14.078863998Z" level=info msg="StartContainer for \"522a8cdd895413839ddc85c028fa3e76dfc9f68e423c48d3088042e9dba42cce\" returns successfully"
May 10 00:12:14.106833 containerd[1591]: time="2025-05-10T00:12:14.106063481Z" level=info msg="StartContainer for \"1c04b9954913bb9b24ca3dc7c2bb2d945bc1d50d4551b7ae64de6331f7bfd022\" returns successfully"
May 10 00:12:17.312946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c04b9954913bb9b24ca3dc7c2bb2d945bc1d50d4551b7ae64de6331f7bfd022-rootfs.mount: Deactivated successfully.
May 10 00:12:17.314885 containerd[1591]: time="2025-05-10T00:12:17.314666601Z" level=info msg="shim disconnected" id=1c04b9954913bb9b24ca3dc7c2bb2d945bc1d50d4551b7ae64de6331f7bfd022 namespace=k8s.io
May 10 00:12:17.314885 containerd[1591]: time="2025-05-10T00:12:17.314881251Z" level=warning msg="cleaning up after shim disconnected" id=1c04b9954913bb9b24ca3dc7c2bb2d945bc1d50d4551b7ae64de6331f7bfd022 namespace=k8s.io
May 10 00:12:17.314885 containerd[1591]: time="2025-05-10T00:12:17.314892411Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:12:17.964861 kubelet[2970]: I0510 00:12:17.964199 2970 scope.go:117] "RemoveContainer" containerID="945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f"
May 10 00:12:17.964861 kubelet[2970]: I0510 00:12:17.964540 2970 scope.go:117] "RemoveContainer" containerID="1c04b9954913bb9b24ca3dc7c2bb2d945bc1d50d4551b7ae64de6331f7bfd022"
May 10 00:12:17.964861 kubelet[2970]: E0510 00:12:17.964747 2970 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-797db67f8-2b2n5_tigera-operator(caed5c12-766c-4667-9634-21b7e7a71252)\"" pod="tigera-operator/tigera-operator-797db67f8-2b2n5" podUID="caed5c12-766c-4667-9634-21b7e7a71252"
May 10 00:12:17.965822 containerd[1591]: time="2025-05-10T00:12:17.965787841Z" level=info msg="RemoveContainer for \"945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f\""
May 10 00:12:17.969469 containerd[1591]: time="2025-05-10T00:12:17.969433533Z" level=info msg="RemoveContainer for \"945429b329e421b46b84df58f876e05dc7cb86fdebf538d1e0cb21c53cd4165f\" returns successfully"
May 10 00:12:18.504288 kubelet[2970]: E0510 00:12:18.501823 2970 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56408->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-n-60bc3761e6.183e02057c8ecbb9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-n-60bc3761e6,UID:52469b807676a11b636599956e9fbac6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-60bc3761e6,},FirstTimestamp:2025-05-10 00:12:08.054156217 +0000 UTC m=+343.405269790,LastTimestamp:2025-05-10 00:12:08.054156217 +0000 UTC m=+343.405269790,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-60bc3761e6,}"