Jul 12 00:11:38.909037 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:11:38.909062 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:11:38.909073 kernel: KASLR enabled
Jul 12 00:11:38.909079 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jul 12 00:11:38.909085 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jul 12 00:11:38.909090 kernel: random: crng init done
Jul 12 00:11:38.909098 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:11:38.909104 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jul 12 00:11:38.909111 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:11:38.909119 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909125 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909131 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909137 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909143 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909150 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909158 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909165 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909171 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:11:38.909177 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 12 00:11:38.909183 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jul 12 00:11:38.909190 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:11:38.909196 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jul 12 00:11:38.909202 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jul 12 00:11:38.909209 kernel: Zone ranges:
Jul 12 00:11:38.909215 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 12 00:11:38.909223 kernel: DMA32 empty
Jul 12 00:11:38.909229 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jul 12 00:11:38.909236 kernel: Movable zone start for each node
Jul 12 00:11:38.909242 kernel: Early memory node ranges
Jul 12 00:11:38.909249 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jul 12 00:11:38.909255 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jul 12 00:11:38.909261 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jul 12 00:11:38.909267 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jul 12 00:11:38.909274 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jul 12 00:11:38.909280 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jul 12 00:11:38.909286 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jul 12 00:11:38.909292 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jul 12 00:11:38.909300 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jul 12 00:11:38.909306 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:11:38.909312 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:11:38.909322 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:11:38.909328 kernel: psci: Trusted OS migration not required
Jul 12 00:11:38.909335 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:11:38.909343 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:11:38.909350 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:11:38.909357 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:11:38.909363 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:11:38.909370 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:11:38.909376 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:11:38.909383 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:11:38.909389 kernel: CPU features: detected: Spectre-v4
Jul 12 00:11:38.909396 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:11:38.909403 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:11:38.909411 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:11:38.909417 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:11:38.909424 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:11:38.909431 kernel: alternatives: applying boot alternatives
Jul 12 00:11:38.909439 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:11:38.909446 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:11:38.909452 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:11:38.909459 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:11:38.909466 kernel: Fallback order for Node 0: 0
Jul 12 00:11:38.909472 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jul 12 00:11:38.909480 kernel: Policy zone: Normal
Jul 12 00:11:38.909533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:11:38.909541 kernel: software IO TLB: area num 2.
Jul 12 00:11:38.909548 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jul 12 00:11:38.909555 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
Jul 12 00:11:38.909562 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:11:38.909569 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:11:38.909576 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:11:38.909583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:11:38.909590 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:11:38.909597 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:11:38.909603 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:11:38.909612 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:11:38.909619 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:11:38.909625 kernel: GICv3: 256 SPIs implemented
Jul 12 00:11:38.909632 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:11:38.909638 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:11:38.909645 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:11:38.909652 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:11:38.909659 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:11:38.909665 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:11:38.909672 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:11:38.909679 kernel: GICv3: using LPI property table @0x00000001000e0000
Jul 12 00:11:38.909686 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jul 12 00:11:38.909694 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:11:38.909701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:11:38.909708 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:11:38.909730 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:11:38.909737 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:11:38.909744 kernel: Console: colour dummy device 80x25
Jul 12 00:11:38.909751 kernel: ACPI: Core revision 20230628
Jul 12 00:11:38.909758 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:11:38.909765 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:11:38.909772 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:11:38.909781 kernel: landlock: Up and running.
Jul 12 00:11:38.909788 kernel: SELinux: Initializing.
Jul 12 00:11:38.909795 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:11:38.909802 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:11:38.909809 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:11:38.909816 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:11:38.909823 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:11:38.909830 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:11:38.909836 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:11:38.909845 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:11:38.909852 kernel: Remapping and enabling EFI services.
Jul 12 00:11:38.909859 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:11:38.909865 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:11:38.909873 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:11:38.909880 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jul 12 00:11:38.909886 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:11:38.909893 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:11:38.909901 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:11:38.909907 kernel: SMP: Total of 2 processors activated.
Jul 12 00:11:38.909916 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:11:38.909923 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:11:38.909935 kernel: CPU features: detected: Common not Private translations
Jul 12 00:11:38.909943 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:11:38.909950 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:11:38.909958 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:11:38.911993 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:11:38.912027 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:11:38.912036 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:11:38.912049 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:11:38.912057 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:11:38.912064 kernel: alternatives: applying system-wide alternatives
Jul 12 00:11:38.912072 kernel: devtmpfs: initialized
Jul 12 00:11:38.912079 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:11:38.912087 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:11:38.912094 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:11:38.912103 kernel: SMBIOS 3.0.0 present.
Jul 12 00:11:38.912111 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jul 12 00:11:38.912118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:11:38.912126 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:11:38.912133 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:11:38.912141 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:11:38.912148 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:11:38.912155 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jul 12 00:11:38.912163 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:11:38.912172 kernel: cpuidle: using governor menu
Jul 12 00:11:38.912179 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:11:38.912186 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:11:38.912194 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:11:38.912201 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:11:38.912208 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:11:38.912215 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:11:38.912223 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:11:38.912230 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:11:38.912239 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:11:38.912246 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:11:38.912254 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:11:38.912261 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:11:38.912268 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:11:38.912276 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:11:38.912283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:11:38.912290 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:11:38.912298 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:11:38.912307 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:11:38.912314 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:11:38.912363 kernel: ACPI: Interpreter enabled
Jul 12 00:11:38.912371 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:11:38.912379 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:11:38.912387 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:11:38.912394 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:11:38.912401 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:11:38.912567 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:11:38.912645 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:11:38.912710 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:11:38.912797 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:11:38.912910 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:11:38.912923 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:11:38.912931 kernel: PCI host bridge to bus 0000:00
Jul 12 00:11:38.913032 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:11:38.913104 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:11:38.913165 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:11:38.913229 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:11:38.913318 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:11:38.913403 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jul 12 00:11:38.913477 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jul 12 00:11:38.913557 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 12 00:11:38.913641 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.913758 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jul 12 00:11:38.913861 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.913937 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jul 12 00:11:38.916093 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.916186 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jul 12 00:11:38.916262 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.916329 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jul 12 00:11:38.916465 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.916540 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jul 12 00:11:38.916615 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.916691 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jul 12 00:11:38.916816 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.916891 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jul 12 00:11:38.916989 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.917069 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jul 12 00:11:38.917145 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 12 00:11:38.917212 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jul 12 00:11:38.917302 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jul 12 00:11:38.917420 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jul 12 00:11:38.917521 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 12 00:11:38.917594 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jul 12 00:11:38.917663 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:11:38.917749 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 12 00:11:38.917836 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 12 00:11:38.917906 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jul 12 00:11:38.919821 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 12 00:11:38.919943 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jul 12 00:11:38.922144 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jul 12 00:11:38.922245 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 12 00:11:38.922318 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jul 12 00:11:38.922403 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 12 00:11:38.922474 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jul 12 00:11:38.922556 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 12 00:11:38.922669 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jul 12 00:11:38.922764 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 12 00:11:38.922848 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 12 00:11:38.922926 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jul 12 00:11:38.923064 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jul 12 00:11:38.923138 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 12 00:11:38.923211 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 12 00:11:38.923279 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jul 12 00:11:38.923346 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jul 12 00:11:38.923423 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 12 00:11:38.923549 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 12 00:11:38.923619 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jul 12 00:11:38.923690 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 12 00:11:38.923784 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jul 12 00:11:38.923853 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jul 12 00:11:38.923928 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 12 00:11:38.926146 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jul 12 00:11:38.926318 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jul 12 00:11:38.926408 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 12 00:11:38.926485 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jul 12 00:11:38.926559 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jul 12 00:11:38.926639 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 12 00:11:38.926762 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jul 12 00:11:38.926840 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jul 12 00:11:38.926919 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 12 00:11:38.927025 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jul 12 00:11:38.927095 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jul 12 00:11:38.927167 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 12 00:11:38.927234 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jul 12 00:11:38.927302 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jul 12 00:11:38.927375 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 12 00:11:38.927444 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jul 12 00:11:38.927516 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jul 12 00:11:38.927586 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jul 12 00:11:38.927655 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:11:38.927765 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jul 12 00:11:38.927844 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:11:38.927916 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jul 12 00:11:38.930059 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:11:38.930161 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jul 12 00:11:38.930232 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:11:38.930306 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jul 12 00:11:38.930374 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:11:38.930443 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jul 12 00:11:38.930570 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:11:38.930650 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jul 12 00:11:38.930764 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:11:38.930848 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jul 12 00:11:38.930921 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:11:38.931081 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jul 12 00:11:38.931153 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:11:38.931224 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jul 12 00:11:38.931299 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jul 12 00:11:38.931433 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jul 12 00:11:38.931502 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 12 00:11:38.931570 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jul 12 00:11:38.931636 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 12 00:11:38.931701 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jul 12 00:11:38.931785 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 12 00:11:38.931854 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jul 12 00:11:38.934002 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 12 00:11:38.934132 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jul 12 00:11:38.934203 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 12 00:11:38.934273 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jul 12 00:11:38.934341 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 12 00:11:38.934413 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jul 12 00:11:38.934482 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 12 00:11:38.934605 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jul 12 00:11:38.934685 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 12 00:11:38.934777 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jul 12 00:11:38.934848 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jul 12 00:11:38.934922 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jul 12 00:11:38.935013 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jul 12 00:11:38.935084 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:11:38.935154 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jul 12 00:11:38.935274 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 12 00:11:38.935350 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 12 00:11:38.935417 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jul 12 00:11:38.935483 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:11:38.935557 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jul 12 00:11:38.935627 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 12 00:11:38.935698 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 12 00:11:38.935793 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jul 12 00:11:38.935902 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:11:38.937093 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 12 00:11:38.937247 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jul 12 00:11:38.937322 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 12 00:11:38.937391 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 12 00:11:38.937465 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jul 12 00:11:38.937534 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:11:38.937609 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 12 00:11:38.937677 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 12 00:11:38.937801 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 12 00:11:38.937875 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jul 12 00:11:38.937944 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:11:38.938039 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jul 12 00:11:38.938124 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 12 00:11:38.938194 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 12 00:11:38.938260 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jul 12 00:11:38.938328 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:11:38.938406 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jul 12 00:11:38.938537 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jul 12 00:11:38.938623 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 12 00:11:38.938698 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 12 00:11:38.938857 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jul 12 00:11:38.938940 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:11:38.940945 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jul 12 00:11:38.941051 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jul 12 00:11:38.941123 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jul 12 00:11:38.941193 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 12 00:11:38.941259 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 12 00:11:38.941377 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jul 12 00:11:38.941464 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:11:38.941536 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 12 00:11:38.941602 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 12 00:11:38.941669 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jul 12 00:11:38.941783 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:11:38.941859 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 12 00:11:38.943086 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jul 12 00:11:38.943189 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jul 12 00:11:38.943262 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:11:38.943330 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:11:38.943454 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:11:38.943514 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:11:38.943594 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 12 00:11:38.943657 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jul 12 00:11:38.943731 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:11:38.943812 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jul 12 00:11:38.943873 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jul 12 00:11:38.943937 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:11:38.945287 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jul 12 00:11:38.945370 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jul 12 00:11:38.945439 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:11:38.945518 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jul 12 00:11:38.945583 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jul 12 00:11:38.945656 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:11:38.945785 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jul 12 00:11:38.945924 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jul 12 00:11:38.946021 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:11:38.946098 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jul 12 00:11:38.946166 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jul 12 00:11:38.946229 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:11:38.946300 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jul 12 00:11:38.946362 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jul 12 00:11:38.946477 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:11:38.946555 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jul 12 00:11:38.946618 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jul 12 00:11:38.946679 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:11:38.946769 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jul 12 00:11:38.946835 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jul 12 00:11:38.946898 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:11:38.946911 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:11:38.946919 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:11:38.946927 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:11:38.946935 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:11:38.946942 kernel: iommu: Default domain type: Translated
Jul 12 00:11:38.946950 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:11:38.946958 kernel: efivars: Registered efivars operations
Jul 12 00:11:38.947643 kernel: vgaarb: loaded
Jul 12 00:11:38.947663 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:11:38.947677 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:11:38.947685 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:11:38.947693 kernel: pnp: PnP ACPI init
Jul 12 00:11:38.947880 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:11:38.947897 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:11:38.947905 kernel: NET: Registered PF_INET protocol family
Jul 12 00:11:38.947913 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:11:38.947921 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:11:38.947933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:11:38.947941 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:11:38.947949 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:11:38.947957 kernel: TCP: Hash tables configured (established 32768
bind 32768) Jul 12 00:11:38.947980 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:11:38.947990 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:11:38.947998 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:11:38.948084 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jul 12 00:11:38.948096 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:11:38.948107 kernel: kvm [1]: HYP mode not available Jul 12 00:11:38.948115 kernel: Initialise system trusted keyrings Jul 12 00:11:38.948122 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:11:38.948130 kernel: Key type asymmetric registered Jul 12 00:11:38.948138 kernel: Asymmetric key parser 'x509' registered Jul 12 00:11:38.948145 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:11:38.948153 kernel: io scheduler mq-deadline registered Jul 12 00:11:38.948161 kernel: io scheduler kyber registered Jul 12 00:11:38.948168 kernel: io scheduler bfq registered Jul 12 00:11:38.948179 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 12 00:11:38.948249 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jul 12 00:11:38.948318 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jul 12 00:11:38.948384 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.948453 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jul 12 00:11:38.948575 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jul 12 00:11:38.948655 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.948761 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jul 12 00:11:38.948839 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jul 12 00:11:38.948909 kernel: pcieport 
0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.949168 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jul 12 00:11:38.949264 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jul 12 00:11:38.949337 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.949407 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jul 12 00:11:38.949474 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jul 12 00:11:38.949539 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.949608 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jul 12 00:11:38.949676 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jul 12 00:11:38.949763 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.949839 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jul 12 00:11:38.950498 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jul 12 00:11:38.950607 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.950685 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jul 12 00:11:38.950783 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jul 12 00:11:38.950877 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.950891 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jul 12 00:11:38.952220 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jul 12 00:11:38.952346 kernel: pcieport 0000:00:03.0: AER: enabled 
with IRQ 58 Jul 12 00:11:38.952442 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:38.952460 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:11:38.952470 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:11:38.952479 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:11:38.952565 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jul 12 00:11:38.952646 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jul 12 00:11:38.952659 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:11:38.952667 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:11:38.952760 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jul 12 00:11:38.952774 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jul 12 00:11:38.952783 kernel: thunder_xcv, ver 1.0 Jul 12 00:11:38.952791 kernel: thunder_bgx, ver 1.0 Jul 12 00:11:38.952802 kernel: nicpf, ver 1.0 Jul 12 00:11:38.952811 kernel: nicvf, ver 1.0 Jul 12 00:11:38.952907 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:11:38.953673 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:11:38 UTC (1752279098) Jul 12 00:11:38.953697 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:11:38.953707 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 12 00:11:38.953755 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:11:38.953766 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:11:38.953780 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:11:38.953788 kernel: Segment Routing with IPv6 Jul 12 00:11:38.953797 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:11:38.953805 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:11:38.953814 kernel: Key type dns_resolver 
registered Jul 12 00:11:38.953822 kernel: registered taskstats version 1 Jul 12 00:11:38.953830 kernel: Loading compiled-in X.509 certificates Jul 12 00:11:38.953839 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:11:38.953847 kernel: Key type .fscrypt registered Jul 12 00:11:38.953857 kernel: Key type fscrypt-provisioning registered Jul 12 00:11:38.953866 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 12 00:11:38.953875 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:11:38.953883 kernel: ima: No architecture policies found Jul 12 00:11:38.953892 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:11:38.954950 kernel: clk: Disabling unused clocks Jul 12 00:11:38.955021 kernel: Freeing unused kernel memory: 39424K Jul 12 00:11:38.955032 kernel: Run /init as init process Jul 12 00:11:38.955040 kernel: with arguments: Jul 12 00:11:38.955055 kernel: /init Jul 12 00:11:38.955064 kernel: with environment: Jul 12 00:11:38.955072 kernel: HOME=/ Jul 12 00:11:38.955080 kernel: TERM=linux Jul 12 00:11:38.955088 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:11:38.955098 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:11:38.955109 systemd[1]: Detected virtualization kvm. Jul 12 00:11:38.955118 systemd[1]: Detected architecture arm64. Jul 12 00:11:38.955129 systemd[1]: Running in initrd. Jul 12 00:11:38.955137 systemd[1]: No hostname configured, using default hostname. Jul 12 00:11:38.955145 systemd[1]: Hostname set to . Jul 12 00:11:38.955154 systemd[1]: Initializing machine ID from VM UUID. 
Jul 12 00:11:38.955163 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:11:38.955172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:11:38.955181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:11:38.955191 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:11:38.955201 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:11:38.955210 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:11:38.955219 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:11:38.955230 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:11:38.955239 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:11:38.955248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:11:38.955259 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:11:38.955269 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:11:38.955278 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:11:38.955286 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:11:38.955295 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:11:38.955303 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:11:38.955312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:11:38.955321 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:11:38.955330 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:11:38.955340 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:11:38.955349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:11:38.955358 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:11:38.955367 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:11:38.955376 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:11:38.955385 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:11:38.955394 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:11:38.955403 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:11:38.955411 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:11:38.955422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:11:38.955431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:11:38.955440 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:11:38.955449 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:11:38.955457 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:11:38.955467 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:11:38.955511 systemd-journald[235]: Collecting audit messages is disabled.
Jul 12 00:11:38.955534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:11:38.955545 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:11:38.955554 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:11:38.955563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:11:38.955572 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:11:38.955581 kernel: Bridge firewalling registered
Jul 12 00:11:38.955589 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:11:38.955599 systemd-journald[235]: Journal started
Jul 12 00:11:38.955620 systemd-journald[235]: Runtime Journal (/run/log/journal/3cd5ab2a93794e76825f25e776f67ad6) is 8.0M, max 76.6M, 68.6M free.
Jul 12 00:11:38.918472 systemd-modules-load[236]: Inserted module 'overlay'
Jul 12 00:11:38.947039 systemd-modules-load[236]: Inserted module 'br_netfilter'
Jul 12 00:11:38.961420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:11:38.962989 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:11:38.963305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:11:38.965064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:11:38.973185 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:11:38.975503 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:11:38.977939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:11:38.990228 dracut-cmdline[266]: dracut-dracut-053
Jul 12 00:11:38.993998 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:11:38.997400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:11:39.006183 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:11:39.031133 systemd-resolved[287]: Positive Trust Anchors:
Jul 12 00:11:39.031148 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:11:39.031179 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:11:39.041437 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 12 00:11:39.043081 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:11:39.043738 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:11:39.097983 kernel: SCSI subsystem initialized
Jul 12 00:11:39.101991 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:11:39.110008 kernel: iscsi: registered transport (tcp)
Jul 12 00:11:39.124003 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:11:39.124065 kernel: QLogic iSCSI HBA Driver
Jul 12 00:11:39.182346 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:11:39.193228 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:11:39.214249 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:11:39.214326 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:11:39.214342 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:11:39.266069 kernel: raid6: neonx8 gen() 15684 MB/s
Jul 12 00:11:39.283029 kernel: raid6: neonx4 gen() 15594 MB/s
Jul 12 00:11:39.300040 kernel: raid6: neonx2 gen() 13160 MB/s
Jul 12 00:11:39.317026 kernel: raid6: neonx1 gen() 10401 MB/s
Jul 12 00:11:39.334033 kernel: raid6: int64x8 gen() 6914 MB/s
Jul 12 00:11:39.351111 kernel: raid6: int64x4 gen() 7312 MB/s
Jul 12 00:11:39.368042 kernel: raid6: int64x2 gen() 6095 MB/s
Jul 12 00:11:39.385057 kernel: raid6: int64x1 gen() 5034 MB/s
Jul 12 00:11:39.385138 kernel: raid6: using algorithm neonx8 gen() 15684 MB/s
Jul 12 00:11:39.402031 kernel: raid6: .... xor() 11843 MB/s, rmw enabled
Jul 12 00:11:39.402115 kernel: raid6: using neon recovery algorithm
Jul 12 00:11:39.407001 kernel: xor: measuring software checksum speed
Jul 12 00:11:39.407079 kernel: 8regs : 17631 MB/sec
Jul 12 00:11:39.408166 kernel: 32regs : 19650 MB/sec
Jul 12 00:11:39.408194 kernel: arm64_neon : 26857 MB/sec
Jul 12 00:11:39.408212 kernel: xor: using function: arm64_neon (26857 MB/sec)
Jul 12 00:11:39.459011 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:11:39.474462 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:11:39.487363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:11:39.502536 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Jul 12 00:11:39.506315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:11:39.513157 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:11:39.532360 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jul 12 00:11:39.573861 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:11:39.582206 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:11:39.635550 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:11:39.643248 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:11:39.656882 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:11:39.659180 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:11:39.659860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:11:39.662143 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:11:39.668686 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:11:39.687606 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:11:39.726336 kernel: scsi host0: Virtio SCSI HBA
Jul 12 00:11:39.738109 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 12 00:11:39.738191 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 12 00:11:39.750075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:11:39.750211 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:11:39.751784 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:11:39.758249 kernel: ACPI: bus type USB registered
Jul 12 00:11:39.758273 kernel: usbcore: registered new interface driver usbfs
Jul 12 00:11:39.758283 kernel: usbcore: registered new interface driver hub
Jul 12 00:11:39.754109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:11:39.760384 kernel: usbcore: registered new device driver usb
Jul 12 00:11:39.754555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:11:39.757502 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:11:39.766451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:11:39.789855 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:11:39.801004 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jul 12 00:11:39.799298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:11:39.803987 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jul 12 00:11:39.804191 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 00:11:39.805993 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jul 12 00:11:39.812116 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jul 12 00:11:39.812330 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jul 12 00:11:39.812417 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jul 12 00:11:39.813201 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jul 12 00:11:39.813340 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 12 00:11:39.821502 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jul 12 00:11:39.821755 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jul 12 00:11:39.821866 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:11:39.821878 kernel: GPT:17805311 != 80003071
Jul 12 00:11:39.821888 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:11:39.821898 kernel: GPT:17805311 != 80003071
Jul 12 00:11:39.821908 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:11:39.821917 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:11:39.823216 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jul 12 00:11:39.826353 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jul 12 00:11:39.826529 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jul 12 00:11:39.826641 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jul 12 00:11:39.827999 kernel: hub 1-0:1.0: USB hub found
Jul 12 00:11:39.828164 kernel: hub 1-0:1.0: 4 ports detected
Jul 12 00:11:39.829029 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jul 12 00:11:39.830179 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jul 12 00:11:39.830324 kernel: hub 2-0:1.0: USB hub found
Jul 12 00:11:39.830418 kernel: hub 2-0:1.0: 4 ports detected
Jul 12 00:11:39.831437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:11:39.881165 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (520)
Jul 12 00:11:39.881234 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 12 00:11:39.887012 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (498)
Jul 12 00:11:39.893273 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 12 00:11:39.903468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 12 00:11:39.909251 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 12 00:11:39.909958 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 12 00:11:39.916205 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:11:39.924584 disk-uuid[574]: Primary Header is updated.
Jul 12 00:11:39.924584 disk-uuid[574]: Secondary Entries is updated.
Jul 12 00:11:39.924584 disk-uuid[574]: Secondary Header is updated.
Jul 12 00:11:39.936019 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:11:40.073138 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jul 12 00:11:40.211512 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jul 12 00:11:40.211566 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jul 12 00:11:40.212754 kernel: usbcore: registered new interface driver usbhid
Jul 12 00:11:40.212784 kernel: usbhid: USB HID core driver
Jul 12 00:11:40.316065 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jul 12 00:11:40.446018 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jul 12 00:11:40.500032 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jul 12 00:11:40.939100 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:11:40.939193 disk-uuid[575]: The operation has completed successfully.
Jul 12 00:11:41.000158 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:11:41.000272 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:11:41.019301 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:11:41.024883 sh[586]: Success
Jul 12 00:11:41.039052 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:11:41.095919 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:11:41.105288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:11:41.107056 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:11:41.129457 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:11:41.129522 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:11:41.129538 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:11:41.129561 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:11:41.129574 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:11:41.138023 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 12 00:11:41.140307 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:11:41.141873 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:11:41.148198 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:11:41.152185 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:11:41.163502 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:11:41.163553 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:11:41.164111 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:11:41.169000 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 12 00:11:41.169062 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:11:41.179597 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:11:41.181001 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:11:41.187531 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:11:41.195175 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:11:41.291192 ignition[670]: Ignition 2.19.0
Jul 12 00:11:41.291202 ignition[670]: Stage: fetch-offline
Jul 12 00:11:41.291237 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:11:41.291245 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:11:41.292010 ignition[670]: parsed url from cmdline: ""
Jul 12 00:11:41.294030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:11:41.292013 ignition[670]: no config URL provided
Jul 12 00:11:41.295396 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:11:41.292020 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:11:41.292031 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:11:41.292036 ignition[670]: failed to fetch config: resource requires networking
Jul 12 00:11:41.292228 ignition[670]: Ignition finished successfully
Jul 12 00:11:41.304218 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:11:41.328741 systemd-networkd[775]: lo: Link UP
Jul 12 00:11:41.328758 systemd-networkd[775]: lo: Gained carrier
Jul 12 00:11:41.330893 systemd-networkd[775]: Enumeration completed
Jul 12 00:11:41.331165 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:11:41.332448 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:41.332452 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:11:41.333553 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:41.333556 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:11:41.334764 systemd-networkd[775]: eth0: Link UP
Jul 12 00:11:41.334767 systemd-networkd[775]: eth0: Gained carrier
Jul 12 00:11:41.334774 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:41.335614 systemd[1]: Reached target network.target - Network.
Jul 12 00:11:41.341250 systemd-networkd[775]: eth1: Link UP
Jul 12 00:11:41.341254 systemd-networkd[775]: eth1: Gained carrier
Jul 12 00:11:41.341263 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:41.342398 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 12 00:11:41.356096 ignition[777]: Ignition 2.19.0
Jul 12 00:11:41.356108 ignition[777]: Stage: fetch
Jul 12 00:11:41.356289 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:11:41.356299 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:11:41.356386 ignition[777]: parsed url from cmdline: ""
Jul 12 00:11:41.356389 ignition[777]: no config URL provided
Jul 12 00:11:41.356394 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:11:41.356402 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:11:41.356420 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jul 12 00:11:41.357099 ignition[777]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 12 00:11:41.372083 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:11:41.400097 systemd-networkd[775]: eth0: DHCPv4 address 91.99.189.6/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 12 00:11:41.557275 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jul 12 00:11:41.564152 ignition[777]: GET result: OK
Jul 12 00:11:41.564394 ignition[777]: parsing config with SHA512: 42f75c98fcd4c6516fcedbf87eb83c54bdf484a00d4b24bf6b7a071c0b9f4f9edf19cd824c64603f39911afb2ae8efe7ca3d43d202640205c1c0a8b9b47054fc
Jul 12 00:11:41.571429 unknown[777]: fetched base config from "system"
Jul 12 00:11:41.571450 unknown[777]: fetched base config from "system"
Jul 12 00:11:41.571881 ignition[777]: fetch: fetch complete
Jul 12 00:11:41.571455 unknown[777]: fetched user config from "hetzner"
Jul 12 00:11:41.571886 ignition[777]: fetch: fetch passed
Jul 12 00:11:41.571934 ignition[777]: Ignition finished successfully
Jul 12 00:11:41.577034 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:11:41.584227 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:11:41.604069 ignition[784]: Ignition 2.19.0
Jul 12 00:11:41.604082 ignition[784]: Stage: kargs
Jul 12 00:11:41.605555 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:11:41.605569 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:11:41.606613 ignition[784]: kargs: kargs passed
Jul 12 00:11:41.606687 ignition[784]: Ignition finished successfully
Jul 12 00:11:41.609757 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:11:41.614156 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:11:41.626370 ignition[790]: Ignition 2.19.0
Jul 12 00:11:41.626383 ignition[790]: Stage: disks
Jul 12 00:11:41.626591 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:11:41.626602 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:11:41.629398 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:11:41.627949 ignition[790]: disks: disks passed
Jul 12 00:11:41.630795 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
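The fetch stage above fails on attempt #1 ("network is unreachable" before DHCP completes) and succeeds on attempt #2 once eth0/eth1 have addresses. A minimal sketch of that retry-until-up pattern, in Python; `fetch_with_retry` and `fake_get` are hypothetical names for illustration, not Ignition's actual implementation:

```python
import time

def fetch_with_retry(get, url, attempts=5, delay=0.2):
    """Retry a GET callable until it succeeds, mirroring the
    'attempt #1 ... attempt #2' pattern in the log above.
    `get` is any callable(url) that raises OSError on failure."""
    last_err = None
    for attempt in range(1, attempts + 1):
        try:
            return attempt, get(url)
        except OSError as err:
            last_err = err     # e.g. "network is unreachable" before DHCP
            time.sleep(delay)  # back off, then retry
    raise last_err

# Simulate the log: first call fails (no address yet), second succeeds.
calls = {"n": 0}
def fake_get(url):
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError("dial tcp 169.254.169.254:80: connect: network is unreachable")
    return "OK"

attempt, result = fetch_with_retry(fake_get, "http://169.254.169.254/hetzner/v1/userdata")
print(attempt, result)  # → 2 OK
```

The key point the log illustrates: the metadata endpoint (169.254.169.254) is only reachable after the link gains carrier and DHCP assigns an address, so a transient first failure is expected and harmless.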
Jul 12 00:11:41.628028 ignition[790]: Ignition finished successfully
Jul 12 00:11:41.631504 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:11:41.632204 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:11:41.633164 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:11:41.634175 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:11:41.645143 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:11:41.663578 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 12 00:11:41.667325 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:11:41.678198 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:11:41.732021 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:11:41.732543 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:11:41.733642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:11:41.747145 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:11:41.750533 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:11:41.754207 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 12 00:11:41.755021 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:11:41.755054 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:11:41.765597 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (806)
Jul 12 00:11:41.765640 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:11:41.766310 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:11:41.766979 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:11:41.771644 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 12 00:11:41.771747 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:11:41.772248 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:11:41.781213 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:11:41.785613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:11:41.823166 coreos-metadata[808]: Jul 12 00:11:41.822 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jul 12 00:11:41.826095 coreos-metadata[808]: Jul 12 00:11:41.824 INFO Fetch successful
Jul 12 00:11:41.826095 coreos-metadata[808]: Jul 12 00:11:41.825 INFO wrote hostname ci-4081-3-4-n-bdc5bebc5f to /sysroot/etc/hostname
Jul 12 00:11:41.828920 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:11:41.829524 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:11:41.835228 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:11:41.842237 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:11:41.846806 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:11:41.949087 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:11:41.957115 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:11:41.960136 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:11:41.968017 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:11:41.993038 ignition[925]: INFO : Ignition 2.19.0
Jul 12 00:11:41.993038 ignition[925]: INFO : Stage: mount
Jul 12 00:11:41.994197 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:11:41.994197 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:11:41.994003 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:11:41.997208 ignition[925]: INFO : mount: mount passed
Jul 12 00:11:41.997208 ignition[925]: INFO : Ignition finished successfully
Jul 12 00:11:41.999195 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:11:42.003114 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:11:42.129238 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:11:42.139277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:11:42.153011 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (936)
Jul 12 00:11:42.154838 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:11:42.154880 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:11:42.154910 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:11:42.160000 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 12 00:11:42.160062 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:11:42.163447 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:11:42.193274 ignition[954]: INFO : Ignition 2.19.0
Jul 12 00:11:42.193274 ignition[954]: INFO : Stage: files
Jul 12 00:11:42.195747 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:11:42.195747 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:11:42.195747 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:11:42.198591 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:11:42.198591 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:11:42.202035 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:11:42.203194 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:11:42.204571 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:11:42.204194 unknown[954]: wrote ssh authorized keys file for user: core
Jul 12 00:11:42.207628 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 12 00:11:42.209265 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 12 00:11:42.209265 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 12 00:11:42.209265 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 12 00:11:42.320152 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:11:42.458366 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 12 00:11:42.458366 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:11:42.461832 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 12 00:11:42.821369 systemd-networkd[775]: eth0: Gained IPv6LL
Jul 12 00:11:42.949328 systemd-networkd[775]: eth1: Gained IPv6LL
Jul 12 00:11:43.134907 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:11:43.344354 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:11:43.344354 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:11:43.348330 ignition[954]: INFO : files: files passed
Jul 12 00:11:43.348330 ignition[954]: INFO : Ignition finished successfully
Jul 12 00:11:43.349251 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:11:43.356262 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:11:43.362441 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:11:43.367317 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:11:43.367406 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:11:43.375924 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:11:43.375924 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:11:43.378760 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:11:43.382068 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:11:43.384129 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:11:43.393259 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:11:43.426342 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:11:43.426492 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:11:43.430777 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:11:43.432079 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:11:43.433725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:11:43.443233 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:11:43.455704 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:11:43.463190 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:11:43.474769 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:11:43.475525 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:11:43.476768 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:11:43.477790 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:11:43.477910 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:11:43.479313 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:11:43.480548 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:11:43.481532 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:11:43.482543 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:11:43.483660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:11:43.484766 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:11:43.485830 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:11:43.486962 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:11:43.488097 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:11:43.489099 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:11:43.489979 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:11:43.490108 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:11:43.491378 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:11:43.492072 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:11:43.493155 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:11:43.493614 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:11:43.494421 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:11:43.494535 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:11:43.496103 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:11:43.496222 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:11:43.497602 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:11:43.497717 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:11:43.498592 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 12 00:11:43.498709 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:11:43.509347 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:11:43.510462 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:11:43.510678 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:11:43.516163 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:11:43.516860 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:11:43.517069 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:11:43.519890 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:11:43.520181 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:11:43.527508 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:11:43.528169 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:11:43.533213 ignition[1005]: INFO : Ignition 2.19.0
Jul 12 00:11:43.533213 ignition[1005]: INFO : Stage: umount
Jul 12 00:11:43.533213 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:11:43.533213 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:11:43.538065 ignition[1005]: INFO : umount: umount passed
Jul 12 00:11:43.538065 ignition[1005]: INFO : Ignition finished successfully
Jul 12 00:11:43.535671 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:11:43.537323 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:11:43.538707 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:11:43.538753 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:11:43.539441 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:11:43.539484 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:11:43.544089 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:11:43.544136 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 12 00:11:43.546479 systemd[1]: Stopped target network.target - Network.
Jul 12 00:11:43.547068 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:11:43.547121 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:11:43.547748 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:11:43.549252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:11:43.558044 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:11:43.559296 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:11:43.561090 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:11:43.562587 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:11:43.562695 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:11:43.563838 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:11:43.563896 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:11:43.569265 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:11:43.569340 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:11:43.571092 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:11:43.571168 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:11:43.573446 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:11:43.574339 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:11:43.578066 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:11:43.582198 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:11:43.582301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:11:43.583082 systemd-networkd[775]: eth0: DHCPv6 lease lost
Jul 12 00:11:43.583514 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:11:43.583561 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:11:43.587326 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:11:43.587477 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:11:43.589591 systemd-networkd[775]: eth1: DHCPv6 lease lost
Jul 12 00:11:43.591847 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:11:43.592546 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:11:43.593895 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:11:43.593942 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:11:43.599128 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:11:43.599611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:11:43.599681 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:11:43.604394 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:11:43.604459 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:11:43.605066 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:11:43.605105 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:11:43.606739 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:11:43.606780 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:11:43.609002 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:11:43.622479 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:11:43.622733 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:11:43.624687 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:11:43.624785 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:11:43.626556 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:11:43.626617 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:11:43.627797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:11:43.627832 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:11:43.628916 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:11:43.628987 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:11:43.630504 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:11:43.630546 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:11:43.632065 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:11:43.632111 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:11:43.639250 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:11:43.641900 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:11:43.642020 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:11:43.644394 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:11:43.644449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:11:43.646430 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:11:43.646545 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:11:43.647929 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:11:43.652284 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:11:43.667372 systemd[1]: Switching root.
Jul 12 00:11:43.706509 systemd-journald[235]: Journal stopped
Jul 12 00:11:44.667725 systemd-journald[235]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:11:44.667788 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:11:44.667801 kernel: SELinux: policy capability open_perms=1
Jul 12 00:11:44.667811 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:11:44.667827 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:11:44.667837 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:11:44.667851 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:11:44.667861 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:11:44.667870 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:11:44.667879 kernel: audit: type=1403 audit(1752279103.872:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:11:44.667895 systemd[1]: Successfully loaded SELinux policy in 34.221ms.
Jul 12 00:11:44.667916 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.396ms.
Jul 12 00:11:44.667927 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:11:44.667940 systemd[1]: Detected virtualization kvm.
Jul 12 00:11:44.667951 systemd[1]: Detected architecture arm64.
Jul 12 00:11:44.667961 systemd[1]: Detected first boot.
Jul 12 00:11:44.670165 systemd[1]: Hostname set to .
Jul 12 00:11:44.670184 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:11:44.670195 zram_generator::config[1065]: No configuration found.
Jul 12 00:11:44.670208 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:11:44.670223 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:11:44.670234 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 12 00:11:44.670246 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:11:44.670257 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:11:44.670268 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:11:44.670278 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:11:44.670288 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:11:44.670299 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:11:44.670310 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:11:44.670322 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:11:44.670333 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:11:44.670343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:11:44.670355 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:11:44.670365 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:11:44.670375 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:11:44.670386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:11:44.670396 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 12 00:11:44.670406 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:11:44.670419 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:11:44.670429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:11:44.670440 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:11:44.670450 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:11:44.670460 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:11:44.670471 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:11:44.670483 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:11:44.670494 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:11:44.670511 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:11:44.670522 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:11:44.670532 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:11:44.670543 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:11:44.670553 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:11:44.670564 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:11:44.670574 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:11:44.670584 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:11:44.670596 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:11:44.670620 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:11:44.670633 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:11:44.670644 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:11:44.670654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:11:44.670665 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:11:44.670678 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:11:44.670691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:11:44.670704 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:11:44.670714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:11:44.670724 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:11:44.670735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:11:44.670746 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:11:44.670758 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 12 00:11:44.670771 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 12 00:11:44.670782 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:11:44.670793 kernel: loop: module loaded
Jul 12 00:11:44.670804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:11:44.670814 kernel: ACPI: bus type drm_connector registered
Jul 12 00:11:44.670825 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:11:44.670835 kernel: fuse: init (API version 7.39)
Jul 12 00:11:44.670846 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:11:44.670858 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:11:44.670868 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:11:44.670883 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:11:44.670894 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:11:44.670904 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:11:44.670915 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:11:44.670925 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:11:44.670936 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:11:44.670948 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:11:44.670959 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:11:44.672548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:11:44.672574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:11:44.672585 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:11:44.672595 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:11:44.672627 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:11:44.672640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:11:44.672651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:11:44.672687 systemd-journald[1153]: Collecting audit messages is disabled.
Jul 12 00:11:44.672713 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:11:44.672724 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:11:44.672734 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:11:44.672747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:11:44.672758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:11:44.672770 systemd-journald[1153]: Journal started
Jul 12 00:11:44.672791 systemd-journald[1153]: Runtime Journal (/run/log/journal/3cd5ab2a93794e76825f25e776f67ad6) is 8.0M, max 76.6M, 68.6M free.
Jul 12 00:11:44.678825 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:11:44.677052 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:11:44.678103 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:11:44.691006 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:11:44.698074 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:11:44.702080 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:11:44.702818 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:11:44.711120 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:11:44.715463 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:11:44.716241 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:11:44.717689 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:11:44.720078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:11:44.723313 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:11:44.726121 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:11:44.733269 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:11:44.735998 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:11:44.736757 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:11:44.746143 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 12 00:11:44.749432 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:11:44.752576 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:11:44.759849 systemd-journald[1153]: Time spent on flushing to /var/log/journal/3cd5ab2a93794e76825f25e776f67ad6 is 29.912ms for 1114 entries.
Jul 12 00:11:44.759849 systemd-journald[1153]: System Journal (/var/log/journal/3cd5ab2a93794e76825f25e776f67ad6) is 8.0M, max 584.8M, 576.8M free.
Jul 12 00:11:44.804059 systemd-journald[1153]: Received client request to flush runtime journal.
Jul 12 00:11:44.775407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:11:44.788154 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 12 00:11:44.790050 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Jul 12 00:11:44.790060 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Jul 12 00:11:44.798190 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:11:44.805239 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:11:44.808401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:11:44.847862 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:11:44.855229 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:11:44.868889 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Jul 12 00:11:44.868909 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Jul 12 00:11:44.876378 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:11:45.270901 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 00:11:45.277166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:11:45.300201 systemd-udevd[1229]: Using default interface naming scheme 'v255'.
Jul 12 00:11:45.327787 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:11:45.337664 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:11:45.366384 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 00:11:45.398524 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 12 00:11:45.441032 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 00:11:45.513364 systemd-networkd[1232]: lo: Link UP
Jul 12 00:11:45.513379 systemd-networkd[1232]: lo: Gained carrier
Jul 12 00:11:45.515749 systemd-networkd[1232]: Enumeration completed
Jul 12 00:11:45.515873 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:11:45.518084 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:45.518096 systemd-networkd[1232]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:11:45.520312 systemd-networkd[1232]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:45.520324 systemd-networkd[1232]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:11:45.521257 systemd-networkd[1232]: eth0: Link UP
Jul 12 00:11:45.521268 systemd-networkd[1232]: eth0: Gained carrier
Jul 12 00:11:45.521281 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:45.523170 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 00:11:45.534123 systemd-networkd[1232]: eth1: Link UP
Jul 12 00:11:45.534137 systemd-networkd[1232]: eth1: Gained carrier
Jul 12 00:11:45.534157 systemd-networkd[1232]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:45.548002 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1240)
Jul 12 00:11:45.557241 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jul 12 00:11:45.557266 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Jul 12 00:11:45.557412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:11:45.564173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:11:45.565026 systemd-networkd[1232]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:11:45.574144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:11:45.585065 systemd-networkd[1232]: eth0: DHCPv4 address 91.99.189.6/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 12 00:11:45.594770 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:11:45.595357 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:11:45.595394 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:11:45.607186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:11:45.607367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:11:45.611315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:11:45.611485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:11:45.612773 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:11:45.618754 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:11:45.620169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:11:45.659788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:11:45.659831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:11:45.663000 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jul 12 00:11:45.663063 kernel: mousedev: PS/2 mouse device common for all mice
Jul 12 00:11:45.664024 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 12 00:11:45.664065 kernel: [drm] features: -context_init
Jul 12 00:11:45.664999 kernel: [drm] number of scanouts: 1
Jul 12 00:11:45.665097 kernel: [drm] number of cap sets: 0
Jul 12 00:11:45.671005 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jul 12 00:11:45.675335 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 12 00:11:45.680165 kernel: Console: switching to colour frame buffer device 160x50
Jul 12 00:11:45.688004 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 12 00:11:45.689206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:11:45.700220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:11:45.700466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:11:45.706154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:11:45.783085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:11:45.834556 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 12 00:11:45.855276 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 12 00:11:45.867991 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:11:45.894765 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 12 00:11:45.897791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:11:45.910543 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 12 00:11:45.918009 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:11:45.949779 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 12 00:11:45.953084 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:11:45.954115 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:11:45.954148 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:11:45.954707 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:11:45.956541 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 12 00:11:45.962213 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 00:11:45.967123 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:11:45.967831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:11:45.970154 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 00:11:45.975154 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 12 00:11:45.979470 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:11:45.981336 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 00:11:45.993345 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 00:11:46.013000 kernel: loop0: detected capacity change from 0 to 114432
Jul 12 00:11:46.020213 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:11:46.021613 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 12 00:11:46.038200 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:11:46.062047 kernel: loop1: detected capacity change from 0 to 114328
Jul 12 00:11:46.100062 kernel: loop2: detected capacity change from 0 to 8
Jul 12 00:11:46.126213 kernel: loop3: detected capacity change from 0 to 203944
Jul 12 00:11:46.170018 kernel: loop4: detected capacity change from 0 to 114432
Jul 12 00:11:46.184134 kernel: loop5: detected capacity change from 0 to 114328
Jul 12 00:11:46.195995 kernel: loop6: detected capacity change from 0 to 8
Jul 12 00:11:46.200024 kernel: loop7: detected capacity change from 0 to 203944
Jul 12 00:11:46.212610 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jul 12 00:11:46.213644 (sd-merge)[1325]: Merged extensions into '/usr'.
Jul 12 00:11:46.220551 systemd[1]: Reloading requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 00:11:46.220617 systemd[1]: Reloading...
Jul 12 00:11:46.296036 zram_generator::config[1354]: No configuration found.
Jul 12 00:11:46.398470 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:11:46.420502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:11:46.478617 systemd[1]: Reloading finished in 257 ms.
Jul 12 00:11:46.495156 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:11:46.499303 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 00:11:46.504357 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:11:46.508115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:11:46.523142 systemd[1]: Reloading requested from client PID 1397 ('systemctl') (unit ensure-sysext.service)...
Jul 12 00:11:46.523358 systemd[1]: Reloading...
Jul 12 00:11:46.539715 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:11:46.540002 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 00:11:46.540649 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:11:46.540877 systemd-tmpfiles[1398]: ACLs are not supported, ignoring.
Jul 12 00:11:46.540922 systemd-tmpfiles[1398]: ACLs are not supported, ignoring.
Jul 12 00:11:46.544001 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:11:46.544011 systemd-tmpfiles[1398]: Skipping /boot
Jul 12 00:11:46.551803 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:11:46.551818 systemd-tmpfiles[1398]: Skipping /boot
Jul 12 00:11:46.591995 zram_generator::config[1427]: No configuration found.
Jul 12 00:11:46.712271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:11:46.773457 systemd[1]: Reloading finished in 249 ms.
Jul 12 00:11:46.789198 systemd-networkd[1232]: eth1: Gained IPv6LL
Jul 12 00:11:46.797448 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 00:11:46.807775 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:11:46.822421 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 12 00:11:46.829401 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 00:11:46.833219 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 00:11:46.844306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:11:46.850386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 00:11:46.857363 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:11:46.865209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:11:46.868745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:11:46.883101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:11:46.883892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:11:46.885802 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 00:11:46.893678 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:11:46.893846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:11:46.901507 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:11:46.901722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:11:46.905685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:11:46.905844 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:11:46.911335 augenrules[1502]: No rules
Jul 12 00:11:46.914290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:11:46.921449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:11:46.926298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:11:46.931383 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:11:46.934119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:11:46.949814 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 00:11:46.954426 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 12 00:11:46.955734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:11:46.955915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:11:46.957106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:11:46.957241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:11:46.965642 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 00:11:46.970629 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:11:46.972313 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:11:46.980194 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 00:11:46.987838 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 00:11:46.990438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:11:46.998896 systemd-resolved[1483]: Positive Trust Anchors:
Jul 12 00:11:46.999121 systemd-resolved[1483]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:11:46.999155 systemd-resolved[1483]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:11:47.000212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:11:47.003168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:11:47.005819 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:11:47.008366 systemd-resolved[1483]: Using system hostname 'ci-4081-3-4-n-bdc5bebc5f'.
Jul 12 00:11:47.015962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:11:47.016701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:11:47.016764 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:11:47.017271 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:11:47.018371 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:11:47.020749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:11:47.020897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:11:47.029181 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:11:47.029371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:11:47.031018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:11:47.031267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:11:47.032270 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:11:47.032549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:11:47.034655 systemd[1]: Reached target network.target - Network.
Jul 12 00:11:47.035902 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 00:11:47.036809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:11:47.037633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:11:47.037791 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:11:47.043177 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 12 00:11:47.098957 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 12 00:11:47.101331 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:11:47.102265 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 12 00:11:47.103019 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 12 00:11:47.103698 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 12 00:11:47.104521 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:11:47.104566 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:11:47.105223 systemd[1]: Reached target time-set.target - System Time Set.
Jul 12 00:11:47.106094 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 12 00:11:47.106794 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 12 00:11:47.107500 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:11:47.110070 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 12 00:11:47.112224 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 12 00:11:47.114359 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 12 00:11:47.117393 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 12 00:11:47.118175 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:11:47.118838 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:11:47.119739 systemd[1]: System is tainted: cgroupsv1
Jul 12 00:11:47.119793 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:11:47.119816 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:11:47.122417 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 12 00:11:47.125159 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 12 00:11:47.129164 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 12 00:11:47.132302 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 12 00:11:47.140060 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 12 00:11:47.140639 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 12 00:11:47.144755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:11:47.154192 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 12 00:11:47.167098 jq[1552]: false
Jul 12 00:11:47.165123 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 00:11:47.181288 dbus-daemon[1551]: [system] SELinux support is enabled
Jul 12 00:11:47.172540 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 12 00:11:47.187340 coreos-metadata[1550]: Jul 12 00:11:47.187 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jul 12 00:11:47.194146 coreos-metadata[1550]: Jul 12 00:11:47.187 INFO Fetch successful
Jul 12 00:11:47.194146 coreos-metadata[1550]: Jul 12 00:11:47.188 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jul 12 00:11:47.194146 coreos-metadata[1550]: Jul 12 00:11:47.188 INFO Fetch successful
Jul 12 00:11:47.190183 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jul 12 00:11:47.194314 extend-filesystems[1556]: Found loop4 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found loop5 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found loop6 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found loop7 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found sda Jul 12 00:11:47.194314 extend-filesystems[1556]: Found sda1 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found sda2 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found sda3 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found usr Jul 12 00:11:47.194314 extend-filesystems[1556]: Found sda4 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found sda6 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found sda7 Jul 12 00:11:47.194314 extend-filesystems[1556]: Found sda9 Jul 12 00:11:47.194314 extend-filesystems[1556]: Checking size of /dev/sda9 Jul 12 00:11:47.195207 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:11:47.206142 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:11:47.211182 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:11:47.215294 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:11:47.224111 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:11:47.239551 extend-filesystems[1556]: Resized partition /dev/sda9 Jul 12 00:11:46.831186 systemd-resolved[1483]: Clock change detected. Flushing caches. Jul 12 00:11:46.894444 systemd-journald[1153]: Time jumped backwards, rotating. 
Jul 12 00:11:46.914589 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jul 12 00:11:46.914623 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1231) Jul 12 00:11:46.914733 extend-filesystems[1586]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:11:46.831527 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:11:46.939683 update_engine[1580]: I20250712 00:11:46.922911 1580 main.cc:92] Flatcar Update Engine starting Jul 12 00:11:46.832030 systemd-timesyncd[1545]: Contacted time server 57.129.38.82:123 (0.flatcar.pool.ntp.org). Jul 12 00:11:46.832115 systemd-timesyncd[1545]: Initial clock synchronization to Sat 2025-07-12 00:11:46.831142 UTC. Jul 12 00:11:46.940228 jq[1587]: true Jul 12 00:11:46.834429 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:11:46.858693 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:11:46.858930 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:11:46.960752 update_engine[1580]: I20250712 00:11:46.945194 1580 update_check_scheduler.cc:74] Next update check in 2m18s Jul 12 00:11:46.960788 tar[1597]: linux-arm64/helm Jul 12 00:11:46.860969 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:11:46.980892 jq[1612]: true Jul 12 00:11:46.861247 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:11:46.875680 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:11:46.875915 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 12 00:11:46.932638 (ntainerd)[1601]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:11:46.934063 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:11:46.934100 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:11:46.937106 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:11:46.937132 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:11:46.939026 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:11:46.944124 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:11:46.945877 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:11:46.949273 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:11:47.018941 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 12 00:11:47.020983 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:11:47.054370 systemd-logind[1577]: New seat seat0. Jul 12 00:11:47.061863 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:11:47.061883 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jul 12 00:11:47.062171 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 12 00:11:47.082825 systemd-networkd[1232]: eth0: Gained IPv6LL Jul 12 00:11:47.090531 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jul 12 00:11:47.110900 bash[1646]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:11:47.112896 extend-filesystems[1586]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 12 00:11:47.112896 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 5 Jul 12 00:11:47.112896 extend-filesystems[1586]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jul 12 00:11:47.123861 extend-filesystems[1556]: Resized filesystem in /dev/sda9 Jul 12 00:11:47.123861 extend-filesystems[1556]: Found sr0 Jul 12 00:11:47.113315 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:11:47.122683 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:11:47.122933 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:11:47.139782 systemd[1]: Starting sshkeys.service... Jul 12 00:11:47.161930 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 12 00:11:47.176850 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 12 00:11:47.256648 coreos-metadata[1659]: Jul 12 00:11:47.256 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jul 12 00:11:47.260542 coreos-metadata[1659]: Jul 12 00:11:47.260 INFO Fetch successful Jul 12 00:11:47.263806 unknown[1659]: wrote ssh authorized keys file for user: core Jul 12 00:11:47.282867 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:11:47.315935 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:11:47.316369 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jul 12 00:11:47.322904 systemd[1]: Finished sshkeys.service. Jul 12 00:11:47.396134 containerd[1601]: time="2025-07-12T00:11:47.395958174Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:11:47.478098 containerd[1601]: time="2025-07-12T00:11:47.477965294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483329654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483376494Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483394454Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483638614Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483659374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483718814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483731574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483951654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483967334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483980534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485105 containerd[1601]: time="2025-07-12T00:11:47.483990494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485439 containerd[1601]: time="2025-07-12T00:11:47.484085334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485439 containerd[1601]: time="2025-07-12T00:11:47.484281334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485439 containerd[1601]: time="2025-07-12T00:11:47.484406614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:11:47.485439 containerd[1601]: time="2025-07-12T00:11:47.484421974Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 12 00:11:47.486471 containerd[1601]: time="2025-07-12T00:11:47.486369574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:11:47.486997 containerd[1601]: time="2025-07-12T00:11:47.486973254Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.495211294Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.495287054Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.495309614Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.495326894Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.495343054Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.495546134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.495885414Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.496001214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.496038574Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.496066574Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.496081494Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.496095534Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.496108734Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:11:47.497867 containerd[1601]: time="2025-07-12T00:11:47.496123334Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496139854Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496153654Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496167454Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496182454Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496204534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496220574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496233614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496247774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496261054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496274854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496289534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496304534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496319014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498245 containerd[1601]: time="2025-07-12T00:11:47.496342254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496356014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496369974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496384734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496403134Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496424934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496438214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496476574Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496586934Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496606414Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496618974Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496632214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496644254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496660334Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:11:47.498516 containerd[1601]: time="2025-07-12T00:11:47.496670734Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:11:47.498742 containerd[1601]: time="2025-07-12T00:11:47.496681854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:11:47.498766 containerd[1601]: time="2025-07-12T00:11:47.497063414Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:11:47.498766 containerd[1601]: time="2025-07-12T00:11:47.497126854Z" level=info msg="Connect containerd service" Jul 12 00:11:47.498766 containerd[1601]: time="2025-07-12T00:11:47.497221014Z" level=info msg="using legacy CRI server" Jul 12 00:11:47.498766 containerd[1601]: time="2025-07-12T00:11:47.497229294Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:11:47.498766 containerd[1601]: time="2025-07-12T00:11:47.497329774Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:11:47.502936 containerd[1601]: time="2025-07-12T00:11:47.502898294Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jul 12 00:11:47.505518 containerd[1601]: time="2025-07-12T00:11:47.505269334Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:11:47.505663 containerd[1601]: time="2025-07-12T00:11:47.505644614Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:11:47.512862 containerd[1601]: time="2025-07-12T00:11:47.505279254Z" level=info msg="Start subscribing containerd event" Jul 12 00:11:47.512862 containerd[1601]: time="2025-07-12T00:11:47.510470134Z" level=info msg="Start recovering state" Jul 12 00:11:47.512862 containerd[1601]: time="2025-07-12T00:11:47.510551374Z" level=info msg="Start event monitor" Jul 12 00:11:47.512862 containerd[1601]: time="2025-07-12T00:11:47.510563854Z" level=info msg="Start snapshots syncer" Jul 12 00:11:47.512862 containerd[1601]: time="2025-07-12T00:11:47.510574414Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:11:47.512862 containerd[1601]: time="2025-07-12T00:11:47.510583534Z" level=info msg="Start streaming server" Jul 12 00:11:47.512862 containerd[1601]: time="2025-07-12T00:11:47.510719494Z" level=info msg="containerd successfully booted in 0.120741s" Jul 12 00:11:47.510855 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:11:47.982513 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:11:47.987417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:11:47.997361 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:11:48.016574 tar[1597]: linux-arm64/LICENSE Jul 12 00:11:48.017549 tar[1597]: linux-arm64/README.md Jul 12 00:11:48.035217 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:11:48.045444 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jul 12 00:11:48.048789 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:11:48.058901 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:11:48.059183 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:11:48.067995 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:11:48.079432 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:11:48.088874 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:11:48.090819 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 00:11:48.091756 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:11:48.092724 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:11:48.093548 systemd[1]: Startup finished in 5.976s (kernel) + 4.666s (userspace) = 10.643s. Jul 12 00:11:48.601303 kubelet[1684]: E0712 00:11:48.601238 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:11:48.604627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:11:48.604946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:11:52.990511 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:11:52.997891 systemd[1]: Started sshd@0-91.99.189.6:22-139.178.68.195:40148.service - OpenSSH per-connection server daemon (139.178.68.195:40148). 
Jul 12 00:11:53.976992 sshd[1720]: Accepted publickey for core from 139.178.68.195 port 40148 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:11:53.980075 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:53.993045 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:11:53.998829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:11:54.002178 systemd-logind[1577]: New session 1 of user core. Jul 12 00:11:54.015221 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:11:54.022874 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:11:54.027024 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:11:54.137358 systemd[1726]: Queued start job for default target default.target. Jul 12 00:11:54.138278 systemd[1726]: Created slice app.slice - User Application Slice. Jul 12 00:11:54.138381 systemd[1726]: Reached target paths.target - Paths. Jul 12 00:11:54.138393 systemd[1726]: Reached target timers.target - Timers. Jul 12 00:11:54.142581 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:11:54.163213 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:11:54.163281 systemd[1726]: Reached target sockets.target - Sockets. Jul 12 00:11:54.163294 systemd[1726]: Reached target basic.target - Basic System. Jul 12 00:11:54.163337 systemd[1726]: Reached target default.target - Main User Target. Jul 12 00:11:54.163364 systemd[1726]: Startup finished in 130ms. Jul 12 00:11:54.163776 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:11:54.169559 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 12 00:11:54.856861 systemd[1]: Started sshd@1-91.99.189.6:22-139.178.68.195:40150.service - OpenSSH per-connection server daemon (139.178.68.195:40150). Jul 12 00:11:55.829338 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 40150 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:11:55.831554 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:55.837168 systemd-logind[1577]: New session 2 of user core. Jul 12 00:11:55.847082 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:11:56.511750 sshd[1738]: pam_unix(sshd:session): session closed for user core Jul 12 00:11:56.516176 systemd[1]: sshd@1-91.99.189.6:22-139.178.68.195:40150.service: Deactivated successfully. Jul 12 00:11:56.519367 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:11:56.519763 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:11:56.521415 systemd-logind[1577]: Removed session 2. Jul 12 00:11:56.712911 systemd[1]: Started sshd@2-91.99.189.6:22-139.178.68.195:40158.service - OpenSSH per-connection server daemon (139.178.68.195:40158). Jul 12 00:11:57.770241 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 40158 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:11:57.772593 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:57.778344 systemd-logind[1577]: New session 3 of user core. Jul 12 00:11:57.790550 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:11:58.499125 sshd[1746]: pam_unix(sshd:session): session closed for user core Jul 12 00:11:58.502579 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:11:58.503503 systemd[1]: sshd@2-91.99.189.6:22-139.178.68.195:40158.service: Deactivated successfully. Jul 12 00:11:58.506840 systemd[1]: session-3.scope: Deactivated successfully. 
Jul 12 00:11:58.508429 systemd-logind[1577]: Removed session 3. Jul 12 00:11:58.650761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:11:58.663729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:11:58.666223 systemd[1]: Started sshd@3-91.99.189.6:22-139.178.68.195:36284.service - OpenSSH per-connection server daemon (139.178.68.195:36284). Jul 12 00:11:58.822835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:11:58.827218 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:11:58.883350 kubelet[1768]: E0712 00:11:58.883191 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:11:58.889224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:11:58.889577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:11:59.660669 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 36284 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:11:59.662694 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:59.668571 systemd-logind[1577]: New session 4 of user core. Jul 12 00:11:59.676105 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:12:00.352920 sshd[1755]: pam_unix(sshd:session): session closed for user core Jul 12 00:12:00.357405 systemd[1]: sshd@3-91.99.189.6:22-139.178.68.195:36284.service: Deactivated successfully. Jul 12 00:12:00.361233 systemd[1]: session-4.scope: Deactivated successfully. 
Jul 12 00:12:00.362361 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit.
Jul 12 00:12:00.363848 systemd-logind[1577]: Removed session 4.
Jul 12 00:12:00.518748 systemd[1]: Started sshd@4-91.99.189.6:22-139.178.68.195:36286.service - OpenSSH per-connection server daemon (139.178.68.195:36286).
Jul 12 00:12:01.509592 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 36286 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:12:01.511260 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:01.515991 systemd-logind[1577]: New session 5 of user core.
Jul 12 00:12:01.527079 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 12 00:12:02.047073 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 12 00:12:02.047389 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:12:02.065731 sudo[1785]: pam_unix(sudo:session): session closed for user root
Jul 12 00:12:02.229050 sshd[1781]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:02.236430 systemd[1]: sshd@4-91.99.189.6:22-139.178.68.195:36286.service: Deactivated successfully.
Jul 12 00:12:02.240220 systemd[1]: session-5.scope: Deactivated successfully.
Jul 12 00:12:02.241388 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit.
Jul 12 00:12:02.242809 systemd-logind[1577]: Removed session 5.
Jul 12 00:12:02.400061 systemd[1]: Started sshd@5-91.99.189.6:22-139.178.68.195:36294.service - OpenSSH per-connection server daemon (139.178.68.195:36294).
Jul 12 00:12:03.392278 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 36294 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:12:03.394543 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:03.400644 systemd-logind[1577]: New session 6 of user core.
Jul 12 00:12:03.410926 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 12 00:12:03.921100 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 12 00:12:03.921391 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:12:03.924893 sudo[1795]: pam_unix(sudo:session): session closed for user root
Jul 12 00:12:03.932160 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 12 00:12:03.932438 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:12:03.956980 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 12 00:12:03.960011 auditctl[1798]: No rules
Jul 12 00:12:03.961092 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 00:12:03.962161 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 12 00:12:03.965860 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 12 00:12:03.994914 augenrules[1817]: No rules
Jul 12 00:12:03.996666 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 12 00:12:03.999884 sudo[1794]: pam_unix(sudo:session): session closed for user root
Jul 12 00:12:04.161878 sshd[1790]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:04.166924 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit.
Jul 12 00:12:04.167413 systemd[1]: sshd@5-91.99.189.6:22-139.178.68.195:36294.service: Deactivated successfully.
Jul 12 00:12:04.171174 systemd[1]: session-6.scope: Deactivated successfully.
Jul 12 00:12:04.173581 systemd-logind[1577]: Removed session 6.
Jul 12 00:12:04.330941 systemd[1]: Started sshd@6-91.99.189.6:22-139.178.68.195:36306.service - OpenSSH per-connection server daemon (139.178.68.195:36306).
Jul 12 00:12:05.320060 sshd[1826]: Accepted publickey for core from 139.178.68.195 port 36306 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:12:05.321985 sshd[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:05.327433 systemd-logind[1577]: New session 7 of user core.
Jul 12 00:12:05.333033 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 12 00:12:05.849135 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 00:12:05.849403 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:12:06.148979 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 12 00:12:06.149376 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 12 00:12:06.404444 dockerd[1846]: time="2025-07-12T00:12:06.404287254Z" level=info msg="Starting up"
Jul 12 00:12:06.532373 dockerd[1846]: time="2025-07-12T00:12:06.532291574Z" level=info msg="Loading containers: start."
Jul 12 00:12:06.650687 kernel: Initializing XFRM netlink socket
Jul 12 00:12:06.739814 systemd-networkd[1232]: docker0: Link UP
Jul 12 00:12:06.757688 dockerd[1846]: time="2025-07-12T00:12:06.757587734Z" level=info msg="Loading containers: done."
Jul 12 00:12:06.774994 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1471714628-merged.mount: Deactivated successfully.
Jul 12 00:12:06.776775 dockerd[1846]: time="2025-07-12T00:12:06.775514094Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 00:12:06.776775 dockerd[1846]: time="2025-07-12T00:12:06.775718334Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 12 00:12:06.776775 dockerd[1846]: time="2025-07-12T00:12:06.775870014Z" level=info msg="Daemon has completed initialization"
Jul 12 00:12:06.826785 dockerd[1846]: time="2025-07-12T00:12:06.826617014Z" level=info msg="API listen on /run/docker.sock"
Jul 12 00:12:06.826901 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 12 00:12:07.900327 containerd[1601]: time="2025-07-12T00:12:07.900279654Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 12 00:12:08.526718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161935729.mount: Deactivated successfully.
Jul 12 00:12:09.139830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 12 00:12:09.145719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:12:09.291442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:09.296086 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:12:09.350236 kubelet[2055]: E0712 00:12:09.350185 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:12:09.352326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:12:09.352502 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:12:09.527693 containerd[1601]: time="2025-07-12T00:12:09.527508254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:09.529998 containerd[1601]: time="2025-07-12T00:12:09.529954054Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651885"
Jul 12 00:12:09.530128 containerd[1601]: time="2025-07-12T00:12:09.530098094Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:09.534189 containerd[1601]: time="2025-07-12T00:12:09.534076774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:09.535992 containerd[1601]: time="2025-07-12T00:12:09.535752254Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.63542968s"
Jul 12 00:12:09.535992 containerd[1601]: time="2025-07-12T00:12:09.535792094Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 12 00:12:09.537383 containerd[1601]: time="2025-07-12T00:12:09.537358774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 12 00:12:11.012235 containerd[1601]: time="2025-07-12T00:12:11.012140094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:11.014384 containerd[1601]: time="2025-07-12T00:12:11.014317614Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459697"
Jul 12 00:12:11.015383 containerd[1601]: time="2025-07-12T00:12:11.015308894Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:11.019299 containerd[1601]: time="2025-07-12T00:12:11.019224374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:11.021337 containerd[1601]: time="2025-07-12T00:12:11.020478494Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.4829758s"
Jul 12 00:12:11.021337 containerd[1601]: time="2025-07-12T00:12:11.020521094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 12 00:12:11.021696 containerd[1601]: time="2025-07-12T00:12:11.021665934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 12 00:12:12.000186 containerd[1601]: time="2025-07-12T00:12:11.998933814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:12.002353 containerd[1601]: time="2025-07-12T00:12:12.002284814Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125086"
Jul 12 00:12:12.007768 containerd[1601]: time="2025-07-12T00:12:12.006809534Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:12.010342 containerd[1601]: time="2025-07-12T00:12:12.010290214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:12.011615 containerd[1601]: time="2025-07-12T00:12:12.011405734Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 989.6826ms"
Jul 12 00:12:12.011615 containerd[1601]: time="2025-07-12T00:12:12.011444454Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 12 00:12:12.012040 containerd[1601]: time="2025-07-12T00:12:12.012006814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 12 00:12:12.996527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924988709.mount: Deactivated successfully.
Jul 12 00:12:13.309347 containerd[1601]: time="2025-07-12T00:12:13.307747934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:13.309347 containerd[1601]: time="2025-07-12T00:12:13.309204614Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915983"
Jul 12 00:12:13.310269 containerd[1601]: time="2025-07-12T00:12:13.310216054Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:13.312393 containerd[1601]: time="2025-07-12T00:12:13.312355454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:13.313091 containerd[1601]: time="2025-07-12T00:12:13.313046934Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.3010036s"
Jul 12 00:12:13.313091 containerd[1601]: time="2025-07-12T00:12:13.313085494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 12 00:12:13.313510 containerd[1601]: time="2025-07-12T00:12:13.313483494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 12 00:12:13.901672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975869743.mount: Deactivated successfully.
Jul 12 00:12:14.636203 containerd[1601]: time="2025-07-12T00:12:14.636131814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:14.638577 containerd[1601]: time="2025-07-12T00:12:14.638424054Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:14.638577 containerd[1601]: time="2025-07-12T00:12:14.638542014Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Jul 12 00:12:14.642420 containerd[1601]: time="2025-07-12T00:12:14.642356494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:14.644474 containerd[1601]: time="2025-07-12T00:12:14.644048174Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.33053496s"
Jul 12 00:12:14.644474 containerd[1601]: time="2025-07-12T00:12:14.644086934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 12 00:12:14.644809 containerd[1601]: time="2025-07-12T00:12:14.644788854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 12 00:12:15.149349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4080540144.mount: Deactivated successfully.
Jul 12 00:12:15.156485 containerd[1601]: time="2025-07-12T00:12:15.155335974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:15.157621 containerd[1601]: time="2025-07-12T00:12:15.157587334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jul 12 00:12:15.158718 containerd[1601]: time="2025-07-12T00:12:15.158651494Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:15.161511 containerd[1601]: time="2025-07-12T00:12:15.161434014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:15.162702 containerd[1601]: time="2025-07-12T00:12:15.162663854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 517.7702ms"
Jul 12 00:12:15.162855 containerd[1601]: time="2025-07-12T00:12:15.162835134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 12 00:12:15.163486 containerd[1601]: time="2025-07-12T00:12:15.163428774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 12 00:12:15.730510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236546876.mount: Deactivated successfully.
Jul 12 00:12:17.117029 containerd[1601]: time="2025-07-12T00:12:17.116950974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:17.119164 containerd[1601]: time="2025-07-12T00:12:17.119123094Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533"
Jul 12 00:12:17.120068 containerd[1601]: time="2025-07-12T00:12:17.119600934Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:17.126482 containerd[1601]: time="2025-07-12T00:12:17.124771774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:17.129323 containerd[1601]: time="2025-07-12T00:12:17.129270094Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.9656782s"
Jul 12 00:12:17.129323 containerd[1601]: time="2025-07-12T00:12:17.129316294Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 12 00:12:19.603344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 12 00:12:19.612700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:12:19.744662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:19.756318 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:12:19.795932 kubelet[2222]: E0712 00:12:19.795795 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:12:19.801174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:12:19.801329 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:12:21.651239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:21.660954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:12:21.702293 systemd[1]: Reloading requested from client PID 2238 ('systemctl') (unit session-7.scope)...
Jul 12 00:12:21.702310 systemd[1]: Reloading...
Jul 12 00:12:21.816563 zram_generator::config[2277]: No configuration found.
Jul 12 00:12:21.924780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:12:21.995222 systemd[1]: Reloading finished in 292 ms.
Jul 12 00:12:22.061896 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 12 00:12:22.061975 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 12 00:12:22.062602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:22.070284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:12:22.192863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:22.202109 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:12:22.244469 kubelet[2338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:12:22.244469 kubelet[2338]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:12:22.244469 kubelet[2338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:12:22.244952 kubelet[2338]: I0712 00:12:22.244574 2338 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 00:12:23.266502 kubelet[2338]: I0712 00:12:23.266250 2338 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 12 00:12:23.266502 kubelet[2338]: I0712 00:12:23.266291 2338 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 00:12:23.266917 kubelet[2338]: I0712 00:12:23.266684 2338 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 12 00:12:23.297672 kubelet[2338]: E0712 00:12:23.297611 2338 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.189.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.189.6:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:23.298777 kubelet[2338]: I0712 00:12:23.298753 2338 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:12:23.308719 kubelet[2338]: E0712 00:12:23.308487 2338 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:12:23.308719 kubelet[2338]: I0712 00:12:23.308527 2338 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:12:23.313506 kubelet[2338]: I0712 00:12:23.313440 2338 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:12:23.315410 kubelet[2338]: I0712 00:12:23.315355 2338 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 12 00:12:23.315707 kubelet[2338]: I0712 00:12:23.315643 2338 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:12:23.316002 kubelet[2338]: I0712 00:12:23.315699 2338 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-n-bdc5bebc5f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 12 00:12:23.316092 kubelet[2338]: I0712 00:12:23.316013 2338 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:12:23.316092 kubelet[2338]: I0712 00:12:23.316030 2338 container_manager_linux.go:300] "Creating device plugin manager"
Jul 12 00:12:23.316320 kubelet[2338]: I0712 00:12:23.316288 2338 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:12:23.319907 kubelet[2338]: I0712 00:12:23.319843 2338 kubelet.go:408] "Attempting to sync node with API server"
Jul 12 00:12:23.319907 kubelet[2338]: I0712 00:12:23.319886 2338 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:12:23.319907 kubelet[2338]: I0712 00:12:23.319914 2338 kubelet.go:314] "Adding apiserver pod source"
Jul 12 00:12:23.321245 kubelet[2338]: I0712 00:12:23.319991 2338 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:12:23.327479 kubelet[2338]: I0712 00:12:23.327442 2338 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 12 00:12:23.328465 kubelet[2338]: I0712 00:12:23.328428 2338 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:12:23.328795 kubelet[2338]: W0712 00:12:23.328779 2338 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 12 00:12:23.330069 kubelet[2338]: I0712 00:12:23.330028 2338 server.go:1274] "Started kubelet"
Jul 12 00:12:23.330405 kubelet[2338]: W0712 00:12:23.330293 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.189.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-bdc5bebc5f&limit=500&resourceVersion=0": dial tcp 91.99.189.6:6443: connect: connection refused
Jul 12 00:12:23.330539 kubelet[2338]: E0712 00:12:23.330517 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.189.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-bdc5bebc5f&limit=500&resourceVersion=0\": dial tcp 91.99.189.6:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:23.337297 kubelet[2338]: W0712 00:12:23.337237 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.189.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.189.6:6443: connect: connection refused
Jul 12 00:12:23.337376 kubelet[2338]: E0712 00:12:23.337301 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.189.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.189.6:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:23.337685 kubelet[2338]: I0712 00:12:23.337664 2338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:12:23.340233 kubelet[2338]: E0712 00:12:23.337350 2338 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.189.6:6443/api/v1/namespaces/default/events\": dial tcp 91.99.189.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-n-bdc5bebc5f.18515898cd82b587 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-n-bdc5bebc5f,UID:ci-4081-3-4-n-bdc5bebc5f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-n-bdc5bebc5f,},FirstTimestamp:2025-07-12 00:12:23.330002311 +0000 UTC m=+1.123217833,LastTimestamp:2025-07-12 00:12:23.330002311 +0000 UTC m=+1.123217833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-n-bdc5bebc5f,}"
Jul 12 00:12:23.341913 kubelet[2338]: I0712 00:12:23.341863 2338 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:12:23.343395 kubelet[2338]: I0712 00:12:23.343361 2338 server.go:449] "Adding debug handlers to kubelet server"
Jul 12 00:12:23.344696 kubelet[2338]: I0712 00:12:23.344656 2338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:12:23.346183 kubelet[2338]: I0712 00:12:23.346144 2338 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 12 00:12:23.347138 kubelet[2338]: E0712 00:12:23.346414 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-bdc5bebc5f\" not found"
Jul 12 00:12:23.347265 kubelet[2338]: I0712 00:12:23.347233 2338 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 12 00:12:23.347299 kubelet[2338]: I0712 00:12:23.347290 2338 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:12:23.347785 kubelet[2338]: I0712 00:12:23.347722 2338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:12:23.347988 kubelet[2338]: I0712 00:12:23.347963 2338 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:12:23.349481 kubelet[2338]: I0712 00:12:23.349432 2338 factory.go:221] Registration of the systemd container factory successfully
Jul 12 00:12:23.349581 kubelet[2338]: I0712 00:12:23.349547 2338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:12:23.350056 kubelet[2338]: W0712 00:12:23.350004 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.189.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.189.6:6443: connect: connection refused
Jul 12 00:12:23.350128 kubelet[2338]: E0712 00:12:23.350057 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.189.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.189.6:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:23.350163 kubelet[2338]: E0712 00:12:23.350117 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.189.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-bdc5bebc5f?timeout=10s\": dial tcp 91.99.189.6:6443: connect: connection refused" interval="200ms"
Jul 12 00:12:23.352115 kubelet[2338]: I0712 00:12:23.352083 2338 factory.go:221] Registration of the containerd container factory successfully
Jul 12 00:12:23.361569 kubelet[2338]: I0712 00:12:23.361526 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:12:23.363249 kubelet[2338]: I0712 00:12:23.362822 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:12:23.363249 kubelet[2338]: I0712 00:12:23.362849 2338 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 12 00:12:23.363249 kubelet[2338]: I0712 00:12:23.362867 2338 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 12 00:12:23.363249 kubelet[2338]: E0712 00:12:23.362915 2338 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:12:23.371181 kubelet[2338]: W0712 00:12:23.371119 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.189.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.189.6:6443: connect: connection refused
Jul 12 00:12:23.371730 kubelet[2338]: E0712 00:12:23.371426 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get
\"https://91.99.189.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.189.6:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:12:23.372126 kubelet[2338]: E0712 00:12:23.372106 2338 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:12:23.382726 kubelet[2338]: I0712 00:12:23.382690 2338 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:12:23.382726 kubelet[2338]: I0712 00:12:23.382710 2338 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:12:23.382726 kubelet[2338]: I0712 00:12:23.382730 2338 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:12:23.384826 kubelet[2338]: I0712 00:12:23.384790 2338 policy_none.go:49] "None policy: Start" Jul 12 00:12:23.385883 kubelet[2338]: I0712 00:12:23.385781 2338 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:12:23.386398 kubelet[2338]: I0712 00:12:23.386012 2338 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:12:23.396432 kubelet[2338]: I0712 00:12:23.396016 2338 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:12:23.396432 kubelet[2338]: I0712 00:12:23.396215 2338 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:12:23.396432 kubelet[2338]: I0712 00:12:23.396227 2338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:12:23.397220 kubelet[2338]: I0712 00:12:23.397169 2338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:12:23.398313 kubelet[2338]: E0712 00:12:23.398291 2338 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-n-bdc5bebc5f\" not found" Jul 12 00:12:23.499628 kubelet[2338]: I0712 00:12:23.499506 2338 
kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.500338 kubelet[2338]: E0712 00:12:23.500284 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.189.6:6443/api/v1/nodes\": dial tcp 91.99.189.6:6443: connect: connection refused" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.548774 kubelet[2338]: I0712 00:12:23.548208 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1372e655462367628d99fda6577a479c-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"1372e655462367628d99fda6577a479c\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.548774 kubelet[2338]: I0712 00:12:23.548278 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1372e655462367628d99fda6577a479c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"1372e655462367628d99fda6577a479c\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.548774 kubelet[2338]: I0712 00:12:23.548321 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1372e655462367628d99fda6577a479c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"1372e655462367628d99fda6577a479c\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.548774 kubelet[2338]: I0712 00:12:23.548370 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.548774 kubelet[2338]: I0712 00:12:23.548409 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.549143 kubelet[2338]: I0712 00:12:23.548444 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a4a6cc2d0f47c484db0f332bbd727e7-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"6a4a6cc2d0f47c484db0f332bbd727e7\") " pod="kube-system/kube-scheduler-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.549143 kubelet[2338]: I0712 00:12:23.548549 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.549143 kubelet[2338]: I0712 00:12:23.548680 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.549143 kubelet[2338]: I0712 00:12:23.548746 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.551474 kubelet[2338]: E0712 00:12:23.551288 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.189.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-bdc5bebc5f?timeout=10s\": dial tcp 91.99.189.6:6443: connect: connection refused" interval="400ms" Jul 12 00:12:23.703476 kubelet[2338]: I0712 00:12:23.703346 2338 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.704126 kubelet[2338]: E0712 00:12:23.704072 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.189.6:6443/api/v1/nodes\": dial tcp 91.99.189.6:6443: connect: connection refused" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:23.776345 containerd[1601]: time="2025-07-12T00:12:23.776222154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-n-bdc5bebc5f,Uid:1372e655462367628d99fda6577a479c,Namespace:kube-system,Attempt:0,}" Jul 12 00:12:23.778307 containerd[1601]: time="2025-07-12T00:12:23.778177564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f,Uid:b01f32caecb0aa1eece5c9c1c11ea135,Namespace:kube-system,Attempt:0,}" Jul 12 00:12:23.781422 containerd[1601]: time="2025-07-12T00:12:23.781374298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-n-bdc5bebc5f,Uid:6a4a6cc2d0f47c484db0f332bbd727e7,Namespace:kube-system,Attempt:0,}" Jul 12 00:12:23.952831 kubelet[2338]: E0712 00:12:23.952658 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://91.99.189.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-bdc5bebc5f?timeout=10s\": dial tcp 91.99.189.6:6443: connect: connection refused" interval="800ms" Jul 12 00:12:24.107091 kubelet[2338]: I0712 00:12:24.107061 2338 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:24.107905 kubelet[2338]: E0712 00:12:24.107868 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.189.6:6443/api/v1/nodes\": dial tcp 91.99.189.6:6443: connect: connection refused" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:24.359409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036231852.mount: Deactivated successfully. Jul 12 00:12:24.364993 containerd[1601]: time="2025-07-12T00:12:24.364796707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:12:24.367344 containerd[1601]: time="2025-07-12T00:12:24.367193977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:12:24.368484 containerd[1601]: time="2025-07-12T00:12:24.368250803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:12:24.369424 containerd[1601]: time="2025-07-12T00:12:24.369334991Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:12:24.371827 containerd[1601]: time="2025-07-12T00:12:24.371693299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jul 12 00:12:24.373201 containerd[1601]: 
time="2025-07-12T00:12:24.372925376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:12:24.373201 containerd[1601]: time="2025-07-12T00:12:24.373031302Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:12:24.374434 containerd[1601]: time="2025-07-12T00:12:24.374367026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:12:24.377944 containerd[1601]: time="2025-07-12T00:12:24.377593388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.107363ms" Jul 12 00:12:24.379431 containerd[1601]: time="2025-07-12T00:12:24.379245171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 600.95288ms" Jul 12 00:12:24.379983 containerd[1601]: time="2025-07-12T00:12:24.379936415Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.601893ms" Jul 12 00:12:24.404377 
kubelet[2338]: W0712 00:12:24.404099 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.189.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.189.6:6443: connect: connection refused Jul 12 00:12:24.404377 kubelet[2338]: E0712 00:12:24.404182 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.189.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.189.6:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:12:24.511702 containerd[1601]: time="2025-07-12T00:12:24.511560376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:12:24.511702 containerd[1601]: time="2025-07-12T00:12:24.511624140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:12:24.511702 containerd[1601]: time="2025-07-12T00:12:24.511634821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:12:24.512303 containerd[1601]: time="2025-07-12T00:12:24.512032246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:12:24.515296 containerd[1601]: time="2025-07-12T00:12:24.514913226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:12:24.515296 containerd[1601]: time="2025-07-12T00:12:24.514969470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:12:24.515296 containerd[1601]: time="2025-07-12T00:12:24.515001712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:12:24.515296 containerd[1601]: time="2025-07-12T00:12:24.515107638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:12:24.516390 containerd[1601]: time="2025-07-12T00:12:24.516270471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:12:24.516390 containerd[1601]: time="2025-07-12T00:12:24.516331875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:12:24.516390 containerd[1601]: time="2025-07-12T00:12:24.516370117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:12:24.517287 containerd[1601]: time="2025-07-12T00:12:24.516678297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:12:24.552441 kubelet[2338]: W0712 00:12:24.552367 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.189.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-bdc5bebc5f&limit=500&resourceVersion=0": dial tcp 91.99.189.6:6443: connect: connection refused Jul 12 00:12:24.552441 kubelet[2338]: E0712 00:12:24.552441 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.189.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-bdc5bebc5f&limit=500&resourceVersion=0\": dial tcp 91.99.189.6:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:12:24.588325 containerd[1601]: time="2025-07-12T00:12:24.588239378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-n-bdc5bebc5f,Uid:6a4a6cc2d0f47c484db0f332bbd727e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa4cdac54fdaf2c2f499f2a064a57bca06e37fbcfdf94bdd0f6ad4f175aa8c16\"" Jul 12 00:12:24.593822 containerd[1601]: time="2025-07-12T00:12:24.593779604Z" level=info msg="CreateContainer within sandbox \"fa4cdac54fdaf2c2f499f2a064a57bca06e37fbcfdf94bdd0f6ad4f175aa8c16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:12:24.594902 containerd[1601]: time="2025-07-12T00:12:24.594861992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-n-bdc5bebc5f,Uid:1372e655462367628d99fda6577a479c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e05909b61a0bc83c8520d8595d0fe4118a70ef03de08b95a962beedfa03d330\"" Jul 12 00:12:24.598729 containerd[1601]: time="2025-07-12T00:12:24.598408334Z" level=info msg="CreateContainer within sandbox \"7e05909b61a0bc83c8520d8595d0fe4118a70ef03de08b95a962beedfa03d330\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" 
Jul 12 00:12:24.601338 containerd[1601]: time="2025-07-12T00:12:24.601297875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f,Uid:b01f32caecb0aa1eece5c9c1c11ea135,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c0d9188412fa25e88ee01121dcafccf32a80c0ac42e742d10f5964163a36ac2\"" Jul 12 00:12:24.607725 containerd[1601]: time="2025-07-12T00:12:24.607559427Z" level=info msg="CreateContainer within sandbox \"8c0d9188412fa25e88ee01121dcafccf32a80c0ac42e742d10f5964163a36ac2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:12:24.618602 containerd[1601]: time="2025-07-12T00:12:24.618474311Z" level=info msg="CreateContainer within sandbox \"7e05909b61a0bc83c8520d8595d0fe4118a70ef03de08b95a962beedfa03d330\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6037490739dce42db782cb5b3f6015a37409f1a7127e5a0a2d7c760e36f6b332\"" Jul 12 00:12:24.621481 containerd[1601]: time="2025-07-12T00:12:24.619769032Z" level=info msg="CreateContainer within sandbox \"fa4cdac54fdaf2c2f499f2a064a57bca06e37fbcfdf94bdd0f6ad4f175aa8c16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eab5504fe03d50ea5fdabb861d8b666a2daa63962a6192dd4a57864102e017aa\"" Jul 12 00:12:24.621481 containerd[1601]: time="2025-07-12T00:12:24.619984885Z" level=info msg="StartContainer for \"6037490739dce42db782cb5b3f6015a37409f1a7127e5a0a2d7c760e36f6b332\"" Jul 12 00:12:24.621481 containerd[1601]: time="2025-07-12T00:12:24.620389231Z" level=info msg="StartContainer for \"eab5504fe03d50ea5fdabb861d8b666a2daa63962a6192dd4a57864102e017aa\"" Jul 12 00:12:24.639121 containerd[1601]: time="2025-07-12T00:12:24.639060400Z" level=info msg="CreateContainer within sandbox \"8c0d9188412fa25e88ee01121dcafccf32a80c0ac42e742d10f5964163a36ac2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"e6c92ef2749e1c64b887140a736b690a4983936064dfb1ffaf0ad0885200dc63\"" Jul 12 00:12:24.639781 containerd[1601]: time="2025-07-12T00:12:24.639720201Z" level=info msg="StartContainer for \"e6c92ef2749e1c64b887140a736b690a4983936064dfb1ffaf0ad0885200dc63\"" Jul 12 00:12:24.697166 containerd[1601]: time="2025-07-12T00:12:24.697107874Z" level=info msg="StartContainer for \"eab5504fe03d50ea5fdabb861d8b666a2daa63962a6192dd4a57864102e017aa\" returns successfully" Jul 12 00:12:24.710221 containerd[1601]: time="2025-07-12T00:12:24.710160932Z" level=info msg="StartContainer for \"6037490739dce42db782cb5b3f6015a37409f1a7127e5a0a2d7c760e36f6b332\" returns successfully" Jul 12 00:12:24.755500 kubelet[2338]: E0712 00:12:24.753141 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.189.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-bdc5bebc5f?timeout=10s\": dial tcp 91.99.189.6:6443: connect: connection refused" interval="1.6s" Jul 12 00:12:24.765191 containerd[1601]: time="2025-07-12T00:12:24.765143774Z" level=info msg="StartContainer for \"e6c92ef2749e1c64b887140a736b690a4983936064dfb1ffaf0ad0885200dc63\" returns successfully" Jul 12 00:12:24.911805 kubelet[2338]: I0712 00:12:24.911704 2338 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:26.716232 kubelet[2338]: E0712 00:12:26.716192 2338 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-n-bdc5bebc5f\" not found" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:26.868346 kubelet[2338]: I0712 00:12:26.867165 2338 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:12:26.868346 kubelet[2338]: E0712 00:12:26.867207 2338 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-n-bdc5bebc5f\": node \"ci-4081-3-4-n-bdc5bebc5f\" not found" Jul 12 
00:12:26.890117 kubelet[2338]: E0712 00:12:26.889855 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-bdc5bebc5f\" not found" Jul 12 00:12:26.990765 kubelet[2338]: E0712 00:12:26.990640 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-bdc5bebc5f\" not found" Jul 12 00:12:27.091667 kubelet[2338]: E0712 00:12:27.091618 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-bdc5bebc5f\" not found" Jul 12 00:12:27.192651 kubelet[2338]: E0712 00:12:27.192593 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-bdc5bebc5f\" not found" Jul 12 00:12:27.293279 kubelet[2338]: E0712 00:12:27.293154 2338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-bdc5bebc5f\" not found" Jul 12 00:12:27.339142 kubelet[2338]: I0712 00:12:27.339078 2338 apiserver.go:52] "Watching apiserver" Jul 12 00:12:27.347971 kubelet[2338]: I0712 00:12:27.347882 2338 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:12:29.155712 systemd[1]: Reloading requested from client PID 2611 ('systemctl') (unit session-7.scope)... Jul 12 00:12:29.155733 systemd[1]: Reloading... Jul 12 00:12:29.252898 zram_generator::config[2651]: No configuration found. Jul 12 00:12:29.373255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:12:29.453893 systemd[1]: Reloading finished in 297 ms. Jul 12 00:12:29.482532 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:29.501976 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 12 00:12:29.503566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:12:29.509126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:29.650204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:12:29.662034 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:12:29.718535 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:12:29.718535 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:12:29.718535 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:12:29.719154 kubelet[2706]: I0712 00:12:29.718514 2706 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:12:29.732480 kubelet[2706]: I0712 00:12:29.732413 2706 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:12:29.732480 kubelet[2706]: I0712 00:12:29.732491 2706 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:12:29.732872 kubelet[2706]: I0712 00:12:29.732808 2706 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:12:29.734897 kubelet[2706]: I0712 00:12:29.734857 2706 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 12 00:12:29.737362 kubelet[2706]: I0712 00:12:29.737298 2706 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:12:29.742667 kubelet[2706]: E0712 00:12:29.742627 2706 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:12:29.743276 kubelet[2706]: I0712 00:12:29.742983 2706 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:12:29.747485 kubelet[2706]: I0712 00:12:29.747429 2706 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:12:29.748105 kubelet[2706]: I0712 00:12:29.748061 2706 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 12 00:12:29.748279 kubelet[2706]: I0712 00:12:29.748220 2706 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:12:29.748630 kubelet[2706]: I0712 00:12:29.748270 2706 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-n-bdc5bebc5f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 12 00:12:29.748763 kubelet[2706]: I0712 00:12:29.748638 2706 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:12:29.748763 kubelet[2706]: I0712 00:12:29.748658 2706 container_manager_linux.go:300] "Creating device plugin manager"
Jul 12 00:12:29.748763 kubelet[2706]: I0712 00:12:29.748717 2706 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:12:29.748964 kubelet[2706]: I0712 00:12:29.748921 2706 kubelet.go:408] "Attempting to sync node with API server"
Jul 12 00:12:29.748964 kubelet[2706]: I0712 00:12:29.748949 2706 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:12:29.749060 kubelet[2706]: I0712 00:12:29.748979 2706 kubelet.go:314] "Adding apiserver pod source"
Jul 12 00:12:29.749060 kubelet[2706]: I0712 00:12:29.749003 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:12:29.751503 kubelet[2706]: I0712 00:12:29.750236 2706 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 12 00:12:29.753201 kubelet[2706]: I0712 00:12:29.753173 2706 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:12:29.774490 kubelet[2706]: I0712 00:12:29.771228 2706 server.go:1274] "Started kubelet"
Jul 12 00:12:29.776501 kubelet[2706]: I0712 00:12:29.776479 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:12:29.784753 kubelet[2706]: I0712 00:12:29.784512 2706 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:12:29.785511 kubelet[2706]: I0712 00:12:29.785015 2706 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 12 00:12:29.785511 kubelet[2706]: I0712 00:12:29.785362 2706 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 12 00:12:29.785630 kubelet[2706]: I0712 00:12:29.785530 2706 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:12:29.787318 kubelet[2706]: I0712 00:12:29.787251 2706 server.go:449] "Adding debug handlers to kubelet server"
Jul 12 00:12:29.789336 kubelet[2706]: I0712 00:12:29.789283 2706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:12:29.789944 kubelet[2706]: I0712 00:12:29.789626 2706 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:12:29.790073 kubelet[2706]: I0712 00:12:29.790054 2706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:12:29.803823 kubelet[2706]: I0712 00:12:29.803758 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:12:29.807552 kubelet[2706]: I0712 00:12:29.807519 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:12:29.807552 kubelet[2706]: I0712 00:12:29.807550 2706 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 12 00:12:29.807780 kubelet[2706]: I0712 00:12:29.807570 2706 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 12 00:12:29.807780 kubelet[2706]: E0712 00:12:29.807613 2706 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:12:29.807955 kubelet[2706]: I0712 00:12:29.807921 2706 factory.go:221] Registration of the systemd container factory successfully
Jul 12 00:12:29.808389 kubelet[2706]: I0712 00:12:29.808121 2706 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:12:29.810615 kubelet[2706]: E0712 00:12:29.810593 2706 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 12 00:12:29.815939 kubelet[2706]: I0712 00:12:29.814476 2706 factory.go:221] Registration of the containerd container factory successfully
Jul 12 00:12:29.869651 kubelet[2706]: I0712 00:12:29.869596 2706 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 12 00:12:29.869651 kubelet[2706]: I0712 00:12:29.869617 2706 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 12 00:12:29.869651 kubelet[2706]: I0712 00:12:29.869651 2706 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:12:29.869952 kubelet[2706]: I0712 00:12:29.869890 2706 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 12 00:12:29.869952 kubelet[2706]: I0712 00:12:29.869904 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 12 00:12:29.869952 kubelet[2706]: I0712 00:12:29.869927 2706 policy_none.go:49] "None policy: Start"
Jul 12 00:12:29.871039 kubelet[2706]: I0712 00:12:29.871004 2706 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 12 00:12:29.871039 kubelet[2706]: I0712 00:12:29.871028 2706 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 00:12:29.871310 kubelet[2706]: I0712 00:12:29.871165 2706 state_mem.go:75] "Updated machine memory state"
Jul 12 00:12:29.872484 kubelet[2706]: I0712 00:12:29.872429 2706 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 00:12:29.873249 kubelet[2706]: I0712 00:12:29.873191 2706 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 00:12:29.873249 kubelet[2706]: I0712 00:12:29.873214 2706 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 00:12:29.876060 kubelet[2706]: I0712 00:12:29.875968 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 00:12:29.923536 kubelet[2706]: E0712 00:12:29.923477 2706 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:29.984680 kubelet[2706]: I0712 00:12:29.984572 2706 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:29.998616 kubelet[2706]: I0712 00:12:29.998524 2706 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:29.998616 kubelet[2706]: I0712 00:12:29.998611 2706 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.086430 kubelet[2706]: I0712 00:12:30.086343 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1372e655462367628d99fda6577a479c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"1372e655462367628d99fda6577a479c\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.086430 kubelet[2706]: I0712 00:12:30.086428 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.086731 kubelet[2706]: I0712 00:12:30.086471 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.086731 kubelet[2706]: I0712 00:12:30.086577 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.086731 kubelet[2706]: I0712 00:12:30.086610 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.086731 kubelet[2706]: I0712 00:12:30.086637 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a4a6cc2d0f47c484db0f332bbd727e7-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"6a4a6cc2d0f47c484db0f332bbd727e7\") " pod="kube-system/kube-scheduler-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.086731 kubelet[2706]: I0712 00:12:30.086658 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1372e655462367628d99fda6577a479c-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"1372e655462367628d99fda6577a479c\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.087073 kubelet[2706]: I0712 00:12:30.086679 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b01f32caecb0aa1eece5c9c1c11ea135-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"b01f32caecb0aa1eece5c9c1c11ea135\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.087073 kubelet[2706]: I0712 00:12:30.086705 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1372e655462367628d99fda6577a479c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-n-bdc5bebc5f\" (UID: \"1372e655462367628d99fda6577a479c\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.749718 kubelet[2706]: I0712 00:12:30.749377 2706 apiserver.go:52] "Watching apiserver"
Jul 12 00:12:30.786762 kubelet[2706]: I0712 00:12:30.786634 2706 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 12 00:12:30.839971 kubelet[2706]: E0712 00:12:30.839678 2706 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-4-n-bdc5bebc5f\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-4-n-bdc5bebc5f"
Jul 12 00:12:30.859497 kubelet[2706]: I0712 00:12:30.859380 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-n-bdc5bebc5f" podStartSLOduration=1.8593415260000001 podStartE2EDuration="1.859341526s" podCreationTimestamp="2025-07-12 00:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:30.859137038 +0000 UTC m=+1.191677689" watchObservedRunningTime="2025-07-12 00:12:30.859341526 +0000 UTC m=+1.191882177"
Jul 12 00:12:30.870470 kubelet[2706]: I0712 00:12:30.870108 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-bdc5bebc5f" podStartSLOduration=1.870088303 podStartE2EDuration="1.870088303s" podCreationTimestamp="2025-07-12 00:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:30.869999579 +0000 UTC m=+1.202540230" watchObservedRunningTime="2025-07-12 00:12:30.870088303 +0000 UTC m=+1.202628994"
Jul 12 00:12:30.884115 kubelet[2706]: I0712 00:12:30.884029 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-n-bdc5bebc5f" podStartSLOduration=1.884008855 podStartE2EDuration="1.884008855s" podCreationTimestamp="2025-07-12 00:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:30.883600078 +0000 UTC m=+1.216140729" watchObservedRunningTime="2025-07-12 00:12:30.884008855 +0000 UTC m=+1.216549546"
Jul 12 00:12:32.477266 update_engine[1580]: I20250712 00:12:32.477150 1580 update_attempter.cc:509] Updating boot flags...
Jul 12 00:12:32.528592 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2758)
Jul 12 00:12:32.642194 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2760)
Jul 12 00:12:33.878204 kubelet[2706]: I0712 00:12:33.878085 2706 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 12 00:12:33.879187 kubelet[2706]: I0712 00:12:33.878830 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 12 00:12:33.879278 containerd[1601]: time="2025-07-12T00:12:33.878536671Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 12 00:12:34.817506 kubelet[2706]: I0712 00:12:34.817435 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27v6\" (UniqueName: \"kubernetes.io/projected/76a64ee9-65a8-4132-a97f-91b22bdff3f7-kube-api-access-x27v6\") pod \"kube-proxy-4bzgt\" (UID: \"76a64ee9-65a8-4132-a97f-91b22bdff3f7\") " pod="kube-system/kube-proxy-4bzgt"
Jul 12 00:12:34.817506 kubelet[2706]: I0712 00:12:34.817493 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76a64ee9-65a8-4132-a97f-91b22bdff3f7-kube-proxy\") pod \"kube-proxy-4bzgt\" (UID: \"76a64ee9-65a8-4132-a97f-91b22bdff3f7\") " pod="kube-system/kube-proxy-4bzgt"
Jul 12 00:12:34.817506 kubelet[2706]: I0712 00:12:34.817514 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76a64ee9-65a8-4132-a97f-91b22bdff3f7-lib-modules\") pod \"kube-proxy-4bzgt\" (UID: \"76a64ee9-65a8-4132-a97f-91b22bdff3f7\") " pod="kube-system/kube-proxy-4bzgt"
Jul 12 00:12:34.817506 kubelet[2706]: I0712 00:12:34.817530 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76a64ee9-65a8-4132-a97f-91b22bdff3f7-xtables-lock\") pod \"kube-proxy-4bzgt\" (UID: \"76a64ee9-65a8-4132-a97f-91b22bdff3f7\") " pod="kube-system/kube-proxy-4bzgt"
Jul 12 00:12:35.051577 containerd[1601]: time="2025-07-12T00:12:35.050705381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bzgt,Uid:76a64ee9-65a8-4132-a97f-91b22bdff3f7,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:35.093484 containerd[1601]: time="2025-07-12T00:12:35.093212170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:35.093484 containerd[1601]: time="2025-07-12T00:12:35.093294373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:35.093484 containerd[1601]: time="2025-07-12T00:12:35.093317253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:35.093484 containerd[1601]: time="2025-07-12T00:12:35.093421977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:35.115058 systemd[1]: run-containerd-runc-k8s.io-608b6e5b200f07bbe6150f1acbe1ee3cf7b3e67e62d5666a622420c251185f57-runc.kTzjRd.mount: Deactivated successfully.
Jul 12 00:12:35.120755 kubelet[2706]: I0712 00:12:35.120715 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2acbce14-d6d1-4c0e-9cf9-883e24f56b36-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-jwgvd\" (UID: \"2acbce14-d6d1-4c0e-9cf9-883e24f56b36\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-jwgvd"
Jul 12 00:12:35.120755 kubelet[2706]: I0712 00:12:35.120764 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cmbm\" (UniqueName: \"kubernetes.io/projected/2acbce14-d6d1-4c0e-9cf9-883e24f56b36-kube-api-access-5cmbm\") pod \"tigera-operator-5bf8dfcb4-jwgvd\" (UID: \"2acbce14-d6d1-4c0e-9cf9-883e24f56b36\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-jwgvd"
Jul 12 00:12:35.147835 containerd[1601]: time="2025-07-12T00:12:35.147798491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bzgt,Uid:76a64ee9-65a8-4132-a97f-91b22bdff3f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"608b6e5b200f07bbe6150f1acbe1ee3cf7b3e67e62d5666a622420c251185f57\""
Jul 12 00:12:35.153868 containerd[1601]: time="2025-07-12T00:12:35.153708513Z" level=info msg="CreateContainer within sandbox \"608b6e5b200f07bbe6150f1acbe1ee3cf7b3e67e62d5666a622420c251185f57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 12 00:12:35.175100 containerd[1601]: time="2025-07-12T00:12:35.174930366Z" level=info msg="CreateContainer within sandbox \"608b6e5b200f07bbe6150f1acbe1ee3cf7b3e67e62d5666a622420c251185f57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b5e670b5c84d3a0fd503d97db1fb88082b644d97d366ffc25fd171801125570\""
Jul 12 00:12:35.178402 containerd[1601]: time="2025-07-12T00:12:35.176911347Z" level=info msg="StartContainer for \"0b5e670b5c84d3a0fd503d97db1fb88082b644d97d366ffc25fd171801125570\""
Jul 12 00:12:35.243348 containerd[1601]: time="2025-07-12T00:12:35.243086824Z" level=info msg="StartContainer for \"0b5e670b5c84d3a0fd503d97db1fb88082b644d97d366ffc25fd171801125570\" returns successfully"
Jul 12 00:12:35.366723 containerd[1601]: time="2025-07-12T00:12:35.366602627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-jwgvd,Uid:2acbce14-d6d1-4c0e-9cf9-883e24f56b36,Namespace:tigera-operator,Attempt:0,}"
Jul 12 00:12:35.405842 containerd[1601]: time="2025-07-12T00:12:35.404302068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:35.405842 containerd[1601]: time="2025-07-12T00:12:35.404377630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:35.405842 containerd[1601]: time="2025-07-12T00:12:35.404392551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:35.405842 containerd[1601]: time="2025-07-12T00:12:35.404505954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:35.458812 containerd[1601]: time="2025-07-12T00:12:35.458754064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-jwgvd,Uid:2acbce14-d6d1-4c0e-9cf9-883e24f56b36,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3770ca9b21a6da69131513a56fe09bd706c51ce13ab783d2c4f9203578f3957a\""
Jul 12 00:12:35.461432 containerd[1601]: time="2025-07-12T00:12:35.461402226Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 12 00:12:35.883467 kubelet[2706]: I0712 00:12:35.882797 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4bzgt" podStartSLOduration=1.882771199 podStartE2EDuration="1.882771199s" podCreationTimestamp="2025-07-12 00:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:35.882343306 +0000 UTC m=+6.214883997" watchObservedRunningTime="2025-07-12 00:12:35.882771199 +0000 UTC m=+6.215311890"
Jul 12 00:12:36.782824 systemd[1]: Started sshd@7-91.99.189.6:22-147.185.132.15:52375.service - OpenSSH per-connection server daemon (147.185.132.15:52375).
Jul 12 00:12:36.956151 sshd[3015]: Connection closed by 147.185.132.15 port 52375
Jul 12 00:12:36.958792 systemd[1]: sshd@7-91.99.189.6:22-147.185.132.15:52375.service: Deactivated successfully.
Jul 12 00:12:37.442365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566111814.mount: Deactivated successfully.
Jul 12 00:12:37.831416 containerd[1601]: time="2025-07-12T00:12:37.831349094Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:37.834679 containerd[1601]: time="2025-07-12T00:12:37.834627103Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 12 00:12:37.835501 containerd[1601]: time="2025-07-12T00:12:37.835119356Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:37.838369 containerd[1601]: time="2025-07-12T00:12:37.838313723Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:37.839379 containerd[1601]: time="2025-07-12T00:12:37.839255508Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.377724159s"
Jul 12 00:12:37.839379 containerd[1601]: time="2025-07-12T00:12:37.839292949Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 12 00:12:37.842790 containerd[1601]: time="2025-07-12T00:12:37.842736442Z" level=info msg="CreateContainer within sandbox \"3770ca9b21a6da69131513a56fe09bd706c51ce13ab783d2c4f9203578f3957a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 12 00:12:37.858154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1335622103.mount: Deactivated successfully.
Jul 12 00:12:37.861553 containerd[1601]: time="2025-07-12T00:12:37.861486310Z" level=info msg="CreateContainer within sandbox \"3770ca9b21a6da69131513a56fe09bd706c51ce13ab783d2c4f9203578f3957a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3d5b3614bf9565e7248e82a5c2bbfda9b5497f833e393bada881d539c58c043f\""
Jul 12 00:12:37.863402 containerd[1601]: time="2025-07-12T00:12:37.862506537Z" level=info msg="StartContainer for \"3d5b3614bf9565e7248e82a5c2bbfda9b5497f833e393bada881d539c58c043f\""
Jul 12 00:12:37.917420 containerd[1601]: time="2025-07-12T00:12:37.917293540Z" level=info msg="StartContainer for \"3d5b3614bf9565e7248e82a5c2bbfda9b5497f833e393bada881d539c58c043f\" returns successfully"
Jul 12 00:12:44.148795 sudo[1830]: pam_unix(sudo:session): session closed for user root
Jul 12 00:12:44.312146 sshd[1826]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:44.316414 systemd[1]: sshd@6-91.99.189.6:22-139.178.68.195:36306.service: Deactivated successfully.
Jul 12 00:12:44.322825 systemd[1]: session-7.scope: Deactivated successfully.
Jul 12 00:12:44.326844 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit.
Jul 12 00:12:44.332262 systemd-logind[1577]: Removed session 7.
Jul 12 00:12:51.521168 kubelet[2706]: I0712 00:12:51.521107 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-jwgvd" podStartSLOduration=14.141916155 podStartE2EDuration="16.521091075s" podCreationTimestamp="2025-07-12 00:12:35 +0000 UTC" firstStartedPulling="2025-07-12 00:12:35.460966892 +0000 UTC m=+5.793507503" lastFinishedPulling="2025-07-12 00:12:37.840141772 +0000 UTC m=+8.172682423" observedRunningTime="2025-07-12 00:12:38.877330241 +0000 UTC m=+9.209870972" watchObservedRunningTime="2025-07-12 00:12:51.521091075 +0000 UTC m=+21.853631726"
Jul 12 00:12:51.535650 kubelet[2706]: I0712 00:12:51.534669 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ab650127-3f68-4a9d-a340-7a2b40a9ce40-typha-certs\") pod \"calico-typha-5848bddd86-srhqf\" (UID: \"ab650127-3f68-4a9d-a340-7a2b40a9ce40\") " pod="calico-system/calico-typha-5848bddd86-srhqf"
Jul 12 00:12:51.535650 kubelet[2706]: I0712 00:12:51.534716 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab650127-3f68-4a9d-a340-7a2b40a9ce40-tigera-ca-bundle\") pod \"calico-typha-5848bddd86-srhqf\" (UID: \"ab650127-3f68-4a9d-a340-7a2b40a9ce40\") " pod="calico-system/calico-typha-5848bddd86-srhqf"
Jul 12 00:12:51.535650 kubelet[2706]: I0712 00:12:51.534737 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnlvx\" (UniqueName: \"kubernetes.io/projected/ab650127-3f68-4a9d-a340-7a2b40a9ce40-kube-api-access-cnlvx\") pod \"calico-typha-5848bddd86-srhqf\" (UID: \"ab650127-3f68-4a9d-a340-7a2b40a9ce40\") " pod="calico-system/calico-typha-5848bddd86-srhqf"
Jul 12 00:12:51.737488 kubelet[2706]: I0712 00:12:51.736249 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-lib-modules\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737488 kubelet[2706]: I0712 00:12:51.736293 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-var-run-calico\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737488 kubelet[2706]: I0712 00:12:51.736311 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b3d461a-4360-4cae-a8b2-464e793e4e15-tigera-ca-bundle\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737488 kubelet[2706]: I0712 00:12:51.736329 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0b3d461a-4360-4cae-a8b2-464e793e4e15-node-certs\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737488 kubelet[2706]: I0712 00:12:51.736345 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-policysync\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737725 kubelet[2706]: I0712 00:12:51.736360 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9z9\" (UniqueName: \"kubernetes.io/projected/0b3d461a-4360-4cae-a8b2-464e793e4e15-kube-api-access-hh9z9\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737725 kubelet[2706]: I0712 00:12:51.736377 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-cni-net-dir\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737725 kubelet[2706]: I0712 00:12:51.736392 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-flexvol-driver-host\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737725 kubelet[2706]: I0712 00:12:51.736406 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-cni-bin-dir\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737725 kubelet[2706]: I0712 00:12:51.736420 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-cni-log-dir\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.737868 kubelet[2706]: I0712 00:12:51.736435 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-var-lib-calico\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.738075 kubelet[2706]: I0712 00:12:51.736463 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b3d461a-4360-4cae-a8b2-464e793e4e15-xtables-lock\") pod \"calico-node-dhjn2\" (UID: \"0b3d461a-4360-4cae-a8b2-464e793e4e15\") " pod="calico-system/calico-node-dhjn2"
Jul 12 00:12:51.831108 containerd[1601]: time="2025-07-12T00:12:51.830998912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5848bddd86-srhqf,Uid:ab650127-3f68-4a9d-a340-7a2b40a9ce40,Namespace:calico-system,Attempt:0,}"
Jul 12 00:12:51.853407 kubelet[2706]: E0712 00:12:51.853361 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:51.853407 kubelet[2706]: W0712 00:12:51.853393 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:51.853631 kubelet[2706]: E0712 00:12:51.853420 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:51.856561 kubelet[2706]: E0712 00:12:51.856496 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2q67" podUID="bf0041c9-fdb6-4de6-99ec-d3644807d402"
Jul 12 00:12:51.878333 containerd[1601]: time="2025-07-12T00:12:51.877535382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:51.878333 containerd[1601]: time="2025-07-12T00:12:51.877594143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:51.878333 containerd[1601]: time="2025-07-12T00:12:51.877622303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:51.878333 containerd[1601]: time="2025-07-12T00:12:51.877710944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:51.908888 kubelet[2706]: E0712 00:12:51.908573 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:51.908888 kubelet[2706]: W0712 00:12:51.908607 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:51.908888 kubelet[2706]: E0712 00:12:51.908656 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:51.932535 kubelet[2706]: E0712 00:12:51.931694 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:51.932535 kubelet[2706]: W0712 00:12:51.932367 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:51.932535 kubelet[2706]: E0712 00:12:51.932404 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:51.932960 kubelet[2706]: E0712 00:12:51.932835 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:51.932960 kubelet[2706]: W0712 00:12:51.932849 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:51.932960 kubelet[2706]: E0712 00:12:51.932860 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:51.934158 kubelet[2706]: E0712 00:12:51.934122 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:51.934443 kubelet[2706]: W0712 00:12:51.934417 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:51.934494 kubelet[2706]: E0712 00:12:51.934484 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:51.936838 kubelet[2706]: E0712 00:12:51.936803 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:51.936838 kubelet[2706]: W0712 00:12:51.936830 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:51.936991 kubelet[2706]: E0712 00:12:51.936851 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 12 00:12:51.938352 kubelet[2706]: E0712 00:12:51.937783 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.938352 kubelet[2706]: W0712 00:12:51.938062 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.938352 kubelet[2706]: E0712 00:12:51.938088 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.939550 kubelet[2706]: E0712 00:12:51.939522 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.939550 kubelet[2706]: W0712 00:12:51.939545 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.939676 kubelet[2706]: E0712 00:12:51.939563 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.941374 kubelet[2706]: E0712 00:12:51.941349 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.941374 kubelet[2706]: W0712 00:12:51.941368 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.941374 kubelet[2706]: E0712 00:12:51.941384 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.942785 kubelet[2706]: E0712 00:12:51.942763 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.942785 kubelet[2706]: W0712 00:12:51.942782 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.942878 kubelet[2706]: E0712 00:12:51.942798 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.946372 kubelet[2706]: E0712 00:12:51.946327 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.946372 kubelet[2706]: W0712 00:12:51.946348 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.946499 kubelet[2706]: E0712 00:12:51.946416 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.946782 kubelet[2706]: E0712 00:12:51.946764 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.946846 kubelet[2706]: W0712 00:12:51.946815 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.946846 kubelet[2706]: E0712 00:12:51.946837 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.947491 kubelet[2706]: E0712 00:12:51.947470 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.947491 kubelet[2706]: W0712 00:12:51.947488 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.948244 kubelet[2706]: E0712 00:12:51.947902 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.948788 kubelet[2706]: E0712 00:12:51.948769 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.948788 kubelet[2706]: W0712 00:12:51.948785 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.948877 kubelet[2706]: E0712 00:12:51.948802 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.948877 kubelet[2706]: I0712 00:12:51.948829 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf0041c9-fdb6-4de6-99ec-d3644807d402-kubelet-dir\") pod \"csi-node-driver-f2q67\" (UID: \"bf0041c9-fdb6-4de6-99ec-d3644807d402\") " pod="calico-system/csi-node-driver-f2q67" Jul 12 00:12:51.950530 kubelet[2706]: E0712 00:12:51.950338 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.950530 kubelet[2706]: W0712 00:12:51.950525 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.951043 kubelet[2706]: E0712 00:12:51.950811 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.951043 kubelet[2706]: I0712 00:12:51.950850 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bf0041c9-fdb6-4de6-99ec-d3644807d402-registration-dir\") pod \"csi-node-driver-f2q67\" (UID: \"bf0041c9-fdb6-4de6-99ec-d3644807d402\") " pod="calico-system/csi-node-driver-f2q67" Jul 12 00:12:51.951437 kubelet[2706]: E0712 00:12:51.951417 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.951437 kubelet[2706]: W0712 00:12:51.951433 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.952640 kubelet[2706]: E0712 00:12:51.952540 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.953070 kubelet[2706]: E0712 00:12:51.953047 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.953070 kubelet[2706]: W0712 00:12:51.953064 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.953219 kubelet[2706]: E0712 00:12:51.953186 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.954839 kubelet[2706]: E0712 00:12:51.954813 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.954839 kubelet[2706]: W0712 00:12:51.954831 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.954996 kubelet[2706]: E0712 00:12:51.954944 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.955562 kubelet[2706]: E0712 00:12:51.955542 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.955562 kubelet[2706]: W0712 00:12:51.955559 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.956062 kubelet[2706]: E0712 00:12:51.956032 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.956552 kubelet[2706]: E0712 00:12:51.956534 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.956607 kubelet[2706]: W0712 00:12:51.956551 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.956607 kubelet[2706]: E0712 00:12:51.956569 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.957746 kubelet[2706]: E0712 00:12:51.957722 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.957746 kubelet[2706]: W0712 00:12:51.957743 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.957839 kubelet[2706]: E0712 00:12:51.957762 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.959300 kubelet[2706]: E0712 00:12:51.959271 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.959300 kubelet[2706]: W0712 00:12:51.959292 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.959372 kubelet[2706]: E0712 00:12:51.959307 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.960720 kubelet[2706]: E0712 00:12:51.960694 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.960720 kubelet[2706]: W0712 00:12:51.960716 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.960805 kubelet[2706]: E0712 00:12:51.960730 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.961283 kubelet[2706]: E0712 00:12:51.961254 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.961283 kubelet[2706]: W0712 00:12:51.961279 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.961344 kubelet[2706]: E0712 00:12:51.961293 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.962710 kubelet[2706]: E0712 00:12:51.962688 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.962710 kubelet[2706]: W0712 00:12:51.962707 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.962787 kubelet[2706]: E0712 00:12:51.962727 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.962971 kubelet[2706]: E0712 00:12:51.962949 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.962971 kubelet[2706]: W0712 00:12:51.962967 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.963028 kubelet[2706]: E0712 00:12:51.962978 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:51.963956 kubelet[2706]: E0712 00:12:51.963810 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.963956 kubelet[2706]: W0712 00:12:51.963831 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.963956 kubelet[2706]: E0712 00:12:51.963844 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:51.965616 kubelet[2706]: E0712 00:12:51.965595 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:51.965616 kubelet[2706]: W0712 00:12:51.965615 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:51.965691 kubelet[2706]: E0712 00:12:51.965627 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:52.023848 containerd[1601]: time="2025-07-12T00:12:52.023796090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dhjn2,Uid:0b3d461a-4360-4cae-a8b2-464e793e4e15,Namespace:calico-system,Attempt:0,}" Jul 12 00:12:52.052428 kubelet[2706]: E0712 00:12:52.052244 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.052428 kubelet[2706]: W0712 00:12:52.052273 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.052428 kubelet[2706]: E0712 00:12:52.052294 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:52.056209 containerd[1601]: time="2025-07-12T00:12:52.055879219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5848bddd86-srhqf,Uid:ab650127-3f68-4a9d-a340-7a2b40a9ce40,Namespace:calico-system,Attempt:0,} returns sandbox id \"783547d9801c7cfa19e96b10a6f35e2d8df01faad92fa26bc6af34ae610fa5ff\"" Jul 12 00:12:52.056308 kubelet[2706]: E0712 00:12:52.056058 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.056308 kubelet[2706]: W0712 00:12:52.056073 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.056308 kubelet[2706]: E0712 00:12:52.056094 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:52.058051 kubelet[2706]: E0712 00:12:52.057651 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.058051 kubelet[2706]: W0712 00:12:52.057666 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.058051 kubelet[2706]: E0712 00:12:52.057969 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:52.058051 kubelet[2706]: I0712 00:12:52.058005 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bf0041c9-fdb6-4de6-99ec-d3644807d402-varrun\") pod \"csi-node-driver-f2q67\" (UID: \"bf0041c9-fdb6-4de6-99ec-d3644807d402\") " pod="calico-system/csi-node-driver-f2q67" Jul 12 00:12:52.059423 kubelet[2706]: E0712 00:12:52.059236 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.059423 kubelet[2706]: W0712 00:12:52.059251 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.059871 kubelet[2706]: E0712 00:12:52.059823 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:52.059933 containerd[1601]: time="2025-07-12T00:12:52.059628538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:12:52.060592 kubelet[2706]: E0712 00:12:52.060103 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.060592 kubelet[2706]: W0712 00:12:52.060117 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.060592 kubelet[2706]: E0712 00:12:52.060559 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:52.065487 kubelet[2706]: E0712 00:12:52.065193 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.065487 kubelet[2706]: W0712 00:12:52.065212 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.066023 kubelet[2706]: E0712 00:12:52.065807 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:52.066023 kubelet[2706]: I0712 00:12:52.065843 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bf0041c9-fdb6-4de6-99ec-d3644807d402-socket-dir\") pod \"csi-node-driver-f2q67\" (UID: \"bf0041c9-fdb6-4de6-99ec-d3644807d402\") " pod="calico-system/csi-node-driver-f2q67" Jul 12 00:12:52.068550 kubelet[2706]: E0712 00:12:52.066649 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.068550 kubelet[2706]: W0712 00:12:52.066730 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.069035 kubelet[2706]: E0712 00:12:52.068786 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:52.070431 kubelet[2706]: E0712 00:12:52.070071 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.070431 kubelet[2706]: W0712 00:12:52.070092 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.070431 kubelet[2706]: E0712 00:12:52.070182 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:52.070431 kubelet[2706]: E0712 00:12:52.070342 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.070431 kubelet[2706]: W0712 00:12:52.070349 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.070734 kubelet[2706]: E0712 00:12:52.070603 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:52.070833 kubelet[2706]: E0712 00:12:52.070822 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.070881 kubelet[2706]: W0712 00:12:52.070871 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.071233 kubelet[2706]: E0712 00:12:52.071208 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:52.071624 kubelet[2706]: I0712 00:12:52.071605 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4qn\" (UniqueName: \"kubernetes.io/projected/bf0041c9-fdb6-4de6-99ec-d3644807d402-kube-api-access-xk4qn\") pod \"csi-node-driver-f2q67\" (UID: \"bf0041c9-fdb6-4de6-99ec-d3644807d402\") " pod="calico-system/csi-node-driver-f2q67" Jul 12 00:12:52.072656 kubelet[2706]: E0712 00:12:52.072438 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.072656 kubelet[2706]: W0712 00:12:52.072463 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.072656 kubelet[2706]: E0712 00:12:52.072565 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:52.073020 kubelet[2706]: E0712 00:12:52.072814 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.073020 kubelet[2706]: W0712 00:12:52.072824 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.073020 kubelet[2706]: E0712 00:12:52.072898 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:52.073803 kubelet[2706]: E0712 00:12:52.073310 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.073803 kubelet[2706]: W0712 00:12:52.073323 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.073803 kubelet[2706]: E0712 00:12:52.073578 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:52.074683 kubelet[2706]: E0712 00:12:52.074668 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.074785 kubelet[2706]: W0712 00:12:52.074773 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.076128 kubelet[2706]: E0712 00:12:52.075576 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:52.076128 kubelet[2706]: E0712 00:12:52.075718 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:52.076128 kubelet[2706]: W0712 00:12:52.075725 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:52.076128 kubelet[2706]: E0712 00:12:52.075789 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 12 00:12:52.076128 kubelet[2706]: E0712 00:12:52.075879 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.076128 kubelet[2706]: W0712 00:12:52.075886 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.076128 kubelet[2706]: E0712 00:12:52.075960 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.076688 kubelet[2706]: E0712 00:12:52.076584 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.076688 kubelet[2706]: W0712 00:12:52.076597 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.077624 kubelet[2706]: E0712 00:12:52.076793 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.077624 kubelet[2706]: E0712 00:12:52.077532 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.077624 kubelet[2706]: W0712 00:12:52.077543 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.077624 kubelet[2706]: E0712 00:12:52.077554 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.078542 kubelet[2706]: E0712 00:12:52.078527 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.078620 kubelet[2706]: W0712 00:12:52.078608 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.078679 kubelet[2706]: E0712 00:12:52.078669 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.085052 containerd[1601]: time="2025-07-12T00:12:52.083854827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:52.085052 containerd[1601]: time="2025-07-12T00:12:52.084391152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:52.085052 containerd[1601]: time="2025-07-12T00:12:52.084403673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:52.111399 containerd[1601]: time="2025-07-12T00:12:52.087560625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:52.150955 containerd[1601]: time="2025-07-12T00:12:52.150728114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dhjn2,Uid:0b3d461a-4360-4cae-a8b2-464e793e4e15,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba4cd5623c56b0c823daff26535a86bae3498addb6d55ff231d976b7810c96cb\""
Jul 12 00:12:52.181009 kubelet[2706]: E0712 00:12:52.180703 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.181009 kubelet[2706]: W0712 00:12:52.180740 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.181009 kubelet[2706]: E0712 00:12:52.180760 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.181009 kubelet[2706]: E0712 00:12:52.181052 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.181009 kubelet[2706]: W0712 00:12:52.181062 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.181354 kubelet[2706]: E0712 00:12:52.181126 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.181774 kubelet[2706]: E0712 00:12:52.181588 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.181774 kubelet[2706]: W0712 00:12:52.181600 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.181774 kubelet[2706]: E0712 00:12:52.181633 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.182339 kubelet[2706]: E0712 00:12:52.182122 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.182339 kubelet[2706]: W0712 00:12:52.182244 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.182339 kubelet[2706]: E0712 00:12:52.182260 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.182592 kubelet[2706]: E0712 00:12:52.182557 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.182762 kubelet[2706]: W0712 00:12:52.182650 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.182762 kubelet[2706]: E0712 00:12:52.182745 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.183130 kubelet[2706]: E0712 00:12:52.183023 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.183130 kubelet[2706]: W0712 00:12:52.183041 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.183130 kubelet[2706]: E0712 00:12:52.183110 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.183420 kubelet[2706]: E0712 00:12:52.183395 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.183420 kubelet[2706]: W0712 00:12:52.183404 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.183654 kubelet[2706]: E0712 00:12:52.183526 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.183877 kubelet[2706]: E0712 00:12:52.183819 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.183877 kubelet[2706]: W0712 00:12:52.183832 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.183877 kubelet[2706]: E0712 00:12:52.183846 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.184434 kubelet[2706]: E0712 00:12:52.184282 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.184434 kubelet[2706]: W0712 00:12:52.184293 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.184434 kubelet[2706]: E0712 00:12:52.184306 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.184731 kubelet[2706]: E0712 00:12:52.184613 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.184731 kubelet[2706]: W0712 00:12:52.184623 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.184731 kubelet[2706]: E0712 00:12:52.184633 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.185151 kubelet[2706]: E0712 00:12:52.185017 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.185151 kubelet[2706]: W0712 00:12:52.185027 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.185151 kubelet[2706]: E0712 00:12:52.185049 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.185461 kubelet[2706]: E0712 00:12:52.185371 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.185461 kubelet[2706]: W0712 00:12:52.185382 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.185461 kubelet[2706]: E0712 00:12:52.185401 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.185867 kubelet[2706]: E0712 00:12:52.185754 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.185867 kubelet[2706]: W0712 00:12:52.185766 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.185867 kubelet[2706]: E0712 00:12:52.185779 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.186410 kubelet[2706]: E0712 00:12:52.186293 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.186410 kubelet[2706]: W0712 00:12:52.186303 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.186410 kubelet[2706]: E0712 00:12:52.186313 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.186716 kubelet[2706]: E0712 00:12:52.186661 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.186716 kubelet[2706]: W0712 00:12:52.186672 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.186716 kubelet[2706]: E0712 00:12:52.186683 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:52.205185 kubelet[2706]: E0712 00:12:52.205090 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:52.205185 kubelet[2706]: W0712 00:12:52.205116 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:52.205185 kubelet[2706]: E0712 00:12:52.205140 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:53.537729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035522202.mount: Deactivated successfully.
Jul 12 00:12:53.808870 kubelet[2706]: E0712 00:12:53.808671 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2q67" podUID="bf0041c9-fdb6-4de6-99ec-d3644807d402"
Jul 12 00:12:54.659226 containerd[1601]: time="2025-07-12T00:12:54.659165233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:54.661573 containerd[1601]: time="2025-07-12T00:12:54.661014090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 12 00:12:54.663373 containerd[1601]: time="2025-07-12T00:12:54.663343711Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:54.666871 containerd[1601]: time="2025-07-12T00:12:54.666632261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:54.668724 containerd[1601]: time="2025-07-12T00:12:54.668685439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.609021261s"
Jul 12 00:12:54.669106 containerd[1601]: time="2025-07-12T00:12:54.668726400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 12 00:12:54.671136 containerd[1601]: time="2025-07-12T00:12:54.671012500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 12 00:12:54.686992 containerd[1601]: time="2025-07-12T00:12:54.686950604Z" level=info msg="CreateContainer within sandbox \"783547d9801c7cfa19e96b10a6f35e2d8df01faad92fa26bc6af34ae610fa5ff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 12 00:12:54.706042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293199282.mount: Deactivated successfully.
Jul 12 00:12:54.707416 containerd[1601]: time="2025-07-12T00:12:54.707370589Z" level=info msg="CreateContainer within sandbox \"783547d9801c7cfa19e96b10a6f35e2d8df01faad92fa26bc6af34ae610fa5ff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b77bbb8ae64b777b750f38485bff97e271a91a28d755c7cfc4c4d53b14b78351\""
Jul 12 00:12:54.709288 containerd[1601]: time="2025-07-12T00:12:54.709198085Z" level=info msg="StartContainer for \"b77bbb8ae64b777b750f38485bff97e271a91a28d755c7cfc4c4d53b14b78351\""
Jul 12 00:12:54.784053 containerd[1601]: time="2025-07-12T00:12:54.783950520Z" level=info msg="StartContainer for \"b77bbb8ae64b777b750f38485bff97e271a91a28d755c7cfc4c4d53b14b78351\" returns successfully"
Jul 12 00:12:54.949066 kubelet[2706]: I0712 00:12:54.947942 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5848bddd86-srhqf" podStartSLOduration=1.336239676 podStartE2EDuration="3.947926202s" podCreationTimestamp="2025-07-12 00:12:51 +0000 UTC" firstStartedPulling="2025-07-12 00:12:52.058545607 +0000 UTC m=+22.391086258" lastFinishedPulling="2025-07-12 00:12:54.670232133 +0000 UTC m=+25.002772784" observedRunningTime="2025-07-12 00:12:54.947367957 +0000 UTC m=+25.279908688" watchObservedRunningTime="2025-07-12 00:12:54.947926202 +0000 UTC m=+25.280466853"
Jul 12 00:12:54.989726 kubelet[2706]: E0712 00:12:54.987123 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.989726 kubelet[2706]: W0712 00:12:54.987156 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.989726 kubelet[2706]: E0712 00:12:54.987204 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.990529 kubelet[2706]: E0712 00:12:54.990245 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.990529 kubelet[2706]: W0712 00:12:54.990294 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.990529 kubelet[2706]: E0712 00:12:54.990322 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.992698 kubelet[2706]: E0712 00:12:54.990963 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.992698 kubelet[2706]: W0712 00:12:54.991009 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.992698 kubelet[2706]: E0712 00:12:54.991031 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.993279 kubelet[2706]: E0712 00:12:54.993105 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.993279 kubelet[2706]: W0712 00:12:54.993120 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.993279 kubelet[2706]: E0712 00:12:54.993133 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.993680 kubelet[2706]: E0712 00:12:54.993599 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.993680 kubelet[2706]: W0712 00:12:54.993612 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.993680 kubelet[2706]: E0712 00:12:54.993638 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.994208 kubelet[2706]: E0712 00:12:54.994128 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.994208 kubelet[2706]: W0712 00:12:54.994142 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.994208 kubelet[2706]: E0712 00:12:54.994153 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.994865 kubelet[2706]: E0712 00:12:54.994721 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.994865 kubelet[2706]: W0712 00:12:54.994734 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.994865 kubelet[2706]: E0712 00:12:54.994746 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.996873 kubelet[2706]: E0712 00:12:54.995845 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.996873 kubelet[2706]: W0712 00:12:54.995898 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.996873 kubelet[2706]: E0712 00:12:54.995912 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.997562 kubelet[2706]: E0712 00:12:54.997341 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.997562 kubelet[2706]: W0712 00:12:54.997386 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.997562 kubelet[2706]: E0712 00:12:54.997400 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.997941 kubelet[2706]: E0712 00:12:54.997726 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.997941 kubelet[2706]: W0712 00:12:54.997739 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.997941 kubelet[2706]: E0712 00:12:54.997750 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.999108 kubelet[2706]: E0712 00:12:54.998437 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.999108 kubelet[2706]: W0712 00:12:54.998461 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.999108 kubelet[2706]: E0712 00:12:54.998473 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:54.999777 kubelet[2706]: E0712 00:12:54.999374 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:54.999777 kubelet[2706]: W0712 00:12:54.999386 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:54.999777 kubelet[2706]: E0712 00:12:54.999398 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.000771 kubelet[2706]: E0712 00:12:55.000604 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.000771 kubelet[2706]: W0712 00:12:55.000619 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.000771 kubelet[2706]: E0712 00:12:55.000630 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.003511 kubelet[2706]: E0712 00:12:55.000940 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.003511 kubelet[2706]: W0712 00:12:55.000953 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.003511 kubelet[2706]: E0712 00:12:55.000965 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.003841 kubelet[2706]: E0712 00:12:55.003725 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.003841 kubelet[2706]: W0712 00:12:55.003739 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.003841 kubelet[2706]: E0712 00:12:55.003751 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.004257 kubelet[2706]: E0712 00:12:55.004142 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.004257 kubelet[2706]: W0712 00:12:55.004154 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.004257 kubelet[2706]: E0712 00:12:55.004164 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.004616 kubelet[2706]: E0712 00:12:55.004489 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.004616 kubelet[2706]: W0712 00:12:55.004501 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.004616 kubelet[2706]: E0712 00:12:55.004520 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.004959 kubelet[2706]: E0712 00:12:55.004837 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.004959 kubelet[2706]: W0712 00:12:55.004848 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.004959 kubelet[2706]: E0712 00:12:55.004871 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.005248 kubelet[2706]: E0712 00:12:55.005227 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.005248 kubelet[2706]: W0712 00:12:55.005247 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.005345 kubelet[2706]: E0712 00:12:55.005265 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.007633 kubelet[2706]: E0712 00:12:55.007613 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.007633 kubelet[2706]: W0712 00:12:55.007629 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.007849 kubelet[2706]: E0712 00:12:55.007646 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.007954 kubelet[2706]: E0712 00:12:55.007938 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.007954 kubelet[2706]: W0712 00:12:55.007950 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.008081 kubelet[2706]: E0712 00:12:55.008005 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.008126 kubelet[2706]: E0712 00:12:55.008107 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.008126 kubelet[2706]: W0712 00:12:55.008120 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.008245 kubelet[2706]: E0712 00:12:55.008178 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.010641 kubelet[2706]: E0712 00:12:55.010620 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.010641 kubelet[2706]: W0712 00:12:55.010640 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.010833 kubelet[2706]: E0712 00:12:55.010753 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.012569 kubelet[2706]: E0712 00:12:55.012545 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.012569 kubelet[2706]: W0712 00:12:55.012565 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.013010 kubelet[2706]: E0712 00:12:55.012643 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.013218 kubelet[2706]: E0712 00:12:55.013187 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.013218 kubelet[2706]: W0712 00:12:55.013206 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.013366 kubelet[2706]: E0712 00:12:55.013341 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.015572 kubelet[2706]: E0712 00:12:55.015546 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.015572 kubelet[2706]: W0712 00:12:55.015567 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.015850 kubelet[2706]: E0712 00:12:55.015755 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.017050 kubelet[2706]: E0712 00:12:55.016476 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.017050 kubelet[2706]: W0712 00:12:55.016495 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.017050 kubelet[2706]: E0712 00:12:55.018535 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.017050 kubelet[2706]: W0712 00:12:55.018554 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.019697 kubelet[2706]: E0712 00:12:55.019669 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.019697 kubelet[2706]: W0712 00:12:55.019687 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.019782 kubelet[2706]: E0712 00:12:55.019702 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.020596 kubelet[2706]: E0712 00:12:55.020071 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.020596 kubelet[2706]: E0712 00:12:55.020099 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:12:55.020717 kubelet[2706]: E0712 00:12:55.020636 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:12:55.020717 kubelet[2706]: W0712 00:12:55.020647 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:12:55.020717 kubelet[2706]: E0712 00:12:55.020673 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 12 00:12:55.021858 kubelet[2706]: E0712 00:12:55.021758 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:55.021858 kubelet[2706]: W0712 00:12:55.021773 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:55.021858 kubelet[2706]: E0712 00:12:55.021796 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:55.023388 kubelet[2706]: E0712 00:12:55.023229 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:55.023388 kubelet[2706]: W0712 00:12:55.023245 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:55.023388 kubelet[2706]: E0712 00:12:55.023270 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:55.023621 kubelet[2706]: E0712 00:12:55.023530 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:55.023621 kubelet[2706]: W0712 00:12:55.023540 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:55.023621 kubelet[2706]: E0712 00:12:55.023551 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:55.809429 kubelet[2706]: E0712 00:12:55.809371 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2q67" podUID="bf0041c9-fdb6-4de6-99ec-d3644807d402" Jul 12 00:12:55.930311 kubelet[2706]: I0712 00:12:55.930275 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:12:56.011975 kubelet[2706]: E0712 00:12:56.011844 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.011975 kubelet[2706]: W0712 00:12:56.011935 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.012533 kubelet[2706]: E0712 00:12:56.011984 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.014090 kubelet[2706]: E0712 00:12:56.014048 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.014161 kubelet[2706]: W0712 00:12:56.014103 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.014161 kubelet[2706]: E0712 00:12:56.014130 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.015593 kubelet[2706]: E0712 00:12:56.015565 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.015593 kubelet[2706]: W0712 00:12:56.015584 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.015593 kubelet[2706]: E0712 00:12:56.015598 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.017439 kubelet[2706]: E0712 00:12:56.015924 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.017439 kubelet[2706]: W0712 00:12:56.015941 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.017439 kubelet[2706]: E0712 00:12:56.015954 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.017439 kubelet[2706]: E0712 00:12:56.016203 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.017439 kubelet[2706]: W0712 00:12:56.016216 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.017439 kubelet[2706]: E0712 00:12:56.016244 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.017439 kubelet[2706]: E0712 00:12:56.016428 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.017439 kubelet[2706]: W0712 00:12:56.016437 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.017439 kubelet[2706]: E0712 00:12:56.016447 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.017439 kubelet[2706]: E0712 00:12:56.016666 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018070 kubelet[2706]: W0712 00:12:56.016675 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018070 kubelet[2706]: E0712 00:12:56.016686 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.018070 kubelet[2706]: E0712 00:12:56.016861 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018070 kubelet[2706]: W0712 00:12:56.016870 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018070 kubelet[2706]: E0712 00:12:56.016909 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.018070 kubelet[2706]: E0712 00:12:56.017097 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018070 kubelet[2706]: W0712 00:12:56.017119 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018070 kubelet[2706]: E0712 00:12:56.017131 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.018070 kubelet[2706]: E0712 00:12:56.017295 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018070 kubelet[2706]: W0712 00:12:56.017303 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018413 kubelet[2706]: E0712 00:12:56.017313 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.018413 kubelet[2706]: E0712 00:12:56.017536 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018413 kubelet[2706]: W0712 00:12:56.017562 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018413 kubelet[2706]: E0712 00:12:56.017576 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.018413 kubelet[2706]: E0712 00:12:56.017769 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018413 kubelet[2706]: W0712 00:12:56.017778 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018413 kubelet[2706]: E0712 00:12:56.017800 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.018413 kubelet[2706]: E0712 00:12:56.018101 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018413 kubelet[2706]: W0712 00:12:56.018114 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018413 kubelet[2706]: E0712 00:12:56.018125 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.018930 kubelet[2706]: E0712 00:12:56.018300 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018930 kubelet[2706]: W0712 00:12:56.018326 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018930 kubelet[2706]: E0712 00:12:56.018336 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.018930 kubelet[2706]: E0712 00:12:56.018536 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.018930 kubelet[2706]: W0712 00:12:56.018545 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.018930 kubelet[2706]: E0712 00:12:56.018555 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.019927 kubelet[2706]: E0712 00:12:56.019870 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.019927 kubelet[2706]: W0712 00:12:56.019921 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.020026 kubelet[2706]: E0712 00:12:56.019934 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.020474 kubelet[2706]: E0712 00:12:56.020429 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.020474 kubelet[2706]: W0712 00:12:56.020445 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.020474 kubelet[2706]: E0712 00:12:56.020473 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.021227 kubelet[2706]: E0712 00:12:56.021192 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.021227 kubelet[2706]: W0712 00:12:56.021208 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.021227 kubelet[2706]: E0712 00:12:56.021220 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.022554 kubelet[2706]: E0712 00:12:56.021441 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.022554 kubelet[2706]: W0712 00:12:56.021508 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.022554 kubelet[2706]: E0712 00:12:56.021592 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.022554 kubelet[2706]: E0712 00:12:56.021708 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.022554 kubelet[2706]: W0712 00:12:56.021715 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.022554 kubelet[2706]: E0712 00:12:56.021788 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.022554 kubelet[2706]: E0712 00:12:56.021948 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.022554 kubelet[2706]: W0712 00:12:56.021958 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.022554 kubelet[2706]: E0712 00:12:56.021970 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.022554 kubelet[2706]: E0712 00:12:56.022151 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.022760 kubelet[2706]: W0712 00:12:56.022159 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.022760 kubelet[2706]: E0712 00:12:56.022186 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.022760 kubelet[2706]: E0712 00:12:56.022337 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.022760 kubelet[2706]: W0712 00:12:56.022346 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.022760 kubelet[2706]: E0712 00:12:56.022363 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.022760 kubelet[2706]: E0712 00:12:56.022579 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.022760 kubelet[2706]: W0712 00:12:56.022588 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.022760 kubelet[2706]: E0712 00:12:56.022605 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.022937 kubelet[2706]: E0712 00:12:56.022912 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.022937 kubelet[2706]: W0712 00:12:56.022922 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.023254 kubelet[2706]: E0712 00:12:56.022932 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.023254 kubelet[2706]: E0712 00:12:56.023167 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.023254 kubelet[2706]: W0712 00:12:56.023188 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.023911 kubelet[2706]: E0712 00:12:56.023205 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.024053 kubelet[2706]: E0712 00:12:56.024006 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.024053 kubelet[2706]: W0712 00:12:56.024023 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.024053 kubelet[2706]: E0712 00:12:56.024037 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.025699 kubelet[2706]: E0712 00:12:56.024661 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.025699 kubelet[2706]: W0712 00:12:56.024687 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.025699 kubelet[2706]: E0712 00:12:56.024723 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.025699 kubelet[2706]: E0712 00:12:56.024859 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.025699 kubelet[2706]: W0712 00:12:56.024868 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.025699 kubelet[2706]: E0712 00:12:56.024905 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.025699 kubelet[2706]: E0712 00:12:56.025075 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.025699 kubelet[2706]: W0712 00:12:56.025084 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.025699 kubelet[2706]: E0712 00:12:56.025110 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.025699 kubelet[2706]: E0712 00:12:56.025308 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.026672 kubelet[2706]: W0712 00:12:56.025316 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.026672 kubelet[2706]: E0712 00:12:56.025334 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:12:56.026672 kubelet[2706]: E0712 00:12:56.025664 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.026672 kubelet[2706]: W0712 00:12:56.025674 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.026672 kubelet[2706]: E0712 00:12:56.025685 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:12:56.026672 kubelet[2706]: E0712 00:12:56.026071 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:12:56.026672 kubelet[2706]: W0712 00:12:56.026081 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:12:56.026672 kubelet[2706]: E0712 00:12:56.026090 2706 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 12 00:12:56.494941 containerd[1601]: time="2025-07-12T00:12:56.493697102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:56.494941 containerd[1601]: time="2025-07-12T00:12:56.494738350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981"
Jul 12 00:12:56.495672 containerd[1601]: time="2025-07-12T00:12:56.495633957Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:56.498839 containerd[1601]: time="2025-07-12T00:12:56.498792182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:56.499640 containerd[1601]: time="2025-07-12T00:12:56.499598708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.828529487s"
Jul 12 00:12:56.499762 containerd[1601]: time="2025-07-12T00:12:56.499740030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Jul 12 00:12:56.502631 containerd[1601]: time="2025-07-12T00:12:56.502598932Z" level=info msg="CreateContainer within sandbox \"ba4cd5623c56b0c823daff26535a86bae3498addb6d55ff231d976b7810c96cb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 12 00:12:56.519561 containerd[1601]: time="2025-07-12T00:12:56.519503186Z" level=info msg="CreateContainer within sandbox \"ba4cd5623c56b0c823daff26535a86bae3498addb6d55ff231d976b7810c96cb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d9ec936f5fd60ec69593c4863848d68b3481889b580797926d7d8930fad4e078\""
Jul 12 00:12:56.522167 containerd[1601]: time="2025-07-12T00:12:56.522116367Z" level=info msg="StartContainer for \"d9ec936f5fd60ec69593c4863848d68b3481889b580797926d7d8930fad4e078\""
Jul 12 00:12:56.589391 containerd[1601]: time="2025-07-12T00:12:56.589086939Z" level=info msg="StartContainer for \"d9ec936f5fd60ec69593c4863848d68b3481889b580797926d7d8930fad4e078\" returns successfully"
Jul 12 00:12:56.682098 systemd[1]: run-containerd-runc-k8s.io-d9ec936f5fd60ec69593c4863848d68b3481889b580797926d7d8930fad4e078-runc.QXtYZu.mount: Deactivated successfully.
Jul 12 00:12:56.683348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9ec936f5fd60ec69593c4863848d68b3481889b580797926d7d8930fad4e078-rootfs.mount: Deactivated successfully.
Jul 12 00:12:56.741424 containerd[1601]: time="2025-07-12T00:12:56.741189026Z" level=info msg="shim disconnected" id=d9ec936f5fd60ec69593c4863848d68b3481889b580797926d7d8930fad4e078 namespace=k8s.io
Jul 12 00:12:56.741424 containerd[1601]: time="2025-07-12T00:12:56.741336148Z" level=warning msg="cleaning up after shim disconnected" id=d9ec936f5fd60ec69593c4863848d68b3481889b580797926d7d8930fad4e078 namespace=k8s.io
Jul 12 00:12:56.741424 containerd[1601]: time="2025-07-12T00:12:56.741349988Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:12:56.936599 containerd[1601]: time="2025-07-12T00:12:56.936403376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 12 00:12:57.808564 kubelet[2706]: E0712 00:12:57.808477 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2q67" podUID="bf0041c9-fdb6-4de6-99ec-d3644807d402"
Jul 12 00:12:59.170204 containerd[1601]: time="2025-07-12T00:12:59.170119816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:59.171475 containerd[1601]: time="2025-07-12T00:12:59.171409664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Jul 12 00:12:59.172201 containerd[1601]: time="2025-07-12T00:12:59.172129269Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:59.175302 containerd[1601]: time="2025-07-12T00:12:59.175246969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:59.176367 containerd[1601]: time="2025-07-12T00:12:59.176250896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.239806279s"
Jul 12 00:12:59.176367 containerd[1601]: time="2025-07-12T00:12:59.176283176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Jul 12 00:12:59.179796 containerd[1601]: time="2025-07-12T00:12:59.179753199Z" level=info msg="CreateContainer within sandbox \"ba4cd5623c56b0c823daff26535a86bae3498addb6d55ff231d976b7810c96cb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 12 00:12:59.197896 containerd[1601]: time="2025-07-12T00:12:59.197797157Z" level=info msg="CreateContainer within sandbox \"ba4cd5623c56b0c823daff26535a86bae3498addb6d55ff231d976b7810c96cb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c44c4548ef5cf6a2cfafb390dc0698cc73ee0c87745acd1ed1fd4eb93060f050\""
Jul 12 00:12:59.199522 containerd[1601]: time="2025-07-12T00:12:59.198987245Z" level=info msg="StartContainer for \"c44c4548ef5cf6a2cfafb390dc0698cc73ee0c87745acd1ed1fd4eb93060f050\""
Jul 12 00:12:59.234488 systemd[1]: run-containerd-runc-k8s.io-c44c4548ef5cf6a2cfafb390dc0698cc73ee0c87745acd1ed1fd4eb93060f050-runc.loSUfk.mount: Deactivated successfully.
Jul 12 00:12:59.268460 containerd[1601]: time="2025-07-12T00:12:59.268378819Z" level=info msg="StartContainer for \"c44c4548ef5cf6a2cfafb390dc0698cc73ee0c87745acd1ed1fd4eb93060f050\" returns successfully"
Jul 12 00:12:59.774130 containerd[1601]: time="2025-07-12T00:12:59.773983326Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:12:59.801099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c44c4548ef5cf6a2cfafb390dc0698cc73ee0c87745acd1ed1fd4eb93060f050-rootfs.mount: Deactivated successfully.
Jul 12 00:12:59.810623 kubelet[2706]: E0712 00:12:59.810387 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2q67" podUID="bf0041c9-fdb6-4de6-99ec-d3644807d402"
Jul 12 00:12:59.843272 kubelet[2706]: I0712 00:12:59.842234 2706 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 12 00:12:59.897475 containerd[1601]: time="2025-07-12T00:12:59.893212466Z" level=info msg="shim disconnected" id=c44c4548ef5cf6a2cfafb390dc0698cc73ee0c87745acd1ed1fd4eb93060f050 namespace=k8s.io
Jul 12 00:12:59.897475 containerd[1601]: time="2025-07-12T00:12:59.893269946Z" level=warning msg="cleaning up after shim disconnected" id=c44c4548ef5cf6a2cfafb390dc0698cc73ee0c87745acd1ed1fd4eb93060f050 namespace=k8s.io
Jul 12 00:12:59.897475 containerd[1601]: time="2025-07-12T00:12:59.893279627Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:12:59.945125 containerd[1601]: time="2025-07-12T00:12:59.945089045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 12 00:13:00.051386 kubelet[2706]: I0712 00:13:00.051203 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw8rs\" (UniqueName: \"kubernetes.io/projected/b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832-kube-api-access-lw8rs\") pod \"coredns-7c65d6cfc9-4txl7\" (UID: \"b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832\") " pod="kube-system/coredns-7c65d6cfc9-4txl7"
Jul 12 00:13:00.051386 kubelet[2706]: I0712 00:13:00.051254 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72drp\" (UniqueName: \"kubernetes.io/projected/93a79160-6c7d-4ebd-95ed-d9d607047420-kube-api-access-72drp\") pod \"calico-apiserver-b654b5ccd-k7qq8\" (UID: \"93a79160-6c7d-4ebd-95ed-d9d607047420\") " pod="calico-apiserver/calico-apiserver-b654b5ccd-k7qq8"
Jul 12 00:13:00.051386 kubelet[2706]: I0712 00:13:00.051285 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhnsk\" (UniqueName: \"kubernetes.io/projected/c86f07ef-0940-4c98-a612-68a23cab6908-kube-api-access-mhnsk\") pod \"coredns-7c65d6cfc9-tpnrh\" (UID: \"c86f07ef-0940-4c98-a612-68a23cab6908\") " pod="kube-system/coredns-7c65d6cfc9-tpnrh"
Jul 12 00:13:00.051386 kubelet[2706]: I0712 00:13:00.051304 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkc7t\" (UniqueName: \"kubernetes.io/projected/b638a997-3c36-4850-99ab-ab3d678917cc-kube-api-access-wkc7t\") pod \"calico-apiserver-b654b5ccd-2zvhb\" (UID: \"b638a997-3c36-4850-99ab-ab3d678917cc\") " pod="calico-apiserver/calico-apiserver-b654b5ccd-2zvhb"
Jul 12 00:13:00.051386 kubelet[2706]: I0712 00:13:00.051320 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95ba4f33-c2b5-452d-814d-3c80c989e70e-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-gp2hz\" (UID: \"95ba4f33-c2b5-452d-814d-3c80c989e70e\") " pod="calico-system/goldmane-58fd7646b9-gp2hz"
Jul 12 00:13:00.053389 kubelet[2706]: I0712 00:13:00.051682 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b638a997-3c36-4850-99ab-ab3d678917cc-calico-apiserver-certs\") pod \"calico-apiserver-b654b5ccd-2zvhb\" (UID: \"b638a997-3c36-4850-99ab-ab3d678917cc\") " pod="calico-apiserver/calico-apiserver-b654b5ccd-2zvhb"
Jul 12 00:13:00.053389 kubelet[2706]: I0712 00:13:00.051725 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/93a79160-6c7d-4ebd-95ed-d9d607047420-calico-apiserver-certs\") pod \"calico-apiserver-b654b5ccd-k7qq8\" (UID: \"93a79160-6c7d-4ebd-95ed-d9d607047420\") " pod="calico-apiserver/calico-apiserver-b654b5ccd-k7qq8"
Jul 12 00:13:00.053389 kubelet[2706]: I0712 00:13:00.051822 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30c5abcc-f15f-4c0f-ab14-2ab204297397-whisker-ca-bundle\") pod \"whisker-6d7879f7fd-jl8nw\" (UID: \"30c5abcc-f15f-4c0f-ab14-2ab204297397\") " pod="calico-system/whisker-6d7879f7fd-jl8nw"
Jul 12 00:13:00.053389 kubelet[2706]: I0712 00:13:00.051857 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbnbr\" (UniqueName: \"kubernetes.io/projected/30c5abcc-f15f-4c0f-ab14-2ab204297397-kube-api-access-rbnbr\") pod \"whisker-6d7879f7fd-jl8nw\" (UID: \"30c5abcc-f15f-4c0f-ab14-2ab204297397\") " pod="calico-system/whisker-6d7879f7fd-jl8nw"
Jul 12 00:13:00.053389 kubelet[2706]: I0712 00:13:00.051975 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c86f07ef-0940-4c98-a612-68a23cab6908-config-volume\") pod \"coredns-7c65d6cfc9-tpnrh\" (UID: \"c86f07ef-0940-4c98-a612-68a23cab6908\") " pod="kube-system/coredns-7c65d6cfc9-tpnrh"
Jul 12 00:13:00.053552 kubelet[2706]: I0712 00:13:00.052000 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30c5abcc-f15f-4c0f-ab14-2ab204297397-whisker-backend-key-pair\") pod \"whisker-6d7879f7fd-jl8nw\" (UID: \"30c5abcc-f15f-4c0f-ab14-2ab204297397\") " pod="calico-system/whisker-6d7879f7fd-jl8nw"
Jul 12 00:13:00.053552 kubelet[2706]: I0712 00:13:00.052016 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95ba4f33-c2b5-452d-814d-3c80c989e70e-config\") pod \"goldmane-58fd7646b9-gp2hz\" (UID: \"95ba4f33-c2b5-452d-814d-3c80c989e70e\") " pod="calico-system/goldmane-58fd7646b9-gp2hz"
Jul 12 00:13:00.053552 kubelet[2706]: I0712 00:13:00.052046 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbs9\" (UniqueName: \"kubernetes.io/projected/95ba4f33-c2b5-452d-814d-3c80c989e70e-kube-api-access-scbs9\") pod \"goldmane-58fd7646b9-gp2hz\" (UID: \"95ba4f33-c2b5-452d-814d-3c80c989e70e\") " pod="calico-system/goldmane-58fd7646b9-gp2hz"
Jul 12 00:13:00.053552 kubelet[2706]: I0712 00:13:00.052064 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/95ba4f33-c2b5-452d-814d-3c80c989e70e-goldmane-key-pair\") pod \"goldmane-58fd7646b9-gp2hz\" (UID: \"95ba4f33-c2b5-452d-814d-3c80c989e70e\") " pod="calico-system/goldmane-58fd7646b9-gp2hz"
Jul 12 00:13:00.053552 kubelet[2706]: I0712 00:13:00.052083 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndmkz\" (UniqueName: \"kubernetes.io/projected/2c8b35e8-59cb-4a47-869c-9d6193668f96-kube-api-access-ndmkz\") pod \"calico-kube-controllers-54f5749fd-n2b8n\" (UID: \"2c8b35e8-59cb-4a47-869c-9d6193668f96\") " pod="calico-system/calico-kube-controllers-54f5749fd-n2b8n"
Jul 12 00:13:00.053667 kubelet[2706]: I0712 00:13:00.052103 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c8b35e8-59cb-4a47-869c-9d6193668f96-tigera-ca-bundle\") pod \"calico-kube-controllers-54f5749fd-n2b8n\" (UID: \"2c8b35e8-59cb-4a47-869c-9d6193668f96\") " pod="calico-system/calico-kube-controllers-54f5749fd-n2b8n"
Jul 12 00:13:00.053667 kubelet[2706]: I0712 00:13:00.052133 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832-config-volume\") pod \"coredns-7c65d6cfc9-4txl7\" (UID: \"b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832\") " pod="kube-system/coredns-7c65d6cfc9-4txl7"
Jul 12 00:13:00.203511 containerd[1601]: time="2025-07-12T00:13:00.203447013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4txl7,Uid:b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832,Namespace:kube-system,Attempt:0,}"
Jul 12 00:13:00.212391 containerd[1601]: time="2025-07-12T00:13:00.211891584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b654b5ccd-2zvhb,Uid:b638a997-3c36-4850-99ab-ab3d678917cc,Namespace:calico-apiserver,Attempt:0,}"
Jul 12 00:13:00.215488 containerd[1601]: time="2025-07-12T00:13:00.214174918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tpnrh,Uid:c86f07ef-0940-4c98-a612-68a23cab6908,Namespace:kube-system,Attempt:0,}"
Jul 12 00:13:00.226589 containerd[1601]: time="2025-07-12T00:13:00.226552434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b654b5ccd-k7qq8,Uid:93a79160-6c7d-4ebd-95ed-d9d607047420,Namespace:calico-apiserver,Attempt:0,}"
Jul 12 00:13:00.227033 containerd[1601]: time="2025-07-12T00:13:00.227005837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-gp2hz,Uid:95ba4f33-c2b5-452d-814d-3c80c989e70e,Namespace:calico-system,Attempt:0,}"
Jul 12 00:13:00.235740 containerd[1601]: time="2025-07-12T00:13:00.233630318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f5749fd-n2b8n,Uid:2c8b35e8-59cb-4a47-869c-9d6193668f96,Namespace:calico-system,Attempt:0,}"
Jul 12 00:13:00.235740 containerd[1601]: time="2025-07-12T00:13:00.233935880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7879f7fd-jl8nw,Uid:30c5abcc-f15f-4c0f-ab14-2ab204297397,Namespace:calico-system,Attempt:0,}"
Jul 12 00:13:00.457749 containerd[1601]: time="2025-07-12T00:13:00.456950887Z" level=error msg="Failed to destroy network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.461651 containerd[1601]: time="2025-07-12T00:13:00.461597076Z" level=error msg="Failed to destroy network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.462666 containerd[1601]: time="2025-07-12T00:13:00.462631082Z" level=error msg="encountered an error cleaning up failed sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.463425 containerd[1601]: time="2025-07-12T00:13:00.463393287Z" level=error msg="encountered an error cleaning up failed sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.463975 containerd[1601]: time="2025-07-12T00:13:00.463947330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7879f7fd-jl8nw,Uid:30c5abcc-f15f-4c0f-ab14-2ab204297397,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.464291 containerd[1601]: time="2025-07-12T00:13:00.463679609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4txl7,Uid:b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.464761 kubelet[2706]: E0712 00:13:00.464613 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.465499 kubelet[2706]: E0712 00:13:00.465169 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d7879f7fd-jl8nw"
Jul 12 00:13:00.465499 kubelet[2706]: E0712 00:13:00.465219 2706 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d7879f7fd-jl8nw"
Jul 12 00:13:00.465499 kubelet[2706]: E0712 00:13:00.465265 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d7879f7fd-jl8nw_calico-system(30c5abcc-f15f-4c0f-ab14-2ab204297397)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d7879f7fd-jl8nw_calico-system(30c5abcc-f15f-4c0f-ab14-2ab204297397)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d7879f7fd-jl8nw" podUID="30c5abcc-f15f-4c0f-ab14-2ab204297397"
Jul 12 00:13:00.467011 kubelet[2706]: E0712 00:13:00.466291 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.467011 kubelet[2706]: E0712 00:13:00.466328 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4txl7"
Jul 12 00:13:00.467011 kubelet[2706]: E0712 00:13:00.466343 2706 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4txl7"
Jul 12 00:13:00.467137 kubelet[2706]: E0712 00:13:00.466371 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4txl7_kube-system(b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4txl7_kube-system(b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4txl7" podUID="b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832"
Jul 12 00:13:00.484051 containerd[1601]: time="2025-07-12T00:13:00.483994813Z" level=error msg="Failed to destroy network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.484706 containerd[1601]: time="2025-07-12T00:13:00.484672577Z" level=error msg="encountered an error cleaning up failed sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.486247 containerd[1601]: time="2025-07-12T00:13:00.485618143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b654b5ccd-2zvhb,Uid:b638a997-3c36-4850-99ab-ab3d678917cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.486405 kubelet[2706]: E0712 00:13:00.485875 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.486405 kubelet[2706]: E0712 00:13:00.485936 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b654b5ccd-2zvhb"
Jul 12 00:13:00.486405 kubelet[2706]: E0712 00:13:00.485965 2706 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b654b5ccd-2zvhb"
Jul 12 00:13:00.486511 kubelet[2706]: E0712 00:13:00.486008 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b654b5ccd-2zvhb_calico-apiserver(b638a997-3c36-4850-99ab-ab3d678917cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b654b5ccd-2zvhb_calico-apiserver(b638a997-3c36-4850-99ab-ab3d678917cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b654b5ccd-2zvhb" podUID="b638a997-3c36-4850-99ab-ab3d678917cc"
Jul 12 00:13:00.500491 containerd[1601]: time="2025-07-12T00:13:00.500195832Z" level=error msg="Failed to destroy network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.501407 containerd[1601]: time="2025-07-12T00:13:00.501372000Z" level=error msg="encountered an error cleaning up failed sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.501560 containerd[1601]: time="2025-07-12T00:13:00.501538641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b654b5ccd-k7qq8,Uid:93a79160-6c7d-4ebd-95ed-d9d607047420,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.501977 kubelet[2706]: E0712 00:13:00.501935 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.502061 kubelet[2706]: E0712 00:13:00.501999 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b654b5ccd-k7qq8"
Jul 12 00:13:00.502061 kubelet[2706]: E0712 00:13:00.502023 2706 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b654b5ccd-k7qq8"
Jul 12 00:13:00.502126 kubelet[2706]: E0712 00:13:00.502060 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b654b5ccd-k7qq8_calico-apiserver(93a79160-6c7d-4ebd-95ed-d9d607047420)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b654b5ccd-k7qq8_calico-apiserver(93a79160-6c7d-4ebd-95ed-d9d607047420)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b654b5ccd-k7qq8" podUID="93a79160-6c7d-4ebd-95ed-d9d607047420"
Jul 12 00:13:00.507670 containerd[1601]: time="2025-07-12T00:13:00.507608638Z" level=error msg="Failed to destroy network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.508004 containerd[1601]: time="2025-07-12T00:13:00.507972000Z" level=error msg="encountered an error cleaning up failed sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.508066 containerd[1601]: time="2025-07-12T00:13:00.508024761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tpnrh,Uid:c86f07ef-0940-4c98-a612-68a23cab6908,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.508538 kubelet[2706]: E0712 00:13:00.508496 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.508599 kubelet[2706]: E0712 00:13:00.508572 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tpnrh"
Jul 12 00:13:00.508629 kubelet[2706]: E0712 00:13:00.508596 2706 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tpnrh"
Jul 12 00:13:00.508684 kubelet[2706]: E0712 00:13:00.508655 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tpnrh_kube-system(c86f07ef-0940-4c98-a612-68a23cab6908)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tpnrh_kube-system(c86f07ef-0940-4c98-a612-68a23cab6908)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tpnrh" podUID="c86f07ef-0940-4c98-a612-68a23cab6908"
Jul 12 00:13:00.522704 containerd[1601]: time="2025-07-12T00:13:00.522657930Z" level=error msg="Failed to destroy network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.524327 containerd[1601]: time="2025-07-12T00:13:00.524203340Z" level=error msg="encountered an error cleaning up failed sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.524597 containerd[1601]: time="2025-07-12T00:13:00.524526902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f5749fd-n2b8n,Uid:2c8b35e8-59cb-4a47-869c-9d6193668f96,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.526506 kubelet[2706]: E0712 00:13:00.525379 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.526506 kubelet[2706]: E0712 00:13:00.526582 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f5749fd-n2b8n"
Jul 12 00:13:00.526506 kubelet[2706]: E0712 00:13:00.526605 2706 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f5749fd-n2b8n"
Jul 12 00:13:00.526853 containerd[1601]: time="2025-07-12T00:13:00.526591194Z" level=error msg="Failed to destroy network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.527377 containerd[1601]: time="2025-07-12T00:13:00.527300999Z" level=error msg="encountered an error cleaning up failed sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.527377 containerd[1601]: time="2025-07-12T00:13:00.527352119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-gp2hz,Uid:95ba4f33-c2b5-452d-814d-3c80c989e70e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:13:00.527746 kubelet[2706]: E0712 00:13:00.527649 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54f5749fd-n2b8n_calico-system(2c8b35e8-59cb-4a47-869c-9d6193668f96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54f5749fd-n2b8n_calico-system(2c8b35e8-59cb-4a47-869c-9d6193668f96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\\\":
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54f5749fd-n2b8n" podUID="2c8b35e8-59cb-4a47-869c-9d6193668f96" Jul 12 00:13:00.528152 kubelet[2706]: E0712 00:13:00.527930 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:00.528204 kubelet[2706]: E0712 00:13:00.528152 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-gp2hz" Jul 12 00:13:00.528204 kubelet[2706]: E0712 00:13:00.528174 2706 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-gp2hz" Jul 12 00:13:00.528269 kubelet[2706]: E0712 00:13:00.528222 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-gp2hz_calico-system(95ba4f33-c2b5-452d-814d-3c80c989e70e)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"goldmane-58fd7646b9-gp2hz_calico-system(95ba4f33-c2b5-452d-814d-3c80c989e70e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-gp2hz" podUID="95ba4f33-c2b5-452d-814d-3c80c989e70e" Jul 12 00:13:00.947008 kubelet[2706]: I0712 00:13:00.946934 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:00.948648 containerd[1601]: time="2025-07-12T00:13:00.948536462Z" level=info msg="StopPodSandbox for \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\"" Jul 12 00:13:00.948731 containerd[1601]: time="2025-07-12T00:13:00.948708263Z" level=info msg="Ensure that sandbox bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b in task-service has been cleanup successfully" Jul 12 00:13:00.950645 kubelet[2706]: I0712 00:13:00.950614 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:00.951477 containerd[1601]: time="2025-07-12T00:13:00.951057278Z" level=info msg="StopPodSandbox for \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\"" Jul 12 00:13:00.951477 containerd[1601]: time="2025-07-12T00:13:00.951239359Z" level=info msg="Ensure that sandbox 5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97 in task-service has been cleanup successfully" Jul 12 00:13:00.953194 kubelet[2706]: I0712 00:13:00.952889 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 
12 00:13:00.953661 containerd[1601]: time="2025-07-12T00:13:00.953638093Z" level=info msg="StopPodSandbox for \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\"" Jul 12 00:13:00.954564 containerd[1601]: time="2025-07-12T00:13:00.954527979Z" level=info msg="Ensure that sandbox 417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886 in task-service has been cleanup successfully" Jul 12 00:13:00.957908 kubelet[2706]: I0712 00:13:00.957884 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:00.959610 containerd[1601]: time="2025-07-12T00:13:00.959572770Z" level=info msg="StopPodSandbox for \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\"" Jul 12 00:13:00.961223 kubelet[2706]: I0712 00:13:00.961096 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:00.962196 containerd[1601]: time="2025-07-12T00:13:00.962011425Z" level=info msg="Ensure that sandbox 46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e in task-service has been cleanup successfully" Jul 12 00:13:00.964411 containerd[1601]: time="2025-07-12T00:13:00.964387119Z" level=info msg="StopPodSandbox for \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\"" Jul 12 00:13:00.967654 containerd[1601]: time="2025-07-12T00:13:00.967628379Z" level=info msg="Ensure that sandbox 8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8 in task-service has been cleanup successfully" Jul 12 00:13:00.970918 kubelet[2706]: I0712 00:13:00.970785 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:00.971870 containerd[1601]: time="2025-07-12T00:13:00.971763725Z" level=info msg="StopPodSandbox for 
\"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\"" Jul 12 00:13:00.972219 containerd[1601]: time="2025-07-12T00:13:00.972194087Z" level=info msg="Ensure that sandbox 1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd in task-service has been cleanup successfully" Jul 12 00:13:00.976843 kubelet[2706]: I0712 00:13:00.976807 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:00.977622 containerd[1601]: time="2025-07-12T00:13:00.977473400Z" level=info msg="StopPodSandbox for \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\"" Jul 12 00:13:00.977771 containerd[1601]: time="2025-07-12T00:13:00.977710041Z" level=info msg="Ensure that sandbox 1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8 in task-service has been cleanup successfully" Jul 12 00:13:01.032711 containerd[1601]: time="2025-07-12T00:13:01.032660526Z" level=error msg="StopPodSandbox for \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\" failed" error="failed to destroy network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.033104 containerd[1601]: time="2025-07-12T00:13:01.032805007Z" level=error msg="StopPodSandbox for \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\" failed" error="failed to destroy network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.033167 kubelet[2706]: E0712 00:13:01.033084 2706 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:01.033318 kubelet[2706]: E0712 00:13:01.033144 2706 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd"} Jul 12 00:13:01.033318 kubelet[2706]: E0712 00:13:01.033196 2706 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93a79160-6c7d-4ebd-95ed-d9d607047420\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:01.033318 kubelet[2706]: E0712 00:13:01.033217 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93a79160-6c7d-4ebd-95ed-d9d607047420\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b654b5ccd-k7qq8" podUID="93a79160-6c7d-4ebd-95ed-d9d607047420" Jul 12 00:13:01.033318 kubelet[2706]: E0712 00:13:01.033084 2706 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:01.033318 kubelet[2706]: E0712 00:13:01.033263 2706 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e"} Jul 12 00:13:01.033699 kubelet[2706]: E0712 00:13:01.033291 2706 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30c5abcc-f15f-4c0f-ab14-2ab204297397\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:01.033699 kubelet[2706]: E0712 00:13:01.033644 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30c5abcc-f15f-4c0f-ab14-2ab204297397\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d7879f7fd-jl8nw" podUID="30c5abcc-f15f-4c0f-ab14-2ab204297397" Jul 12 00:13:01.053057 containerd[1601]: time="2025-07-12T00:13:01.052988323Z" level=error msg="StopPodSandbox for 
\"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\" failed" error="failed to destroy network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.053585 kubelet[2706]: E0712 00:13:01.053396 2706 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:01.053585 kubelet[2706]: E0712 00:13:01.053446 2706 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b"} Jul 12 00:13:01.053585 kubelet[2706]: E0712 00:13:01.053549 2706 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95ba4f33-c2b5-452d-814d-3c80c989e70e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:01.053781 kubelet[2706]: E0712 00:13:01.053744 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95ba4f33-c2b5-452d-814d-3c80c989e70e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-gp2hz" podUID="95ba4f33-c2b5-452d-814d-3c80c989e70e" Jul 12 00:13:01.068005 containerd[1601]: time="2025-07-12T00:13:01.067964689Z" level=error msg="StopPodSandbox for \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\" failed" error="failed to destroy network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.068547 kubelet[2706]: E0712 00:13:01.068364 2706 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:01.068547 kubelet[2706]: E0712 00:13:01.068404 2706 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886"} Jul 12 00:13:01.068894 kubelet[2706]: E0712 00:13:01.068794 2706 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:01.068894 kubelet[2706]: E0712 00:13:01.068839 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4txl7" podUID="b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832" Jul 12 00:13:01.072673 containerd[1601]: time="2025-07-12T00:13:01.072315274Z" level=error msg="StopPodSandbox for \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\" failed" error="failed to destroy network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.072764 kubelet[2706]: E0712 00:13:01.072564 2706 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:01.072764 kubelet[2706]: E0712 00:13:01.072595 2706 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97"} Jul 12 00:13:01.072764 kubelet[2706]: E0712 00:13:01.072620 2706 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c86f07ef-0940-4c98-a612-68a23cab6908\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:01.072764 kubelet[2706]: E0712 00:13:01.072640 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c86f07ef-0940-4c98-a612-68a23cab6908\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tpnrh" podUID="c86f07ef-0940-4c98-a612-68a23cab6908" Jul 12 00:13:01.076479 containerd[1601]: time="2025-07-12T00:13:01.075684773Z" level=error msg="StopPodSandbox for \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\" failed" error="failed to destroy network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.076560 kubelet[2706]: E0712 00:13:01.075870 2706 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:01.076560 kubelet[2706]: E0712 00:13:01.075902 2706 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8"} Jul 12 00:13:01.076560 kubelet[2706]: E0712 00:13:01.075925 2706 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c8b35e8-59cb-4a47-869c-9d6193668f96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:01.076560 kubelet[2706]: E0712 00:13:01.075946 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c8b35e8-59cb-4a47-869c-9d6193668f96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54f5749fd-n2b8n" podUID="2c8b35e8-59cb-4a47-869c-9d6193668f96" Jul 12 00:13:01.078557 containerd[1601]: time="2025-07-12T00:13:01.078519349Z" level=error msg="StopPodSandbox for \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\" failed" error="failed to destroy 
network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.078931 kubelet[2706]: E0712 00:13:01.078804 2706 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:01.078931 kubelet[2706]: E0712 00:13:01.078903 2706 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8"} Jul 12 00:13:01.079052 kubelet[2706]: E0712 00:13:01.078945 2706 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b638a997-3c36-4850-99ab-ab3d678917cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:01.079052 kubelet[2706]: E0712 00:13:01.078967 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b638a997-3c36-4850-99ab-ab3d678917cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b654b5ccd-2zvhb" podUID="b638a997-3c36-4850-99ab-ab3d678917cc" Jul 12 00:13:01.199634 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8-shm.mount: Deactivated successfully. Jul 12 00:13:01.199797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886-shm.mount: Deactivated successfully. Jul 12 00:13:01.812170 containerd[1601]: time="2025-07-12T00:13:01.812125367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f2q67,Uid:bf0041c9-fdb6-4de6-99ec-d3644807d402,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:01.898471 kubelet[2706]: I0712 00:13:01.898407 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:13:01.968913 containerd[1601]: time="2025-07-12T00:13:01.968563587Z" level=error msg="Failed to destroy network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.969390 containerd[1601]: time="2025-07-12T00:13:01.969332471Z" level=error msg="encountered an error cleaning up failed sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.970152 containerd[1601]: time="2025-07-12T00:13:01.969387392Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-f2q67,Uid:bf0041c9-fdb6-4de6-99ec-d3644807d402,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.970387 kubelet[2706]: E0712 00:13:01.970354 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:01.970701 kubelet[2706]: E0712 00:13:01.970405 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f2q67" Jul 12 00:13:01.970701 kubelet[2706]: E0712 00:13:01.970426 2706 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f2q67" Jul 12 00:13:01.973596 kubelet[2706]: E0712 00:13:01.970669 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-f2q67_calico-system(bf0041c9-fdb6-4de6-99ec-d3644807d402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f2q67_calico-system(bf0041c9-fdb6-4de6-99ec-d3644807d402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f2q67" podUID="bf0041c9-fdb6-4de6-99ec-d3644807d402" Jul 12 00:13:01.973048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12-shm.mount: Deactivated successfully. Jul 12 00:13:01.984177 kubelet[2706]: I0712 00:13:01.983579 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:01.985428 containerd[1601]: time="2025-07-12T00:13:01.985382684Z" level=info msg="StopPodSandbox for \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\"" Jul 12 00:13:01.987007 containerd[1601]: time="2025-07-12T00:13:01.986970533Z" level=info msg="Ensure that sandbox e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12 in task-service has been cleanup successfully" Jul 12 00:13:02.037026 containerd[1601]: time="2025-07-12T00:13:02.036971487Z" level=error msg="StopPodSandbox for \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\" failed" error="failed to destroy network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:02.037542 
kubelet[2706]: E0712 00:13:02.037343 2706 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:02.037542 kubelet[2706]: E0712 00:13:02.037401 2706 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12"} Jul 12 00:13:02.037542 kubelet[2706]: E0712 00:13:02.037438 2706 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf0041c9-fdb6-4de6-99ec-d3644807d402\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:02.037542 kubelet[2706]: E0712 00:13:02.037478 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf0041c9-fdb6-4de6-99ec-d3644807d402\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f2q67" podUID="bf0041c9-fdb6-4de6-99ec-d3644807d402" Jul 12 00:13:04.257462 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1604152142.mount: Deactivated successfully. Jul 12 00:13:04.290668 containerd[1601]: time="2025-07-12T00:13:04.290605948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:04.292459 containerd[1601]: time="2025-07-12T00:13:04.292372117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:13:04.293440 containerd[1601]: time="2025-07-12T00:13:04.293396442Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:04.296511 containerd[1601]: time="2025-07-12T00:13:04.296383216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:04.297920 containerd[1601]: time="2025-07-12T00:13:04.297862263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.352587896s" Jul 12 00:13:04.297920 containerd[1601]: time="2025-07-12T00:13:04.297904383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:13:04.318124 containerd[1601]: time="2025-07-12T00:13:04.317999038Z" level=info msg="CreateContainer within sandbox \"ba4cd5623c56b0c823daff26535a86bae3498addb6d55ff231d976b7810c96cb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 
00:13:04.351263 containerd[1601]: time="2025-07-12T00:13:04.351216876Z" level=info msg="CreateContainer within sandbox \"ba4cd5623c56b0c823daff26535a86bae3498addb6d55ff231d976b7810c96cb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b6a79ce228c78ab13d11e3323770a9779b97587cb2b1270eb0ead6419a1d86b1\"" Jul 12 00:13:04.353521 containerd[1601]: time="2025-07-12T00:13:04.352959084Z" level=info msg="StartContainer for \"b6a79ce228c78ab13d11e3323770a9779b97587cb2b1270eb0ead6419a1d86b1\"" Jul 12 00:13:04.419468 containerd[1601]: time="2025-07-12T00:13:04.417949072Z" level=info msg="StartContainer for \"b6a79ce228c78ab13d11e3323770a9779b97587cb2b1270eb0ead6419a1d86b1\" returns successfully" Jul 12 00:13:04.586045 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:13:04.586232 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 12 00:13:04.738589 containerd[1601]: time="2025-07-12T00:13:04.738488950Z" level=info msg="StopPodSandbox for \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\"" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.858 [INFO][3949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.858 [INFO][3949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" iface="eth0" netns="/var/run/netns/cni-b2b8d2e8-7a28-912d-1930-e275bcb64597" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.859 [INFO][3949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" iface="eth0" netns="/var/run/netns/cni-b2b8d2e8-7a28-912d-1930-e275bcb64597" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.859 [INFO][3949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" iface="eth0" netns="/var/run/netns/cni-b2b8d2e8-7a28-912d-1930-e275bcb64597" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.859 [INFO][3949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.859 [INFO][3949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.922 [INFO][3957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.922 [INFO][3957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.922 [INFO][3957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.938 [WARNING][3957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.938 [INFO][3957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.941 [INFO][3957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:04.949502 containerd[1601]: 2025-07-12 00:13:04.946 [INFO][3949] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:04.950389 containerd[1601]: time="2025-07-12T00:13:04.949657751Z" level=info msg="TearDown network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\" successfully" Jul 12 00:13:04.950389 containerd[1601]: time="2025-07-12T00:13:04.949685351Z" level=info msg="StopPodSandbox for \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\" returns successfully" Jul 12 00:13:04.989886 kubelet[2706]: I0712 00:13:04.989740 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbnbr\" (UniqueName: \"kubernetes.io/projected/30c5abcc-f15f-4c0f-ab14-2ab204297397-kube-api-access-rbnbr\") pod \"30c5abcc-f15f-4c0f-ab14-2ab204297397\" (UID: \"30c5abcc-f15f-4c0f-ab14-2ab204297397\") " Jul 12 00:13:04.989886 kubelet[2706]: I0712 00:13:04.989834 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/30c5abcc-f15f-4c0f-ab14-2ab204297397-whisker-ca-bundle\") pod \"30c5abcc-f15f-4c0f-ab14-2ab204297397\" (UID: \"30c5abcc-f15f-4c0f-ab14-2ab204297397\") " Jul 12 00:13:04.990262 kubelet[2706]: I0712 00:13:04.989898 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30c5abcc-f15f-4c0f-ab14-2ab204297397-whisker-backend-key-pair\") pod \"30c5abcc-f15f-4c0f-ab14-2ab204297397\" (UID: \"30c5abcc-f15f-4c0f-ab14-2ab204297397\") " Jul 12 00:13:04.994531 kubelet[2706]: I0712 00:13:04.994173 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c5abcc-f15f-4c0f-ab14-2ab204297397-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "30c5abcc-f15f-4c0f-ab14-2ab204297397" (UID: "30c5abcc-f15f-4c0f-ab14-2ab204297397"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:13:04.994531 kubelet[2706]: I0712 00:13:04.994413 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c5abcc-f15f-4c0f-ab14-2ab204297397-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "30c5abcc-f15f-4c0f-ab14-2ab204297397" (UID: "30c5abcc-f15f-4c0f-ab14-2ab204297397"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:13:05.001975 kubelet[2706]: I0712 00:13:05.001909 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c5abcc-f15f-4c0f-ab14-2ab204297397-kube-api-access-rbnbr" (OuterVolumeSpecName: "kube-api-access-rbnbr") pod "30c5abcc-f15f-4c0f-ab14-2ab204297397" (UID: "30c5abcc-f15f-4c0f-ab14-2ab204297397"). InnerVolumeSpecName "kube-api-access-rbnbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:13:05.029183 kubelet[2706]: I0712 00:13:05.029110 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dhjn2" podStartSLOduration=1.883356151 podStartE2EDuration="14.029092399s" podCreationTimestamp="2025-07-12 00:12:51 +0000 UTC" firstStartedPulling="2025-07-12 00:12:52.15320634 +0000 UTC m=+22.485746991" lastFinishedPulling="2025-07-12 00:13:04.298942588 +0000 UTC m=+34.631483239" observedRunningTime="2025-07-12 00:13:05.023642934 +0000 UTC m=+35.356183585" watchObservedRunningTime="2025-07-12 00:13:05.029092399 +0000 UTC m=+35.361633050" Jul 12 00:13:05.090336 kubelet[2706]: I0712 00:13:05.090259 2706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbnbr\" (UniqueName: \"kubernetes.io/projected/30c5abcc-f15f-4c0f-ab14-2ab204297397-kube-api-access-rbnbr\") on node \"ci-4081-3-4-n-bdc5bebc5f\" DevicePath \"\"" Jul 12 00:13:05.090799 kubelet[2706]: I0712 00:13:05.090322 2706 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30c5abcc-f15f-4c0f-ab14-2ab204297397-whisker-ca-bundle\") on node \"ci-4081-3-4-n-bdc5bebc5f\" DevicePath \"\"" Jul 12 00:13:05.090799 kubelet[2706]: I0712 00:13:05.090795 2706 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30c5abcc-f15f-4c0f-ab14-2ab204297397-whisker-backend-key-pair\") on node \"ci-4081-3-4-n-bdc5bebc5f\" DevicePath \"\"" Jul 12 00:13:05.260204 systemd[1]: run-netns-cni\x2db2b8d2e8\x2d7a28\x2d912d\x2d1930\x2de275bcb64597.mount: Deactivated successfully. Jul 12 00:13:05.260620 systemd[1]: var-lib-kubelet-pods-30c5abcc\x2df15f\x2d4c0f\x2dab14\x2d2ab204297397-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drbnbr.mount: Deactivated successfully. 
Jul 12 00:13:05.260920 systemd[1]: var-lib-kubelet-pods-30c5abcc\x2df15f\x2d4c0f\x2dab14\x2d2ab204297397-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:13:05.393360 kubelet[2706]: I0712 00:13:05.393318 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80f4b5cd-0b5e-408a-a039-e8a41c346458-whisker-ca-bundle\") pod \"whisker-659d8f9bb-84tcn\" (UID: \"80f4b5cd-0b5e-408a-a039-e8a41c346458\") " pod="calico-system/whisker-659d8f9bb-84tcn" Jul 12 00:13:05.393514 kubelet[2706]: I0712 00:13:05.393372 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80f4b5cd-0b5e-408a-a039-e8a41c346458-whisker-backend-key-pair\") pod \"whisker-659d8f9bb-84tcn\" (UID: \"80f4b5cd-0b5e-408a-a039-e8a41c346458\") " pod="calico-system/whisker-659d8f9bb-84tcn" Jul 12 00:13:05.393514 kubelet[2706]: I0712 00:13:05.393398 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pjc\" (UniqueName: \"kubernetes.io/projected/80f4b5cd-0b5e-408a-a039-e8a41c346458-kube-api-access-76pjc\") pod \"whisker-659d8f9bb-84tcn\" (UID: \"80f4b5cd-0b5e-408a-a039-e8a41c346458\") " pod="calico-system/whisker-659d8f9bb-84tcn" Jul 12 00:13:05.689911 containerd[1601]: time="2025-07-12T00:13:05.689378091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659d8f9bb-84tcn,Uid:80f4b5cd-0b5e-408a-a039-e8a41c346458,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:05.812143 kubelet[2706]: I0712 00:13:05.812096 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c5abcc-f15f-4c0f-ab14-2ab204297397" path="/var/lib/kubelet/pods/30c5abcc-f15f-4c0f-ab14-2ab204297397/volumes" Jul 12 00:13:05.847044 systemd-networkd[1232]: cali92ae9e95aec: Link UP 
Jul 12 00:13:05.847267 systemd-networkd[1232]: cali92ae9e95aec: Gained carrier Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.730 [INFO][3980] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.749 [INFO][3980] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0 whisker-659d8f9bb- calico-system 80f4b5cd-0b5e-408a-a039-e8a41c346458 869 0 2025-07-12 00:13:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:659d8f9bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-n-bdc5bebc5f whisker-659d8f9bb-84tcn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali92ae9e95aec [] [] }} ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Namespace="calico-system" Pod="whisker-659d8f9bb-84tcn" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.750 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Namespace="calico-system" Pod="whisker-659d8f9bb-84tcn" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.780 [INFO][3992] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" HandleID="k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.781 [INFO][3992] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" HandleID="k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b060), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-bdc5bebc5f", "pod":"whisker-659d8f9bb-84tcn", "timestamp":"2025-07-12 00:13:05.780814737 +0000 UTC"}, Hostname:"ci-4081-3-4-n-bdc5bebc5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.782 [INFO][3992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.782 [INFO][3992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.782 [INFO][3992] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-bdc5bebc5f' Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.796 [INFO][3992] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.803 [INFO][3992] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.811 [INFO][3992] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.814 [INFO][3992] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.818 [INFO][3992] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.819 [INFO][3992] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.821 [INFO][3992] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9 Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.826 [INFO][3992] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.837 [INFO][3992] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.129/26] block=192.168.106.128/26 handle="k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.837 [INFO][3992] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.129/26] handle="k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.837 [INFO][3992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:05.880978 containerd[1601]: 2025-07-12 00:13:05.837 [INFO][3992] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.129/26] IPv6=[] ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" HandleID="k8s-pod-network.d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" Jul 12 00:13:05.881697 containerd[1601]: 2025-07-12 00:13:05.840 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Namespace="calico-system" Pod="whisker-659d8f9bb-84tcn" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0", GenerateName:"whisker-659d8f9bb-", Namespace:"calico-system", SelfLink:"", UID:"80f4b5cd-0b5e-408a-a039-e8a41c346458", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"659d8f9bb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"", Pod:"whisker-659d8f9bb-84tcn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali92ae9e95aec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:05.881697 containerd[1601]: 2025-07-12 00:13:05.840 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.129/32] ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Namespace="calico-system" Pod="whisker-659d8f9bb-84tcn" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" Jul 12 00:13:05.881697 containerd[1601]: 2025-07-12 00:13:05.840 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92ae9e95aec ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Namespace="calico-system" Pod="whisker-659d8f9bb-84tcn" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" Jul 12 00:13:05.881697 containerd[1601]: 2025-07-12 00:13:05.847 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Namespace="calico-system" Pod="whisker-659d8f9bb-84tcn" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" Jul 12 00:13:05.881697 containerd[1601]: 2025-07-12 00:13:05.847 [INFO][3980] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" Namespace="calico-system" Pod="whisker-659d8f9bb-84tcn" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0", GenerateName:"whisker-659d8f9bb-", Namespace:"calico-system", SelfLink:"", UID:"80f4b5cd-0b5e-408a-a039-e8a41c346458", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"659d8f9bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9", Pod:"whisker-659d8f9bb-84tcn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali92ae9e95aec", MAC:"9a:f2:f2:e0:f3:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:05.881697 containerd[1601]: 2025-07-12 00:13:05.874 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9" 
Namespace="calico-system" Pod="whisker-659d8f9bb-84tcn" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--659d8f9bb--84tcn-eth0" Jul 12 00:13:05.932637 containerd[1601]: time="2025-07-12T00:13:05.932512411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:05.933580 containerd[1601]: time="2025-07-12T00:13:05.932650572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:05.933580 containerd[1601]: time="2025-07-12T00:13:05.932678332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:05.933580 containerd[1601]: time="2025-07-12T00:13:05.932867773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:05.991599 containerd[1601]: time="2025-07-12T00:13:05.991544833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659d8f9bb-84tcn,Uid:80f4b5cd-0b5e-408a-a039-e8a41c346458,Namespace:calico-system,Attempt:0,} returns sandbox id \"d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9\"" Jul 12 00:13:05.994636 containerd[1601]: time="2025-07-12T00:13:05.994558567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:13:06.003923 kubelet[2706]: I0712 00:13:06.003896 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:13:06.462797 kernel: bpftool[4169]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:13:06.659587 systemd-networkd[1232]: vxlan.calico: Link UP Jul 12 00:13:06.659593 systemd-networkd[1232]: vxlan.calico: Gained carrier Jul 12 00:13:07.651264 containerd[1601]: time="2025-07-12T00:13:07.649859372Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:07.651840 containerd[1601]: time="2025-07-12T00:13:07.651808459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:13:07.651989 containerd[1601]: time="2025-07-12T00:13:07.651964500Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:07.655357 containerd[1601]: time="2025-07-12T00:13:07.655015312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:07.657130 containerd[1601]: time="2025-07-12T00:13:07.657086880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.662217992s" Jul 12 00:13:07.657285 containerd[1601]: time="2025-07-12T00:13:07.657258241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:13:07.670212 containerd[1601]: time="2025-07-12T00:13:07.670142691Z" level=info msg="CreateContainer within sandbox \"d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:13:07.691400 containerd[1601]: time="2025-07-12T00:13:07.691350734Z" level=info msg="CreateContainer within sandbox \"d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c694949a0e1857e7d6a152fd9f81da8fbb8748c515104dc51c140c7eb27051f4\"" Jul 12 00:13:07.692870 containerd[1601]: time="2025-07-12T00:13:07.692518218Z" level=info msg="StartContainer for \"c694949a0e1857e7d6a152fd9f81da8fbb8748c515104dc51c140c7eb27051f4\"" Jul 12 00:13:07.769070 containerd[1601]: time="2025-07-12T00:13:07.768919156Z" level=info msg="StartContainer for \"c694949a0e1857e7d6a152fd9f81da8fbb8748c515104dc51c140c7eb27051f4\" returns successfully" Jul 12 00:13:07.772093 containerd[1601]: time="2025-07-12T00:13:07.771798728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:13:07.913768 systemd-networkd[1232]: cali92ae9e95aec: Gained IPv6LL Jul 12 00:13:07.978377 systemd-networkd[1232]: vxlan.calico: Gained IPv6LL Jul 12 00:13:10.355997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215683407.mount: Deactivated successfully. Jul 12 00:13:10.377112 containerd[1601]: time="2025-07-12T00:13:10.375808038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:10.377112 containerd[1601]: time="2025-07-12T00:13:10.377036922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:13:10.378158 containerd[1601]: time="2025-07-12T00:13:10.378092526Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:10.381659 containerd[1601]: time="2025-07-12T00:13:10.381563897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:10.383352 containerd[1601]: 
time="2025-07-12T00:13:10.383202422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.610954893s" Jul 12 00:13:10.383352 containerd[1601]: time="2025-07-12T00:13:10.383251742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:13:10.387580 containerd[1601]: time="2025-07-12T00:13:10.387394556Z" level=info msg="CreateContainer within sandbox \"d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:13:10.404175 containerd[1601]: time="2025-07-12T00:13:10.404066169Z" level=info msg="CreateContainer within sandbox \"d55bc73aa7cc5a5a4aed038076d3b3e9d51ad07e1862444b8c16904ac1d475a9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e4c69cc336924155eddcc474c3a9754a6d44ea3fb72c81c2b2a49d7f6b03a0c3\"" Jul 12 00:13:10.408491 containerd[1601]: time="2025-07-12T00:13:10.406651698Z" level=info msg="StartContainer for \"e4c69cc336924155eddcc474c3a9754a6d44ea3fb72c81c2b2a49d7f6b03a0c3\"" Jul 12 00:13:10.478578 containerd[1601]: time="2025-07-12T00:13:10.478520609Z" level=info msg="StartContainer for \"e4c69cc336924155eddcc474c3a9754a6d44ea3fb72c81c2b2a49d7f6b03a0c3\" returns successfully" Jul 12 00:13:11.050984 kubelet[2706]: I0712 00:13:11.050494 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-659d8f9bb-84tcn" podStartSLOduration=1.656806926 podStartE2EDuration="6.048261233s" podCreationTimestamp="2025-07-12 00:13:05 +0000 UTC" 
firstStartedPulling="2025-07-12 00:13:05.992919399 +0000 UTC m=+36.325460050" lastFinishedPulling="2025-07-12 00:13:10.384373666 +0000 UTC m=+40.716914357" observedRunningTime="2025-07-12 00:13:11.04726263 +0000 UTC m=+41.379803321" watchObservedRunningTime="2025-07-12 00:13:11.048261233 +0000 UTC m=+41.380801924" Jul 12 00:13:11.810832 containerd[1601]: time="2025-07-12T00:13:11.810776172Z" level=info msg="StopPodSandbox for \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\"" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.883 [INFO][4339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.884 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" iface="eth0" netns="/var/run/netns/cni-e264aa4b-d8fa-8adc-0a78-47d336edfa05" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.885 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" iface="eth0" netns="/var/run/netns/cni-e264aa4b-d8fa-8adc-0a78-47d336edfa05" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.885 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" iface="eth0" netns="/var/run/netns/cni-e264aa4b-d8fa-8adc-0a78-47d336edfa05" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.885 [INFO][4339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.886 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.909 [INFO][4347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.910 [INFO][4347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.910 [INFO][4347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.920 [WARNING][4347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.920 [INFO][4347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.922 [INFO][4347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:11.928731 containerd[1601]: 2025-07-12 00:13:11.925 [INFO][4339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:11.929308 containerd[1601]: time="2025-07-12T00:13:11.929176969Z" level=info msg="TearDown network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\" successfully" Jul 12 00:13:11.929308 containerd[1601]: time="2025-07-12T00:13:11.929216209Z" level=info msg="StopPodSandbox for \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\" returns successfully" Jul 12 00:13:11.930266 containerd[1601]: time="2025-07-12T00:13:11.929916131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-gp2hz,Uid:95ba4f33-c2b5-452d-814d-3c80c989e70e,Namespace:calico-system,Attempt:1,}" Jul 12 00:13:11.935644 systemd[1]: run-netns-cni\x2de264aa4b\x2dd8fa\x2d8adc\x2d0a78\x2d47d336edfa05.mount: Deactivated successfully. 
Jul 12 00:13:12.098629 systemd-networkd[1232]: calia289408f8c2: Link UP Jul 12 00:13:12.099163 systemd-networkd[1232]: calia289408f8c2: Gained carrier Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:11.998 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0 goldmane-58fd7646b9- calico-system 95ba4f33-c2b5-452d-814d-3c80c989e70e 904 0 2025-07-12 00:12:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-n-bdc5bebc5f goldmane-58fd7646b9-gp2hz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia289408f8c2 [] [] }} ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Namespace="calico-system" Pod="goldmane-58fd7646b9-gp2hz" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:11.998 [INFO][4361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Namespace="calico-system" Pod="goldmane-58fd7646b9-gp2hz" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.030 [INFO][4374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" HandleID="k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.030 [INFO][4374] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" HandleID="k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-bdc5bebc5f", "pod":"goldmane-58fd7646b9-gp2hz", "timestamp":"2025-07-12 00:13:12.030680749 +0000 UTC"}, Hostname:"ci-4081-3-4-n-bdc5bebc5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.030 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.030 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.030 [INFO][4374] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-bdc5bebc5f' Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.048 [INFO][4374] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.053 [INFO][4374] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.062 [INFO][4374] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.064 [INFO][4374] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.068 [INFO][4374] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.068 [INFO][4374] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.071 [INFO][4374] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1 Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.077 [INFO][4374] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.087 [INFO][4374] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.130/26] block=192.168.106.128/26 handle="k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.087 [INFO][4374] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.130/26] handle="k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.087 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:12.119975 containerd[1601]: 2025-07-12 00:13:12.087 [INFO][4374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.130/26] IPv6=[] ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" HandleID="k8s-pod-network.c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:12.121362 containerd[1601]: 2025-07-12 00:13:12.089 [INFO][4361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Namespace="calico-system" Pod="goldmane-58fd7646b9-gp2hz" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"95ba4f33-c2b5-452d-814d-3c80c989e70e", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"", Pod:"goldmane-58fd7646b9-gp2hz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia289408f8c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:12.121362 containerd[1601]: 2025-07-12 00:13:12.090 [INFO][4361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.130/32] ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Namespace="calico-system" Pod="goldmane-58fd7646b9-gp2hz" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:12.121362 containerd[1601]: 2025-07-12 00:13:12.090 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia289408f8c2 ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Namespace="calico-system" Pod="goldmane-58fd7646b9-gp2hz" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:12.121362 containerd[1601]: 2025-07-12 00:13:12.098 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Namespace="calico-system" Pod="goldmane-58fd7646b9-gp2hz" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:12.121362 containerd[1601]: 2025-07-12 00:13:12.101 [INFO][4361] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Namespace="calico-system" Pod="goldmane-58fd7646b9-gp2hz" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"95ba4f33-c2b5-452d-814d-3c80c989e70e", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1", Pod:"goldmane-58fd7646b9-gp2hz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia289408f8c2", MAC:"76:9f:c7:53:84:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:12.121362 containerd[1601]: 2025-07-12 00:13:12.116 [INFO][4361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1" Namespace="calico-system" Pod="goldmane-58fd7646b9-gp2hz" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:12.146516 containerd[1601]: time="2025-07-12T00:13:12.145948435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:12.146516 containerd[1601]: time="2025-07-12T00:13:12.146406837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:12.146516 containerd[1601]: time="2025-07-12T00:13:12.146447797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:12.147411 containerd[1601]: time="2025-07-12T00:13:12.146586077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:12.198525 containerd[1601]: time="2025-07-12T00:13:12.198420744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-gp2hz,Uid:95ba4f33-c2b5-452d-814d-3c80c989e70e,Namespace:calico-system,Attempt:1,} returns sandbox id \"c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1\"" Jul 12 00:13:12.202980 containerd[1601]: time="2025-07-12T00:13:12.202741756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:13:13.673751 systemd-networkd[1232]: calia289408f8c2: Gained IPv6LL Jul 12 00:13:13.811030 containerd[1601]: time="2025-07-12T00:13:13.810508918Z" level=info msg="StopPodSandbox for \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\"" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.885 [INFO][4443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 
12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.886 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" iface="eth0" netns="/var/run/netns/cni-0c5fb61b-218e-334b-30c2-714723f7a352" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.887 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" iface="eth0" netns="/var/run/netns/cni-0c5fb61b-218e-334b-30c2-714723f7a352" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.888 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" iface="eth0" netns="/var/run/netns/cni-0c5fb61b-218e-334b-30c2-714723f7a352" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.888 [INFO][4443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.888 [INFO][4443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.912 [INFO][4451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.913 [INFO][4451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.913 [INFO][4451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.923 [WARNING][4451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.923 [INFO][4451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.925 [INFO][4451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:13.934658 containerd[1601]: 2025-07-12 00:13:13.932 [INFO][4443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:13.937846 systemd[1]: run-netns-cni\x2d0c5fb61b\x2d218e\x2d334b\x2d30c2\x2d714723f7a352.mount: Deactivated successfully. 
Jul 12 00:13:13.939889 containerd[1601]: time="2025-07-12T00:13:13.939851381Z" level=info msg="TearDown network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\" successfully" Jul 12 00:13:13.940116 containerd[1601]: time="2025-07-12T00:13:13.940096462Z" level=info msg="StopPodSandbox for \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\" returns successfully" Jul 12 00:13:13.940843 containerd[1601]: time="2025-07-12T00:13:13.940816863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b654b5ccd-k7qq8,Uid:93a79160-6c7d-4ebd-95ed-d9d607047420,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:13:14.175413 systemd-networkd[1232]: calic34aee32c80: Link UP Jul 12 00:13:14.176217 systemd-networkd[1232]: calic34aee32c80: Gained carrier Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.039 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0 calico-apiserver-b654b5ccd- calico-apiserver 93a79160-6c7d-4ebd-95ed-d9d607047420 913 0 2025-07-12 00:12:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b654b5ccd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-n-bdc5bebc5f calico-apiserver-b654b5ccd-k7qq8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic34aee32c80 [] [] }} ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-k7qq8" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.039 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-k7qq8" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.095 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" HandleID="k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.095 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" HandleID="k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-n-bdc5bebc5f", "pod":"calico-apiserver-b654b5ccd-k7qq8", "timestamp":"2025-07-12 00:13:14.095322737 +0000 UTC"}, Hostname:"ci-4081-3-4-n-bdc5bebc5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.095 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.095 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.095 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-bdc5bebc5f' Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.113 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.123 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.129 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.132 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.135 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.135 [INFO][4469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.138 [INFO][4469] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.147 [INFO][4469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.161 [INFO][4469] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.131/26] block=192.168.106.128/26 handle="k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.162 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.131/26] handle="k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.162 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:14.201396 containerd[1601]: 2025-07-12 00:13:14.162 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.131/26] IPv6=[] ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" HandleID="k8s-pod-network.99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:14.202270 containerd[1601]: 2025-07-12 00:13:14.165 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-k7qq8" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0", GenerateName:"calico-apiserver-b654b5ccd-", Namespace:"calico-apiserver", SelfLink:"", UID:"93a79160-6c7d-4ebd-95ed-d9d607047420", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b654b5ccd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"", Pod:"calico-apiserver-b654b5ccd-k7qq8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic34aee32c80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:14.202270 containerd[1601]: 2025-07-12 00:13:14.165 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.131/32] ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-k7qq8" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:14.202270 containerd[1601]: 2025-07-12 00:13:14.165 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic34aee32c80 ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-k7qq8" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:14.202270 containerd[1601]: 2025-07-12 00:13:14.176 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Namespace="calico-apiserver" 
Pod="calico-apiserver-b654b5ccd-k7qq8" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:14.202270 containerd[1601]: 2025-07-12 00:13:14.177 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-k7qq8" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0", GenerateName:"calico-apiserver-b654b5ccd-", Namespace:"calico-apiserver", SelfLink:"", UID:"93a79160-6c7d-4ebd-95ed-d9d607047420", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b654b5ccd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec", Pod:"calico-apiserver-b654b5ccd-k7qq8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic34aee32c80", 
MAC:"26:87:6e:d2:34:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:14.202270 containerd[1601]: 2025-07-12 00:13:14.194 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-k7qq8" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:14.225539 containerd[1601]: time="2025-07-12T00:13:14.225087700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:14.225539 containerd[1601]: time="2025-07-12T00:13:14.225175980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:14.225539 containerd[1601]: time="2025-07-12T00:13:14.225197620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:14.225539 containerd[1601]: time="2025-07-12T00:13:14.225334340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:14.297055 containerd[1601]: time="2025-07-12T00:13:14.297011438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b654b5ccd-k7qq8,Uid:93a79160-6c7d-4ebd-95ed-d9d607047420,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec\"" Jul 12 00:13:14.809471 containerd[1601]: time="2025-07-12T00:13:14.809417631Z" level=info msg="StopPodSandbox for \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\"" Jul 12 00:13:14.810485 containerd[1601]: time="2025-07-12T00:13:14.809666272Z" level=info msg="StopPodSandbox for \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\"" Jul 12 00:13:14.938213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000124884.mount: Deactivated successfully. Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.906 [INFO][4550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.906 [INFO][4550] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" iface="eth0" netns="/var/run/netns/cni-99202d17-335c-b0e5-3fc9-8e419dd79356" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.907 [INFO][4550] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" iface="eth0" netns="/var/run/netns/cni-99202d17-335c-b0e5-3fc9-8e419dd79356" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.907 [INFO][4550] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" iface="eth0" netns="/var/run/netns/cni-99202d17-335c-b0e5-3fc9-8e419dd79356" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.907 [INFO][4550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.907 [INFO][4550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.962 [INFO][4570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.962 [INFO][4570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.962 [INFO][4570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.976 [WARNING][4570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.976 [INFO][4570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.978 [INFO][4570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:14.988518 containerd[1601]: 2025-07-12 00:13:14.983 [INFO][4550] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:14.992181 containerd[1601]: time="2025-07-12T00:13:14.991931285Z" level=info msg="TearDown network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\" successfully" Jul 12 00:13:14.992237 containerd[1601]: time="2025-07-12T00:13:14.992182726Z" level=info msg="StopPodSandbox for \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\" returns successfully" Jul 12 00:13:14.994133 systemd[1]: run-netns-cni\x2d99202d17\x2d335c\x2db0e5\x2d3fc9\x2d8e419dd79356.mount: Deactivated successfully. 
Jul 12 00:13:14.998096 containerd[1601]: time="2025-07-12T00:13:14.998048260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f2q67,Uid:bf0041c9-fdb6-4de6-99ec-d3644807d402,Namespace:calico-system,Attempt:1,}" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.901 [INFO][4549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.901 [INFO][4549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" iface="eth0" netns="/var/run/netns/cni-05633ef6-970f-cb0a-553c-4f13c5a64d42" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.901 [INFO][4549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" iface="eth0" netns="/var/run/netns/cni-05633ef6-970f-cb0a-553c-4f13c5a64d42" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.902 [INFO][4549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" iface="eth0" netns="/var/run/netns/cni-05633ef6-970f-cb0a-553c-4f13c5a64d42" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.902 [INFO][4549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.902 [INFO][4549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.973 [INFO][4568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.973 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.979 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.998 [WARNING][4568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:14.998 [INFO][4568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:15.001 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:15.008184 containerd[1601]: 2025-07-12 00:13:15.003 [INFO][4549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:15.011210 containerd[1601]: time="2025-07-12T00:13:15.011159771Z" level=info msg="TearDown network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\" successfully" Jul 12 00:13:15.011210 containerd[1601]: time="2025-07-12T00:13:15.011201291Z" level=info msg="StopPodSandbox for \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\" returns successfully" Jul 12 00:13:15.012233 systemd[1]: run-netns-cni\x2d05633ef6\x2d970f\x2dcb0a\x2d553c\x2d4f13c5a64d42.mount: Deactivated successfully. 
Jul 12 00:13:15.014271 containerd[1601]: time="2025-07-12T00:13:15.014128698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4txl7,Uid:b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832,Namespace:kube-system,Attempt:1,}" Jul 12 00:13:15.244084 systemd-networkd[1232]: calif77743d0edd: Link UP Jul 12 00:13:15.244418 systemd-networkd[1232]: calif77743d0edd: Gained carrier Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.090 [INFO][4582] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0 csi-node-driver- calico-system bf0041c9-fdb6-4de6-99ec-d3644807d402 923 0 2025-07-12 00:12:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-n-bdc5bebc5f csi-node-driver-f2q67 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif77743d0edd [] [] }} ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Namespace="calico-system" Pod="csi-node-driver-f2q67" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.090 [INFO][4582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Namespace="calico-system" Pod="csi-node-driver-f2q67" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.152 [INFO][4606] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" 
HandleID="k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.152 [INFO][4606] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" HandleID="k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-bdc5bebc5f", "pod":"csi-node-driver-f2q67", "timestamp":"2025-07-12 00:13:15.152673421 +0000 UTC"}, Hostname:"ci-4081-3-4-n-bdc5bebc5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.152 [INFO][4606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.153 [INFO][4606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.153 [INFO][4606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-bdc5bebc5f' Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.171 [INFO][4606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.182 [INFO][4606] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.193 [INFO][4606] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.197 [INFO][4606] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.201 [INFO][4606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.201 [INFO][4606] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.203 [INFO][4606] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.210 [INFO][4606] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.225 [INFO][4606] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.132/26] block=192.168.106.128/26 handle="k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.225 [INFO][4606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.132/26] handle="k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.225 [INFO][4606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:15.267988 containerd[1601]: 2025-07-12 00:13:15.225 [INFO][4606] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.132/26] IPv6=[] ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" HandleID="k8s-pod-network.7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:15.269422 containerd[1601]: 2025-07-12 00:13:15.230 [INFO][4582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Namespace="calico-system" Pod="csi-node-driver-f2q67" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf0041c9-fdb6-4de6-99ec-d3644807d402", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"", Pod:"csi-node-driver-f2q67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif77743d0edd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:15.269422 containerd[1601]: 2025-07-12 00:13:15.231 [INFO][4582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.132/32] ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Namespace="calico-system" Pod="csi-node-driver-f2q67" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:15.269422 containerd[1601]: 2025-07-12 00:13:15.231 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif77743d0edd ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Namespace="calico-system" Pod="csi-node-driver-f2q67" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:15.269422 containerd[1601]: 2025-07-12 00:13:15.243 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Namespace="calico-system" Pod="csi-node-driver-f2q67" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:15.269422 
containerd[1601]: 2025-07-12 00:13:15.249 [INFO][4582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Namespace="calico-system" Pod="csi-node-driver-f2q67" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf0041c9-fdb6-4de6-99ec-d3644807d402", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d", Pod:"csi-node-driver-f2q67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif77743d0edd", MAC:"02:04:73:d1:15:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:15.269422 containerd[1601]: 
2025-07-12 00:13:15.265 [INFO][4582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d" Namespace="calico-system" Pod="csi-node-driver-f2q67" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:15.323700 containerd[1601]: time="2025-07-12T00:13:15.322882057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:15.323700 containerd[1601]: time="2025-07-12T00:13:15.322964377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:15.323700 containerd[1601]: time="2025-07-12T00:13:15.323082578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:15.324345 containerd[1601]: time="2025-07-12T00:13:15.324285420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:15.352735 systemd-networkd[1232]: cali18fe0076dae: Link UP Jul 12 00:13:15.353829 systemd-networkd[1232]: cali18fe0076dae: Gained carrier Jul 12 00:13:15.394772 kubelet[2706]: I0712 00:13:15.394725 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.137 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0 coredns-7c65d6cfc9- kube-system b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832 922 0 2025-07-12 00:12:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-n-bdc5bebc5f coredns-7c65d6cfc9-4txl7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18fe0076dae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4txl7" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.138 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4txl7" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.190 [INFO][4615] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" HandleID="k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" 
Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.190 [INFO][4615] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" HandleID="k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-n-bdc5bebc5f", "pod":"coredns-7c65d6cfc9-4txl7", "timestamp":"2025-07-12 00:13:15.190447869 +0000 UTC"}, Hostname:"ci-4081-3-4-n-bdc5bebc5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.190 [INFO][4615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.225 [INFO][4615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.226 [INFO][4615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-bdc5bebc5f' Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.272 [INFO][4615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.283 [INFO][4615] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.293 [INFO][4615] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.296 [INFO][4615] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.301 [INFO][4615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.301 [INFO][4615] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.304 [INFO][4615] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.321 [INFO][4615] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.336 [INFO][4615] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.133/26] block=192.168.106.128/26 handle="k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.336 [INFO][4615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.133/26] handle="k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.337 [INFO][4615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:15.403083 containerd[1601]: 2025-07-12 00:13:15.337 [INFO][4615] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.133/26] IPv6=[] ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" HandleID="k8s-pod-network.707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.404473 containerd[1601]: 2025-07-12 00:13:15.342 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4txl7" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"", Pod:"coredns-7c65d6cfc9-4txl7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18fe0076dae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:15.404473 containerd[1601]: 2025-07-12 00:13:15.343 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.133/32] ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4txl7" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.404473 containerd[1601]: 2025-07-12 00:13:15.343 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18fe0076dae ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4txl7" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.404473 containerd[1601]: 2025-07-12 00:13:15.363 [INFO][4592] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4txl7" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.404473 containerd[1601]: 2025-07-12 00:13:15.371 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4txl7" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee", Pod:"coredns-7c65d6cfc9-4txl7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18fe0076dae", 
MAC:"36:a2:c2:07:48:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:15.404473 containerd[1601]: 2025-07-12 00:13:15.386 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4txl7" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:15.445526 containerd[1601]: time="2025-07-12T00:13:15.445356462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:15.445526 containerd[1601]: time="2025-07-12T00:13:15.445416063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:15.445526 containerd[1601]: time="2025-07-12T00:13:15.445435423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:15.447050 containerd[1601]: time="2025-07-12T00:13:15.445554863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:15.466520 systemd-networkd[1232]: calic34aee32c80: Gained IPv6LL Jul 12 00:13:15.481199 containerd[1601]: time="2025-07-12T00:13:15.481080746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f2q67,Uid:bf0041c9-fdb6-4de6-99ec-d3644807d402,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d\"" Jul 12 00:13:15.519653 containerd[1601]: time="2025-07-12T00:13:15.518711313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4txl7,Uid:b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832,Namespace:kube-system,Attempt:1,} returns sandbox id \"707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee\"" Jul 12 00:13:15.527823 containerd[1601]: time="2025-07-12T00:13:15.526963693Z" level=info msg="CreateContainer within sandbox \"707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:13:15.556495 containerd[1601]: time="2025-07-12T00:13:15.555623559Z" level=info msg="CreateContainer within sandbox \"707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7be8a0ef3adda36c32cd4f9b0544dc4c38f19c4acb8624996c55b0baff726ee4\"" Jul 12 00:13:15.558118 containerd[1601]: time="2025-07-12T00:13:15.557951885Z" level=info msg="StartContainer for \"7be8a0ef3adda36c32cd4f9b0544dc4c38f19c4acb8624996c55b0baff726ee4\"" Jul 12 00:13:15.672872 containerd[1601]: time="2025-07-12T00:13:15.672646072Z" level=info msg="StartContainer for \"7be8a0ef3adda36c32cd4f9b0544dc4c38f19c4acb8624996c55b0baff726ee4\" returns successfully" Jul 12 00:13:15.816410 containerd[1601]: time="2025-07-12T00:13:15.815884646Z" level=info msg="StopPodSandbox for \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\"" Jul 12 00:13:15.825089 containerd[1601]: 
time="2025-07-12T00:13:15.823224023Z" level=info msg="StopPodSandbox for \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\"" Jul 12 00:13:15.829475 containerd[1601]: time="2025-07-12T00:13:15.829097956Z" level=info msg="StopPodSandbox for \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\"" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:15.959 [INFO][4834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:15.960 [INFO][4834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" iface="eth0" netns="/var/run/netns/cni-52b88a6f-ecb2-8847-2218-5614352c86c6" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:15.961 [INFO][4834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" iface="eth0" netns="/var/run/netns/cni-52b88a6f-ecb2-8847-2218-5614352c86c6" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:15.961 [INFO][4834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" iface="eth0" netns="/var/run/netns/cni-52b88a6f-ecb2-8847-2218-5614352c86c6" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:15.961 [INFO][4834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:15.961 [INFO][4834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:16.021 [INFO][4853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:16.022 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:16.022 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:16.038 [WARNING][4853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:16.039 [INFO][4853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:16.042 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:16.067501 containerd[1601]: 2025-07-12 00:13:16.050 [INFO][4834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:16.069830 containerd[1601]: time="2025-07-12T00:13:16.069801267Z" level=info msg="TearDown network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\" successfully" Jul 12 00:13:16.070013 containerd[1601]: time="2025-07-12T00:13:16.069949347Z" level=info msg="StopPodSandbox for \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\" returns successfully" Jul 12 00:13:16.074638 containerd[1601]: time="2025-07-12T00:13:16.074364437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b654b5ccd-2zvhb,Uid:b638a997-3c36-4850-99ab-ab3d678917cc,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:13:16.078854 systemd[1]: run-netns-cni\x2d52b88a6f\x2decb2\x2d8847\x2d2218\x2d5614352c86c6.mount: Deactivated successfully. 
Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:15.982 [INFO][4841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:15.983 [INFO][4841] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" iface="eth0" netns="/var/run/netns/cni-62ee1c24-b8c8-b5af-3b2b-384807749f2e" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:15.984 [INFO][4841] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" iface="eth0" netns="/var/run/netns/cni-62ee1c24-b8c8-b5af-3b2b-384807749f2e" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:15.985 [INFO][4841] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" iface="eth0" netns="/var/run/netns/cni-62ee1c24-b8c8-b5af-3b2b-384807749f2e" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:15.985 [INFO][4841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:15.985 [INFO][4841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:16.043 [INFO][4858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:16.044 
[INFO][4858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:16.044 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:16.069 [WARNING][4858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:16.070 [INFO][4858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:16.080 [INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:16.096464 containerd[1601]: 2025-07-12 00:13:16.091 [INFO][4841] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:16.097885 containerd[1601]: time="2025-07-12T00:13:16.097474687Z" level=info msg="TearDown network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\" successfully" Jul 12 00:13:16.097885 containerd[1601]: time="2025-07-12T00:13:16.097505367Z" level=info msg="StopPodSandbox for \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\" returns successfully" Jul 12 00:13:16.098471 containerd[1601]: time="2025-07-12T00:13:16.098225089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tpnrh,Uid:c86f07ef-0940-4c98-a612-68a23cab6908,Namespace:kube-system,Attempt:1,}" Jul 12 00:13:16.107822 systemd[1]: run-netns-cni\x2d62ee1c24\x2db8c8\x2db5af\x2d3b2b\x2d384807749f2e.mount: Deactivated successfully. Jul 12 00:13:16.133906 kubelet[2706]: I0712 00:13:16.131939 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4txl7" podStartSLOduration=41.131921043 podStartE2EDuration="41.131921043s" podCreationTimestamp="2025-07-12 00:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:13:16.106121306 +0000 UTC m=+46.438661957" watchObservedRunningTime="2025-07-12 00:13:16.131921043 +0000 UTC m=+46.464461694" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:15.984 [INFO][4827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:15.985 [INFO][4827] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" iface="eth0" netns="/var/run/netns/cni-1a5f8358-01c9-808d-b20d-f4eef2f77de4" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:15.985 [INFO][4827] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" iface="eth0" netns="/var/run/netns/cni-1a5f8358-01c9-808d-b20d-f4eef2f77de4" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:15.986 [INFO][4827] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" iface="eth0" netns="/var/run/netns/cni-1a5f8358-01c9-808d-b20d-f4eef2f77de4" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:15.986 [INFO][4827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:15.986 [INFO][4827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:16.054 [INFO][4860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:16.054 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:16.082 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:16.120 [WARNING][4860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:16.122 [INFO][4860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:16.135 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:16.150967 containerd[1601]: 2025-07-12 00:13:16.143 [INFO][4827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:16.154248 containerd[1601]: time="2025-07-12T00:13:16.154184251Z" level=info msg="TearDown network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\" successfully" Jul 12 00:13:16.154248 containerd[1601]: time="2025-07-12T00:13:16.154238771Z" level=info msg="StopPodSandbox for \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\" returns successfully" Jul 12 00:13:16.156372 containerd[1601]: time="2025-07-12T00:13:16.156323416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f5749fd-n2b8n,Uid:2c8b35e8-59cb-4a47-869c-9d6193668f96,Namespace:calico-system,Attempt:1,}" Jul 12 00:13:16.158126 systemd[1]: run-netns-cni\x2d1a5f8358\x2d01c9\x2d808d\x2db20d\x2df4eef2f77de4.mount: Deactivated successfully. 
Jul 12 00:13:16.283218 containerd[1601]: time="2025-07-12T00:13:16.282762372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:16.286840 containerd[1601]: time="2025-07-12T00:13:16.286794541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:13:16.288948 containerd[1601]: time="2025-07-12T00:13:16.288693025Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:16.301129 containerd[1601]: time="2025-07-12T00:13:16.301079132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:16.304032 containerd[1601]: time="2025-07-12T00:13:16.301912854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 4.099132298s" Jul 12 00:13:16.304032 containerd[1601]: time="2025-07-12T00:13:16.301977174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:13:16.305678 containerd[1601]: time="2025-07-12T00:13:16.304938220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:13:16.312790 containerd[1601]: time="2025-07-12T00:13:16.312728837Z" level=info msg="CreateContainer within sandbox 
\"c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:13:16.347843 containerd[1601]: time="2025-07-12T00:13:16.347702714Z" level=info msg="CreateContainer within sandbox \"c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"07737bc3230a6c6586e1d78b6adfea04e13a232d1985cfaf8a6a656014694726\"" Jul 12 00:13:16.353269 containerd[1601]: time="2025-07-12T00:13:16.353208046Z" level=info msg="StartContainer for \"07737bc3230a6c6586e1d78b6adfea04e13a232d1985cfaf8a6a656014694726\"" Jul 12 00:13:16.422065 systemd-networkd[1232]: cali81b5433cb02: Link UP Jul 12 00:13:16.423986 systemd-networkd[1232]: cali81b5433cb02: Gained carrier Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.257 [INFO][4899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0 calico-kube-controllers-54f5749fd- calico-system 2c8b35e8-59cb-4a47-869c-9d6193668f96 941 0 2025-07-12 00:12:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54f5749fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-n-bdc5bebc5f calico-kube-controllers-54f5749fd-n2b8n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali81b5433cb02 [] [] }} ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Namespace="calico-system" Pod="calico-kube-controllers-54f5749fd-n2b8n" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.258 [INFO][4899] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Namespace="calico-system" Pod="calico-kube-controllers-54f5749fd-n2b8n" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.326 [INFO][4920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" HandleID="k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.326 [INFO][4920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" HandleID="k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-bdc5bebc5f", "pod":"calico-kube-controllers-54f5749fd-n2b8n", "timestamp":"2025-07-12 00:13:16.325062264 +0000 UTC"}, Hostname:"ci-4081-3-4-n-bdc5bebc5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.326 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.326 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.326 [INFO][4920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-bdc5bebc5f' Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.342 [INFO][4920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.357 [INFO][4920] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.367 [INFO][4920] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.370 [INFO][4920] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.376 [INFO][4920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.376 [INFO][4920] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.379 [INFO][4920] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321 Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.393 [INFO][4920] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.408 [INFO][4920] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.134/26] block=192.168.106.128/26 handle="k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.408 [INFO][4920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.134/26] handle="k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.409 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:16.456080 containerd[1601]: 2025-07-12 00:13:16.409 [INFO][4920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.134/26] IPv6=[] ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" HandleID="k8s-pod-network.a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.457189 containerd[1601]: 2025-07-12 00:13:16.413 [INFO][4899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Namespace="calico-system" Pod="calico-kube-controllers-54f5749fd-n2b8n" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0", GenerateName:"calico-kube-controllers-54f5749fd-", Namespace:"calico-system", SelfLink:"", UID:"2c8b35e8-59cb-4a47-869c-9d6193668f96", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f5749fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"", Pod:"calico-kube-controllers-54f5749fd-n2b8n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81b5433cb02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:16.457189 containerd[1601]: 2025-07-12 00:13:16.413 [INFO][4899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.134/32] ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Namespace="calico-system" Pod="calico-kube-controllers-54f5749fd-n2b8n" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.457189 containerd[1601]: 2025-07-12 00:13:16.413 [INFO][4899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81b5433cb02 ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Namespace="calico-system" Pod="calico-kube-controllers-54f5749fd-n2b8n" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.457189 containerd[1601]: 2025-07-12 00:13:16.425 [INFO][4899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Namespace="calico-system" Pod="calico-kube-controllers-54f5749fd-n2b8n" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.457189 containerd[1601]: 2025-07-12 00:13:16.427 [INFO][4899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Namespace="calico-system" Pod="calico-kube-controllers-54f5749fd-n2b8n" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0", GenerateName:"calico-kube-controllers-54f5749fd-", Namespace:"calico-system", SelfLink:"", UID:"2c8b35e8-59cb-4a47-869c-9d6193668f96", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f5749fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321", Pod:"calico-kube-controllers-54f5749fd-n2b8n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81b5433cb02", MAC:"7a:5d:bd:e5:c7:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:16.457189 containerd[1601]: 2025-07-12 00:13:16.447 [INFO][4899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321" Namespace="calico-system" Pod="calico-kube-controllers-54f5749fd-n2b8n" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:16.479975 containerd[1601]: time="2025-07-12T00:13:16.479779562Z" level=info msg="StartContainer for \"07737bc3230a6c6586e1d78b6adfea04e13a232d1985cfaf8a6a656014694726\" returns successfully" Jul 12 00:13:16.508615 containerd[1601]: time="2025-07-12T00:13:16.508355625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:16.508615 containerd[1601]: time="2025-07-12T00:13:16.508416385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:16.508615 containerd[1601]: time="2025-07-12T00:13:16.508430865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:16.509087 containerd[1601]: time="2025-07-12T00:13:16.508961786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:16.531105 systemd-networkd[1232]: cali811ec2e5713: Link UP Jul 12 00:13:16.533677 systemd-networkd[1232]: cali811ec2e5713: Gained carrier Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.230 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0 calico-apiserver-b654b5ccd- calico-apiserver b638a997-3c36-4850-99ab-ab3d678917cc 940 0 2025-07-12 00:12:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b654b5ccd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-n-bdc5bebc5f calico-apiserver-b654b5ccd-2zvhb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali811ec2e5713 [] [] }} ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-2zvhb" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.231 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-2zvhb" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.333 [INFO][4914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" HandleID="k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" 
Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.333 [INFO][4914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" HandleID="k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000308ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-n-bdc5bebc5f", "pod":"calico-apiserver-b654b5ccd-2zvhb", "timestamp":"2025-07-12 00:13:16.332953442 +0000 UTC"}, Hostname:"ci-4081-3-4-n-bdc5bebc5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.334 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.408 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.409 [INFO][4914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-bdc5bebc5f' Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.443 [INFO][4914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.457 [INFO][4914] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.469 [INFO][4914] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.482 [INFO][4914] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.487 [INFO][4914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.487 [INFO][4914] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.490 [INFO][4914] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7 Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.499 [INFO][4914] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.514 [INFO][4914] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.135/26] block=192.168.106.128/26 handle="k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.514 [INFO][4914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.135/26] handle="k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.514 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:16.573905 containerd[1601]: 2025-07-12 00:13:16.514 [INFO][4914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.135/26] IPv6=[] ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" HandleID="k8s-pod-network.8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.574513 containerd[1601]: 2025-07-12 00:13:16.520 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-2zvhb" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0", GenerateName:"calico-apiserver-b654b5ccd-", Namespace:"calico-apiserver", SelfLink:"", UID:"b638a997-3c36-4850-99ab-ab3d678917cc", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b654b5ccd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"", Pod:"calico-apiserver-b654b5ccd-2zvhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali811ec2e5713", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:16.574513 containerd[1601]: 2025-07-12 00:13:16.520 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.135/32] ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-2zvhb" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.574513 containerd[1601]: 2025-07-12 00:13:16.521 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali811ec2e5713 ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-2zvhb" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.574513 containerd[1601]: 2025-07-12 00:13:16.535 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Namespace="calico-apiserver" 
Pod="calico-apiserver-b654b5ccd-2zvhb" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.574513 containerd[1601]: 2025-07-12 00:13:16.537 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-2zvhb" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0", GenerateName:"calico-apiserver-b654b5ccd-", Namespace:"calico-apiserver", SelfLink:"", UID:"b638a997-3c36-4850-99ab-ab3d678917cc", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b654b5ccd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7", Pod:"calico-apiserver-b654b5ccd-2zvhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali811ec2e5713", 
MAC:"a2:ee:90:52:b6:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:16.574513 containerd[1601]: 2025-07-12 00:13:16.566 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7" Namespace="calico-apiserver" Pod="calico-apiserver-b654b5ccd-2zvhb" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:16.616162 containerd[1601]: time="2025-07-12T00:13:16.613744975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:16.616162 containerd[1601]: time="2025-07-12T00:13:16.613801335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:16.616162 containerd[1601]: time="2025-07-12T00:13:16.613816535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:16.616162 containerd[1601]: time="2025-07-12T00:13:16.613894855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:16.636433 systemd-networkd[1232]: cali1ef6dde0b3b: Link UP Jul 12 00:13:16.641870 systemd-networkd[1232]: cali1ef6dde0b3b: Gained carrier Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.272 [INFO][4884] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0 coredns-7c65d6cfc9- kube-system c86f07ef-0940-4c98-a612-68a23cab6908 942 0 2025-07-12 00:12:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-n-bdc5bebc5f coredns-7c65d6cfc9-tpnrh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ef6dde0b3b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tpnrh" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.273 [INFO][4884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tpnrh" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.366 [INFO][4930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" HandleID="k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.366 [INFO][4930] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" HandleID="k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-n-bdc5bebc5f", "pod":"coredns-7c65d6cfc9-tpnrh", "timestamp":"2025-07-12 00:13:16.363802949 +0000 UTC"}, Hostname:"ci-4081-3-4-n-bdc5bebc5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.366 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.514 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.514 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-bdc5bebc5f' Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.547 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.561 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.571 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.575 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.579 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.580 [INFO][4930] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.583 [INFO][4930] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9 Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.596 [INFO][4930] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.609 [INFO][4930] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.136/26] block=192.168.106.128/26 handle="k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.609 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.136/26] handle="k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" host="ci-4081-3-4-n-bdc5bebc5f" Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.609 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:16.687329 containerd[1601]: 2025-07-12 00:13:16.609 [INFO][4930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.136/26] IPv6=[] ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" HandleID="k8s-pod-network.1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.687954 containerd[1601]: 2025-07-12 00:13:16.618 [INFO][4884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tpnrh" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c86f07ef-0940-4c98-a612-68a23cab6908", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"", Pod:"coredns-7c65d6cfc9-tpnrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ef6dde0b3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:16.687954 containerd[1601]: 2025-07-12 00:13:16.619 [INFO][4884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.136/32] ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tpnrh" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.687954 containerd[1601]: 2025-07-12 00:13:16.619 [INFO][4884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ef6dde0b3b ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tpnrh" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.687954 containerd[1601]: 2025-07-12 00:13:16.646 [INFO][4884] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tpnrh" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.687954 containerd[1601]: 2025-07-12 00:13:16.651 [INFO][4884] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tpnrh" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c86f07ef-0940-4c98-a612-68a23cab6908", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9", Pod:"coredns-7c65d6cfc9-tpnrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ef6dde0b3b", 
MAC:"e6:70:c4:12:0d:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:16.687954 containerd[1601]: 2025-07-12 00:13:16.665 [INFO][4884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tpnrh" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:16.723811 containerd[1601]: time="2025-07-12T00:13:16.721387370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f5749fd-n2b8n,Uid:2c8b35e8-59cb-4a47-869c-9d6193668f96,Namespace:calico-system,Attempt:1,} returns sandbox id \"a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321\"" Jul 12 00:13:16.746914 systemd-networkd[1232]: cali18fe0076dae: Gained IPv6LL Jul 12 00:13:16.761279 containerd[1601]: time="2025-07-12T00:13:16.760564495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:16.761279 containerd[1601]: time="2025-07-12T00:13:16.760641896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:16.761279 containerd[1601]: time="2025-07-12T00:13:16.760658696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:16.761279 containerd[1601]: time="2025-07-12T00:13:16.760854336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:16.779048 containerd[1601]: time="2025-07-12T00:13:16.778898055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b654b5ccd-2zvhb,Uid:b638a997-3c36-4850-99ab-ab3d678917cc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7\"" Jul 12 00:13:16.819995 containerd[1601]: time="2025-07-12T00:13:16.819939945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tpnrh,Uid:c86f07ef-0940-4c98-a612-68a23cab6908,Namespace:kube-system,Attempt:1,} returns sandbox id \"1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9\"" Jul 12 00:13:16.823816 containerd[1601]: time="2025-07-12T00:13:16.823667953Z" level=info msg="CreateContainer within sandbox \"1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:13:16.837523 containerd[1601]: time="2025-07-12T00:13:16.837419503Z" level=info msg="CreateContainer within sandbox \"1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a31a63672faa078cc99c3d6374ad6e70078d73ef0cf88c0efceab088bfbe076\"" Jul 12 00:13:16.839836 containerd[1601]: time="2025-07-12T00:13:16.838330905Z" level=info msg="StartContainer for \"3a31a63672faa078cc99c3d6374ad6e70078d73ef0cf88c0efceab088bfbe076\"" Jul 12 00:13:16.901361 containerd[1601]: time="2025-07-12T00:13:16.900026000Z" level=info msg="StartContainer for \"3a31a63672faa078cc99c3d6374ad6e70078d73ef0cf88c0efceab088bfbe076\" returns successfully" Jul 12 00:13:17.128291 kubelet[2706]: I0712 00:13:17.127106 2706 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-gp2hz" podStartSLOduration=22.025003814 podStartE2EDuration="26.127089319s" podCreationTimestamp="2025-07-12 00:12:51 +0000 UTC" firstStartedPulling="2025-07-12 00:13:12.201405272 +0000 UTC m=+42.533945923" lastFinishedPulling="2025-07-12 00:13:16.303490777 +0000 UTC m=+46.636031428" observedRunningTime="2025-07-12 00:13:17.127085239 +0000 UTC m=+47.459625890" watchObservedRunningTime="2025-07-12 00:13:17.127089319 +0000 UTC m=+47.459629930" Jul 12 00:13:17.193626 systemd-networkd[1232]: calif77743d0edd: Gained IPv6LL Jul 12 00:13:18.097292 systemd-networkd[1232]: cali811ec2e5713: Gained IPv6LL Jul 12 00:13:18.409889 systemd-networkd[1232]: cali81b5433cb02: Gained IPv6LL Jul 12 00:13:18.602005 systemd-networkd[1232]: cali1ef6dde0b3b: Gained IPv6LL Jul 12 00:13:19.763699 containerd[1601]: time="2025-07-12T00:13:19.763639119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:19.770485 containerd[1601]: time="2025-07-12T00:13:19.766016803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:13:19.770485 containerd[1601]: time="2025-07-12T00:13:19.768578088Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:19.793489 containerd[1601]: time="2025-07-12T00:13:19.791105249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:19.803472 containerd[1601]: time="2025-07-12T00:13:19.799476904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id 
\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.494289803s" Jul 12 00:13:19.803472 containerd[1601]: time="2025-07-12T00:13:19.799556184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:13:19.805901 containerd[1601]: time="2025-07-12T00:13:19.805856915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:13:19.811321 containerd[1601]: time="2025-07-12T00:13:19.811266285Z" level=info msg="CreateContainer within sandbox \"99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:13:19.844473 containerd[1601]: time="2025-07-12T00:13:19.844389424Z" level=info msg="CreateContainer within sandbox \"99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fa2d4c315d675972d562c99dd38401b7f5dd41b264df4e1e3622e1205f98cf53\"" Jul 12 00:13:19.845859 containerd[1601]: time="2025-07-12T00:13:19.845819467Z" level=info msg="StartContainer for \"fa2d4c315d675972d562c99dd38401b7f5dd41b264df4e1e3622e1205f98cf53\"" Jul 12 00:13:20.047466 containerd[1601]: time="2025-07-12T00:13:20.047351584Z" level=info msg="StartContainer for \"fa2d4c315d675972d562c99dd38401b7f5dd41b264df4e1e3622e1205f98cf53\" returns successfully" Jul 12 00:13:20.169495 kubelet[2706]: I0712 00:13:20.166096 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tpnrh" podStartSLOduration=45.166079065 podStartE2EDuration="45.166079065s" podCreationTimestamp="2025-07-12 00:12:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:13:17.152824331 +0000 UTC m=+47.485364982" watchObservedRunningTime="2025-07-12 00:13:20.166079065 +0000 UTC m=+50.498619716" Jul 12 00:13:20.188258 systemd[1]: run-containerd-runc-k8s.io-fa2d4c315d675972d562c99dd38401b7f5dd41b264df4e1e3622e1205f98cf53-runc.6icsvM.mount: Deactivated successfully. Jul 12 00:13:21.154259 kubelet[2706]: I0712 00:13:21.152862 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:13:21.181377 containerd[1601]: time="2025-07-12T00:13:21.181316878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:21.184474 containerd[1601]: time="2025-07-12T00:13:21.182797121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:13:21.184474 containerd[1601]: time="2025-07-12T00:13:21.183798602Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:21.186114 containerd[1601]: time="2025-07-12T00:13:21.186070526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:21.187535 containerd[1601]: time="2025-07-12T00:13:21.187502928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.381593533s" Jul 12 00:13:21.187623 containerd[1601]: 
time="2025-07-12T00:13:21.187545168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:13:21.189054 containerd[1601]: time="2025-07-12T00:13:21.188726770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:13:21.191463 containerd[1601]: time="2025-07-12T00:13:21.190077292Z" level=info msg="CreateContainer within sandbox \"7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:13:21.210607 containerd[1601]: time="2025-07-12T00:13:21.210546564Z" level=info msg="CreateContainer within sandbox \"7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aa97dab733507644fa2a439dcb066ef89f2964acffbeab7aa64a53d58689dd63\"" Jul 12 00:13:21.213004 containerd[1601]: time="2025-07-12T00:13:21.212958528Z" level=info msg="StartContainer for \"aa97dab733507644fa2a439dcb066ef89f2964acffbeab7aa64a53d58689dd63\"" Jul 12 00:13:21.419573 containerd[1601]: time="2025-07-12T00:13:21.418602494Z" level=info msg="StartContainer for \"aa97dab733507644fa2a439dcb066ef89f2964acffbeab7aa64a53d58689dd63\" returns successfully" Jul 12 00:13:24.600122 containerd[1601]: time="2025-07-12T00:13:24.600066530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:24.602733 containerd[1601]: time="2025-07-12T00:13:24.602486413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:13:24.604539 containerd[1601]: time="2025-07-12T00:13:24.603602152Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:24.607320 containerd[1601]: time="2025-07-12T00:13:24.607239937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:24.619334 containerd[1601]: time="2025-07-12T00:13:24.619269629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.430506699s" Jul 12 00:13:24.619334 containerd[1601]: time="2025-07-12T00:13:24.619326510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:13:24.623481 containerd[1601]: time="2025-07-12T00:13:24.620891658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:13:24.653721 containerd[1601]: time="2025-07-12T00:13:24.653317071Z" level=info msg="CreateContainer within sandbox \"a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:13:24.694143 containerd[1601]: time="2025-07-12T00:13:24.693126455Z" level=info msg="CreateContainer within sandbox \"a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"50b980c207beaaea7f8b34f490339fd9668941fb683b59f6764ba198f97e519a\"" Jul 12 00:13:24.694401 containerd[1601]: time="2025-07-12T00:13:24.694336916Z" level=info msg="StartContainer for 
\"50b980c207beaaea7f8b34f490339fd9668941fb683b59f6764ba198f97e519a\"" Jul 12 00:13:24.837333 containerd[1601]: time="2025-07-12T00:13:24.837230402Z" level=info msg="StartContainer for \"50b980c207beaaea7f8b34f490339fd9668941fb683b59f6764ba198f97e519a\" returns successfully" Jul 12 00:13:25.004345 containerd[1601]: time="2025-07-12T00:13:25.003602022Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:25.004979 containerd[1601]: time="2025-07-12T00:13:25.004952365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:13:25.010247 containerd[1601]: time="2025-07-12T00:13:25.010199335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 389.268676ms" Jul 12 00:13:25.010391 containerd[1601]: time="2025-07-12T00:13:25.010375698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:13:25.012833 containerd[1601]: time="2025-07-12T00:13:25.012797620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:13:25.017079 containerd[1601]: time="2025-07-12T00:13:25.017034693Z" level=info msg="CreateContainer within sandbox \"8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:13:25.034872 containerd[1601]: time="2025-07-12T00:13:25.034821238Z" level=info msg="CreateContainer within sandbox 
\"8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d841a79cfb80529fc013af6b047700874f15877d97c304163a67eab4248c07f9\"" Jul 12 00:13:25.036565 containerd[1601]: time="2025-07-12T00:13:25.035707453Z" level=info msg="StartContainer for \"d841a79cfb80529fc013af6b047700874f15877d97c304163a67eab4248c07f9\"" Jul 12 00:13:25.126974 containerd[1601]: time="2025-07-12T00:13:25.126926381Z" level=info msg="StartContainer for \"d841a79cfb80529fc013af6b047700874f15877d97c304163a67eab4248c07f9\" returns successfully" Jul 12 00:13:25.221899 kubelet[2706]: I0712 00:13:25.221414 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b654b5ccd-k7qq8" podStartSLOduration=33.715978616 podStartE2EDuration="39.221393565s" podCreationTimestamp="2025-07-12 00:12:46 +0000 UTC" firstStartedPulling="2025-07-12 00:13:14.298896123 +0000 UTC m=+44.631436774" lastFinishedPulling="2025-07-12 00:13:19.804311072 +0000 UTC m=+50.136851723" observedRunningTime="2025-07-12 00:13:20.166217345 +0000 UTC m=+50.498757956" watchObservedRunningTime="2025-07-12 00:13:25.221393565 +0000 UTC m=+55.553934256" Jul 12 00:13:25.249425 kubelet[2706]: I0712 00:13:25.248610 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b654b5ccd-2zvhb" podStartSLOduration=31.016476317 podStartE2EDuration="39.248590672s" podCreationTimestamp="2025-07-12 00:12:46 +0000 UTC" firstStartedPulling="2025-07-12 00:13:16.780293818 +0000 UTC m=+47.112834469" lastFinishedPulling="2025-07-12 00:13:25.012408093 +0000 UTC m=+55.344948824" observedRunningTime="2025-07-12 00:13:25.220241665 +0000 UTC m=+55.552782316" watchObservedRunningTime="2025-07-12 00:13:25.248590672 +0000 UTC m=+55.581131283" Jul 12 00:13:25.250190 kubelet[2706]: I0712 00:13:25.250143 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-54f5749fd-n2b8n" podStartSLOduration=26.353373425 podStartE2EDuration="34.250132099s" podCreationTimestamp="2025-07-12 00:12:51 +0000 UTC" firstStartedPulling="2025-07-12 00:13:16.723631895 +0000 UTC m=+47.056172506" lastFinishedPulling="2025-07-12 00:13:24.620390529 +0000 UTC m=+54.952931180" observedRunningTime="2025-07-12 00:13:25.247346091 +0000 UTC m=+55.579886742" watchObservedRunningTime="2025-07-12 00:13:25.250132099 +0000 UTC m=+55.582672710" Jul 12 00:13:26.523887 containerd[1601]: time="2025-07-12T00:13:26.523809540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:26.525667 containerd[1601]: time="2025-07-12T00:13:26.525619330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:13:26.527444 containerd[1601]: time="2025-07-12T00:13:26.527211676Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:26.530992 containerd[1601]: time="2025-07-12T00:13:26.530950339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:26.531812 containerd[1601]: time="2025-07-12T00:13:26.531676911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.518699808s" Jul 12 
00:13:26.531812 containerd[1601]: time="2025-07-12T00:13:26.531724832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:13:26.537391 containerd[1601]: time="2025-07-12T00:13:26.537276125Z" level=info msg="CreateContainer within sandbox \"7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:13:26.567413 containerd[1601]: time="2025-07-12T00:13:26.567366587Z" level=info msg="CreateContainer within sandbox \"7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a37da2e4870a76c2fe6f844372bfcaccdce482ca3b3247d9bf1e3efabcabf208\"" Jul 12 00:13:26.574327 containerd[1601]: time="2025-07-12T00:13:26.572779358Z" level=info msg="StartContainer for \"a37da2e4870a76c2fe6f844372bfcaccdce482ca3b3247d9bf1e3efabcabf208\"" Jul 12 00:13:26.667589 containerd[1601]: time="2025-07-12T00:13:26.666949372Z" level=info msg="StartContainer for \"a37da2e4870a76c2fe6f844372bfcaccdce482ca3b3247d9bf1e3efabcabf208\" returns successfully" Jul 12 00:13:26.928676 kubelet[2706]: I0712 00:13:26.928573 2706 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:13:26.929149 kubelet[2706]: I0712 00:13:26.929137 2706 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:13:28.250028 kubelet[2706]: I0712 00:13:28.249945 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f2q67" podStartSLOduration=26.200212391 podStartE2EDuration="37.249924777s" podCreationTimestamp="2025-07-12 
00:12:51 +0000 UTC" firstStartedPulling="2025-07-12 00:13:15.483490871 +0000 UTC m=+45.816031522" lastFinishedPulling="2025-07-12 00:13:26.533203257 +0000 UTC m=+56.865743908" observedRunningTime="2025-07-12 00:13:27.213740771 +0000 UTC m=+57.546281422" watchObservedRunningTime="2025-07-12 00:13:28.249924777 +0000 UTC m=+58.582465428" Jul 12 00:13:29.799992 containerd[1601]: time="2025-07-12T00:13:29.799812165Z" level=info msg="StopPodSandbox for \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\"" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.865 [WARNING][5500] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.866 [INFO][5500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.866 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" iface="eth0" netns="" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.866 [INFO][5500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.866 [INFO][5500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.916 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.916 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.917 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.941 [WARNING][5510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.941 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.943 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:29.949406 containerd[1601]: 2025-07-12 00:13:29.945 [INFO][5500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:29.949406 containerd[1601]: time="2025-07-12T00:13:29.947596757Z" level=info msg="TearDown network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\" successfully" Jul 12 00:13:29.949406 containerd[1601]: time="2025-07-12T00:13:29.947622717Z" level=info msg="StopPodSandbox for \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\" returns successfully" Jul 12 00:13:29.951807 containerd[1601]: time="2025-07-12T00:13:29.950572723Z" level=info msg="RemovePodSandbox for \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\"" Jul 12 00:13:29.951807 containerd[1601]: time="2025-07-12T00:13:29.950606923Z" level=info msg="Forcibly stopping sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\"" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.087 [WARNING][5525] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" WorkloadEndpoint="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.087 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.087 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" iface="eth0" netns="" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.087 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.087 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.111 [INFO][5532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.111 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.111 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.125 [WARNING][5532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.125 [INFO][5532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" HandleID="k8s-pod-network.46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-whisker--6d7879f7fd--jl8nw-eth0" Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.130 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:30.143411 containerd[1601]: 2025-07-12 00:13:30.140 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e" Jul 12 00:13:30.144076 containerd[1601]: time="2025-07-12T00:13:30.143535469Z" level=info msg="TearDown network for sandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\" successfully" Jul 12 00:13:30.149764 containerd[1601]: time="2025-07-12T00:13:30.149236794Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:13:30.149764 containerd[1601]: time="2025-07-12T00:13:30.149519719Z" level=info msg="RemovePodSandbox \"46e52a83bf7565f21bf8cba3ffe4778f9cd87e310ee7996b780510e4b749451e\" returns successfully" Jul 12 00:13:30.151292 containerd[1601]: time="2025-07-12T00:13:30.151009821Z" level=info msg="StopPodSandbox for \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\"" Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.241 [WARNING][5546] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0", GenerateName:"calico-kube-controllers-54f5749fd-", Namespace:"calico-system", SelfLink:"", UID:"2c8b35e8-59cb-4a47-869c-9d6193668f96", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f5749fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321", Pod:"calico-kube-controllers-54f5749fd-n2b8n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81b5433cb02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.241 [INFO][5546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.241 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" iface="eth0" netns="" Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.241 [INFO][5546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.241 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.321 [INFO][5554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.322 [INFO][5554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.322 [INFO][5554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.340 [WARNING][5554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.340 [INFO][5554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.343 [INFO][5554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:30.352611 containerd[1601]: 2025-07-12 00:13:30.350 [INFO][5546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:30.353461 containerd[1601]: time="2025-07-12T00:13:30.353077682Z" level=info msg="TearDown network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\" successfully" Jul 12 00:13:30.353461 containerd[1601]: time="2025-07-12T00:13:30.353122723Z" level=info msg="StopPodSandbox for \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\" returns successfully" Jul 12 00:13:30.354072 containerd[1601]: time="2025-07-12T00:13:30.353810293Z" level=info msg="RemovePodSandbox for \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\"" Jul 12 00:13:30.354072 containerd[1601]: time="2025-07-12T00:13:30.353843734Z" level=info msg="Forcibly stopping sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\"" Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.423 [WARNING][5602] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0", GenerateName:"calico-kube-controllers-54f5749fd-", Namespace:"calico-system", SelfLink:"", UID:"2c8b35e8-59cb-4a47-869c-9d6193668f96", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f5749fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"a8cceefaafd529af5b20782c617b085b10eb44265faea5c4b3880555cbe0d321", Pod:"calico-kube-controllers-54f5749fd-n2b8n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81b5433cb02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.427 [INFO][5602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.427 [INFO][5602] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" iface="eth0" netns="" Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.427 [INFO][5602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.427 [INFO][5602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.462 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.462 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.462 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.472 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.473 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" HandleID="k8s-pod-network.1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--kube--controllers--54f5749fd--n2b8n-eth0" Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.475 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:30.481237 containerd[1601]: 2025-07-12 00:13:30.479 [INFO][5602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8" Jul 12 00:13:30.482841 containerd[1601]: time="2025-07-12T00:13:30.481301439Z" level=info msg="TearDown network for sandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\" successfully" Jul 12 00:13:30.490230 containerd[1601]: time="2025-07-12T00:13:30.490171532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:13:30.490334 containerd[1601]: time="2025-07-12T00:13:30.490246173Z" level=info msg="RemovePodSandbox \"1bea194c56a8459314f623885de55cce3b51932c64c808cdab682331974ad6c8\" returns successfully" Jul 12 00:13:30.490852 containerd[1601]: time="2025-07-12T00:13:30.490819822Z" level=info msg="StopPodSandbox for \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\"" Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.549 [WARNING][5627] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0", GenerateName:"calico-apiserver-b654b5ccd-", Namespace:"calico-apiserver", SelfLink:"", UID:"93a79160-6c7d-4ebd-95ed-d9d607047420", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b654b5ccd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec", Pod:"calico-apiserver-b654b5ccd-k7qq8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic34aee32c80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.549 [INFO][5627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.549 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" iface="eth0" netns="" Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.549 [INFO][5627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.549 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.588 [INFO][5634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.588 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.588 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.599 [WARNING][5634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.599 [INFO][5634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.602 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:30.609801 containerd[1601]: 2025-07-12 00:13:30.605 [INFO][5627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:30.610779 containerd[1601]: time="2025-07-12T00:13:30.609892602Z" level=info msg="TearDown network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\" successfully" Jul 12 00:13:30.610779 containerd[1601]: time="2025-07-12T00:13:30.609935243Z" level=info msg="StopPodSandbox for \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\" returns successfully" Jul 12 00:13:30.612577 containerd[1601]: time="2025-07-12T00:13:30.611150901Z" level=info msg="RemovePodSandbox for \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\"" Jul 12 00:13:30.612577 containerd[1601]: time="2025-07-12T00:13:30.611215142Z" level=info msg="Forcibly stopping sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\"" Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.665 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0", GenerateName:"calico-apiserver-b654b5ccd-", Namespace:"calico-apiserver", SelfLink:"", UID:"93a79160-6c7d-4ebd-95ed-d9d607047420", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b654b5ccd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"99c53d9c746851752840759182d197749018eab70220909a74faf4d961d656ec", Pod:"calico-apiserver-b654b5ccd-k7qq8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic34aee32c80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.670 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.670 [INFO][5648] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" iface="eth0" netns="" Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.670 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.670 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.704 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.704 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.704 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.718 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.718 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" HandleID="k8s-pod-network.1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--k7qq8-eth0" Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.721 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:30.726665 containerd[1601]: 2025-07-12 00:13:30.723 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd" Jul 12 00:13:30.729599 containerd[1601]: time="2025-07-12T00:13:30.729551191Z" level=info msg="TearDown network for sandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\" successfully" Jul 12 00:13:30.736492 containerd[1601]: time="2025-07-12T00:13:30.733975377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:13:30.736492 containerd[1601]: time="2025-07-12T00:13:30.734089139Z" level=info msg="RemovePodSandbox \"1bd33a55de542271863db82ac7f0b0387055ea0752ba18090c62f8069f4c15bd\" returns successfully" Jul 12 00:13:30.737827 containerd[1601]: time="2025-07-12T00:13:30.736976302Z" level=info msg="StopPodSandbox for \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\"" Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.788 [WARNING][5669] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf0041c9-fdb6-4de6-99ec-d3644807d402", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d", Pod:"csi-node-driver-f2q67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif77743d0edd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.789 [INFO][5669] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.789 [INFO][5669] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" iface="eth0" netns="" Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.789 [INFO][5669] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.789 [INFO][5669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.819 [INFO][5676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.819 [INFO][5676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.819 [INFO][5676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.833 [WARNING][5676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.833 [INFO][5676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.835 [INFO][5676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:30.845183 containerd[1601]: 2025-07-12 00:13:30.842 [INFO][5669] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:30.846158 containerd[1601]: time="2025-07-12T00:13:30.845982612Z" level=info msg="TearDown network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\" successfully" Jul 12 00:13:30.846158 containerd[1601]: time="2025-07-12T00:13:30.846031933Z" level=info msg="StopPodSandbox for \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\" returns successfully" Jul 12 00:13:30.848223 containerd[1601]: time="2025-07-12T00:13:30.848188005Z" level=info msg="RemovePodSandbox for \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\"" Jul 12 00:13:30.848291 containerd[1601]: time="2025-07-12T00:13:30.848225325Z" level=info msg="Forcibly stopping sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\"" Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.906 [WARNING][5690] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bf0041c9-fdb6-4de6-99ec-d3644807d402", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"7ead39e1a6355684a42c85b1dd3de37f7c90f00970f128d4a17ef81511828e2d", Pod:"csi-node-driver-f2q67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif77743d0edd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.907 [INFO][5690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.907 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" iface="eth0" netns="" Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.907 [INFO][5690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.907 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.931 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.932 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.932 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.943 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.944 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" HandleID="k8s-pod-network.e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-csi--node--driver--f2q67-eth0" Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.946 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:30.952365 containerd[1601]: 2025-07-12 00:13:30.950 [INFO][5690] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12" Jul 12 00:13:30.952922 containerd[1601]: time="2025-07-12T00:13:30.952398683Z" level=info msg="TearDown network for sandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\" successfully" Jul 12 00:13:30.963189 containerd[1601]: time="2025-07-12T00:13:30.962725597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:13:30.963189 containerd[1601]: time="2025-07-12T00:13:30.962835359Z" level=info msg="RemovePodSandbox \"e849895c4dc9b971a95abc2d7eaa0482d4477e6e9ab4780ebc9120c37441fd12\" returns successfully" Jul 12 00:13:30.963789 containerd[1601]: time="2025-07-12T00:13:30.963760933Z" level=info msg="StopPodSandbox for \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\"" Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.081 [WARNING][5711] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0", GenerateName:"calico-apiserver-b654b5ccd-", Namespace:"calico-apiserver", SelfLink:"", UID:"b638a997-3c36-4850-99ab-ab3d678917cc", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b654b5ccd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7", Pod:"calico-apiserver-b654b5ccd-2zvhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali811ec2e5713", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.082 [INFO][5711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.082 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" iface="eth0" netns="" Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.082 [INFO][5711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.082 [INFO][5711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.134 [INFO][5718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.135 [INFO][5718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.135 [INFO][5718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.148 [WARNING][5718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.148 [INFO][5718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.151 [INFO][5718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:31.157664 containerd[1601]: 2025-07-12 00:13:31.154 [INFO][5711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:31.157664 containerd[1601]: time="2025-07-12T00:13:31.157636448Z" level=info msg="TearDown network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\" successfully" Jul 12 00:13:31.158312 containerd[1601]: time="2025-07-12T00:13:31.157668889Z" level=info msg="StopPodSandbox for \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\" returns successfully" Jul 12 00:13:31.158658 containerd[1601]: time="2025-07-12T00:13:31.158617263Z" level=info msg="RemovePodSandbox for \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\"" Jul 12 00:13:31.158658 containerd[1601]: time="2025-07-12T00:13:31.158656983Z" level=info msg="Forcibly stopping sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\"" Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.214 [WARNING][5732] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0", GenerateName:"calico-apiserver-b654b5ccd-", Namespace:"calico-apiserver", SelfLink:"", UID:"b638a997-3c36-4850-99ab-ab3d678917cc", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b654b5ccd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"8153543106ad52702b6982f0650485e7ca2229f340a54991f925171fef4cace7", Pod:"calico-apiserver-b654b5ccd-2zvhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali811ec2e5713", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.215 [INFO][5732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.215 [INFO][5732] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" iface="eth0" netns="" Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.215 [INFO][5732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.215 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.244 [INFO][5739] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.244 [INFO][5739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.244 [INFO][5739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.254 [WARNING][5739] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.254 [INFO][5739] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" HandleID="k8s-pod-network.8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-calico--apiserver--b654b5ccd--2zvhb-eth0" Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.256 [INFO][5739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:31.260688 containerd[1601]: 2025-07-12 00:13:31.258 [INFO][5732] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8" Jul 12 00:13:31.262570 containerd[1601]: time="2025-07-12T00:13:31.260748188Z" level=info msg="TearDown network for sandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\" successfully" Jul 12 00:13:31.265951 containerd[1601]: time="2025-07-12T00:13:31.265891623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:13:31.266511 containerd[1601]: time="2025-07-12T00:13:31.266047025Z" level=info msg="RemovePodSandbox \"8df534f44de306fdfbdd517a52c9eaae7da824c9bc0207a7f8a2f7fa4d6e36f8\" returns successfully" Jul 12 00:13:31.268243 containerd[1601]: time="2025-07-12T00:13:31.268214857Z" level=info msg="StopPodSandbox for \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\"" Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.316 [WARNING][5753] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"95ba4f33-c2b5-452d-814d-3c80c989e70e", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1", Pod:"goldmane-58fd7646b9-gp2hz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calia289408f8c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.317 [INFO][5753] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.317 [INFO][5753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" iface="eth0" netns="" Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.317 [INFO][5753] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.317 [INFO][5753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.340 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.340 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.341 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.355 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.355 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.357 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:31.361488 containerd[1601]: 2025-07-12 00:13:31.359 [INFO][5753] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:31.361894 containerd[1601]: time="2025-07-12T00:13:31.361519694Z" level=info msg="TearDown network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\" successfully" Jul 12 00:13:31.361894 containerd[1601]: time="2025-07-12T00:13:31.361545454Z" level=info msg="StopPodSandbox for \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\" returns successfully" Jul 12 00:13:31.362763 containerd[1601]: time="2025-07-12T00:13:31.362721991Z" level=info msg="RemovePodSandbox for \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\"" Jul 12 00:13:31.362763 containerd[1601]: time="2025-07-12T00:13:31.362757672Z" level=info msg="Forcibly stopping sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\"" Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.412 [WARNING][5776] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"95ba4f33-c2b5-452d-814d-3c80c989e70e", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"c2d6a67f7bc399b26f888a3194975c5eb6bc0ca3fb8a00443de5697740a7d5a1", Pod:"goldmane-58fd7646b9-gp2hz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia289408f8c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.412 [INFO][5776] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.412 [INFO][5776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" iface="eth0" netns="" Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.412 [INFO][5776] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.412 [INFO][5776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.439 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.439 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.440 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.456 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.456 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" HandleID="k8s-pod-network.bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-goldmane--58fd7646b9--gp2hz-eth0" Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.459 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:31.466837 containerd[1601]: 2025-07-12 00:13:31.464 [INFO][5776] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b" Jul 12 00:13:31.466837 containerd[1601]: time="2025-07-12T00:13:31.466760705Z" level=info msg="TearDown network for sandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\" successfully" Jul 12 00:13:31.472241 containerd[1601]: time="2025-07-12T00:13:31.471944060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:13:31.472241 containerd[1601]: time="2025-07-12T00:13:31.472085382Z" level=info msg="RemovePodSandbox \"bc9dff0487da505b7fa119d4dc16b2ecd7080abe735d52d748dbef664a2f919b\" returns successfully" Jul 12 00:13:31.473500 containerd[1601]: time="2025-07-12T00:13:31.472957275Z" level=info msg="StopPodSandbox for \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\"" Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.521 [WARNING][5798] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee", Pod:"coredns-7c65d6cfc9-4txl7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18fe0076dae", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.521 [INFO][5798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.521 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" iface="eth0" netns="" Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.521 [INFO][5798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.521 [INFO][5798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.547 [INFO][5805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.547 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.547 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.564 [WARNING][5805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.564 [INFO][5805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.566 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:31.571533 containerd[1601]: 2025-07-12 00:13:31.568 [INFO][5798] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:31.574840 containerd[1601]: time="2025-07-12T00:13:31.572571844Z" level=info msg="TearDown network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\" successfully" Jul 12 00:13:31.574840 containerd[1601]: time="2025-07-12T00:13:31.572613965Z" level=info msg="StopPodSandbox for \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\" returns successfully" Jul 12 00:13:31.574840 containerd[1601]: time="2025-07-12T00:13:31.574303749Z" level=info msg="RemovePodSandbox for \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\"" Jul 12 00:13:31.574840 containerd[1601]: time="2025-07-12T00:13:31.574333910Z" level=info msg="Forcibly stopping sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\"" Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.677 [WARNING][5819] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b3a5dd31-1f9a-4dc4-ad4d-9d9da3bf2832", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"707fc4c01066e9c34a8146a25e13f460e9887b7473f83358e9cbc421779506ee", Pod:"coredns-7c65d6cfc9-4txl7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18fe0076dae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 
00:13:31.678 [INFO][5819] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.678 [INFO][5819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" iface="eth0" netns="" Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.678 [INFO][5819] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.678 [INFO][5819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.729 [INFO][5826] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.729 [INFO][5826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.729 [INFO][5826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.746 [WARNING][5826] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.746 [INFO][5826] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" HandleID="k8s-pod-network.417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--4txl7-eth0" Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.749 [INFO][5826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:31.758605 containerd[1601]: 2025-07-12 00:13:31.753 [INFO][5819] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886" Jul 12 00:13:31.761319 containerd[1601]: time="2025-07-12T00:13:31.760155652Z" level=info msg="TearDown network for sandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\" successfully" Jul 12 00:13:31.768826 containerd[1601]: time="2025-07-12T00:13:31.768734217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:13:31.770021 containerd[1601]: time="2025-07-12T00:13:31.769477548Z" level=info msg="RemovePodSandbox \"417c12a1078c186c15411172fad1a0108dacf60beadf725eebfc513508343886\" returns successfully" Jul 12 00:13:31.771615 containerd[1601]: time="2025-07-12T00:13:31.771568698Z" level=info msg="StopPodSandbox for \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\"" Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.854 [WARNING][5840] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c86f07ef-0940-4c98-a612-68a23cab6908", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9", Pod:"coredns-7c65d6cfc9-tpnrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ef6dde0b3b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.854 [INFO][5840] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.854 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" iface="eth0" netns="" Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.854 [INFO][5840] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.854 [INFO][5840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.895 [INFO][5847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.895 [INFO][5847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.895 [INFO][5847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.911 [WARNING][5847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.912 [INFO][5847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.915 [INFO][5847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:31.922970 containerd[1601]: 2025-07-12 00:13:31.918 [INFO][5840] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:31.924691 containerd[1601]: time="2025-07-12T00:13:31.924626165Z" level=info msg="TearDown network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\" successfully" Jul 12 00:13:31.924691 containerd[1601]: time="2025-07-12T00:13:31.924679566Z" level=info msg="StopPodSandbox for \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\" returns successfully" Jul 12 00:13:31.926934 containerd[1601]: time="2025-07-12T00:13:31.926045545Z" level=info msg="RemovePodSandbox for \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\"" Jul 12 00:13:31.926934 containerd[1601]: time="2025-07-12T00:13:31.926085386Z" level=info msg="Forcibly stopping sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\"" Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:31.993 [WARNING][5862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c86f07ef-0940-4c98-a612-68a23cab6908", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-bdc5bebc5f", ContainerID:"1e332b7e39977b746d8f694769a85bba1fc3a6dff5560d374ba47c2945d5add9", Pod:"coredns-7c65d6cfc9-tpnrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ef6dde0b3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 
00:13:31.993 [INFO][5862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:31.993 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" iface="eth0" netns="" Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:31.993 [INFO][5862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:31.993 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:32.037 [INFO][5869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:32.037 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:32.037 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:32.050 [WARNING][5869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:32.050 [INFO][5869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" HandleID="k8s-pod-network.5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Workload="ci--4081--3--4--n--bdc5bebc5f-k8s-coredns--7c65d6cfc9--tpnrh-eth0" Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:32.054 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:32.062277 containerd[1601]: 2025-07-12 00:13:32.058 [INFO][5862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97" Jul 12 00:13:32.062277 containerd[1601]: time="2025-07-12T00:13:32.062264943Z" level=info msg="TearDown network for sandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\" successfully" Jul 12 00:13:32.077945 containerd[1601]: time="2025-07-12T00:13:32.077571840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:13:32.077945 containerd[1601]: time="2025-07-12T00:13:32.077719762Z" level=info msg="RemovePodSandbox \"5ca3e8dbc88b2020006660b0ae14cbf71cbf389d138b8d0bc83929bbe1538f97\" returns successfully" Jul 12 00:13:51.500029 kubelet[2706]: I0712 00:13:51.499603 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:14:05.532864 update_engine[1580]: I20250712 00:14:05.531591 1580 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 12 00:14:05.532864 update_engine[1580]: I20250712 00:14:05.531670 1580 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 12 00:14:05.532864 update_engine[1580]: I20250712 00:14:05.532055 1580 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 12 00:14:05.534352 update_engine[1580]: I20250712 00:14:05.534325 1580 omaha_request_params.cc:62] Current group set to lts Jul 12 00:14:05.534562 update_engine[1580]: I20250712 00:14:05.534523 1580 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 12 00:14:05.534648 update_engine[1580]: I20250712 00:14:05.534631 1580 update_attempter.cc:643] Scheduling an action processor start. 
Jul 12 00:14:05.534720 update_engine[1580]: I20250712 00:14:05.534701 1580 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 12 00:14:05.538673 update_engine[1580]: I20250712 00:14:05.538635 1580 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 12 00:14:05.539087 update_engine[1580]: I20250712 00:14:05.538763 1580 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 12 00:14:05.539087 update_engine[1580]: I20250712 00:14:05.538782 1580 omaha_request_action.cc:272] Request: Jul 12 00:14:05.539087 update_engine[1580]: Jul 12 00:14:05.539087 update_engine[1580]: Jul 12 00:14:05.539087 update_engine[1580]: Jul 12 00:14:05.539087 update_engine[1580]: Jul 12 00:14:05.539087 update_engine[1580]: Jul 12 00:14:05.539087 update_engine[1580]: Jul 12 00:14:05.539087 update_engine[1580]: Jul 12 00:14:05.539087 update_engine[1580]: Jul 12 00:14:05.539087 update_engine[1580]: I20250712 00:14:05.538792 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:14:05.544779 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 12 00:14:05.545173 update_engine[1580]: I20250712 00:14:05.544974 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:14:05.545471 update_engine[1580]: I20250712 00:14:05.545360 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:14:05.547373 update_engine[1580]: E20250712 00:14:05.546950 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:14:05.547373 update_engine[1580]: I20250712 00:14:05.547035 1580 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 12 00:14:11.295523 systemd[1]: run-containerd-runc-k8s.io-07737bc3230a6c6586e1d78b6adfea04e13a232d1985cfaf8a6a656014694726-runc.9nHjbf.mount: Deactivated successfully. 
Jul 12 00:14:15.455996 update_engine[1580]: I20250712 00:14:15.455905 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:14:15.456384 update_engine[1580]: I20250712 00:14:15.456171 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:14:15.456411 update_engine[1580]: I20250712 00:14:15.456393 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:14:15.457217 update_engine[1580]: E20250712 00:14:15.457160 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:14:15.457291 update_engine[1580]: I20250712 00:14:15.457226 1580 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 12 00:14:23.284163 systemd[1]: run-containerd-runc-k8s.io-50b980c207beaaea7f8b34f490339fd9668941fb683b59f6764ba198f97e519a-runc.ViHdVH.mount: Deactivated successfully. Jul 12 00:14:25.460719 update_engine[1580]: I20250712 00:14:25.460536 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:14:25.461210 update_engine[1580]: I20250712 00:14:25.460979 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:14:25.461210 update_engine[1580]: I20250712 00:14:25.461191 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:14:25.462789 update_engine[1580]: E20250712 00:14:25.462735 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:14:25.463027 update_engine[1580]: I20250712 00:14:25.462987 1580 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 12 00:14:26.508600 systemd[1]: Started sshd@8-91.99.189.6:22-139.178.68.195:36506.service - OpenSSH per-connection server daemon (139.178.68.195:36506). 
Jul 12 00:14:27.517623 sshd[6020]: Accepted publickey for core from 139.178.68.195 port 36506 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:27.522333 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:27.535429 systemd-logind[1577]: New session 8 of user core. Jul 12 00:14:27.540992 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:14:28.340387 sshd[6020]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:28.346638 systemd[1]: sshd@8-91.99.189.6:22-139.178.68.195:36506.service: Deactivated successfully. Jul 12 00:14:28.354835 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:14:28.359533 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:14:28.363155 systemd-logind[1577]: Removed session 8. Jul 12 00:14:33.509774 systemd[1]: Started sshd@9-91.99.189.6:22-139.178.68.195:56310.service - OpenSSH per-connection server daemon (139.178.68.195:56310). Jul 12 00:14:34.522973 sshd[6082]: Accepted publickey for core from 139.178.68.195 port 56310 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:34.526240 sshd[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:34.532228 systemd-logind[1577]: New session 9 of user core. Jul 12 00:14:34.537139 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:14:35.288068 sshd[6082]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:35.294365 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:14:35.295368 systemd[1]: sshd@9-91.99.189.6:22-139.178.68.195:56310.service: Deactivated successfully. Jul 12 00:14:35.301033 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:14:35.303963 systemd-logind[1577]: Removed session 9. 
Jul 12 00:14:35.451121 update_engine[1580]: I20250712 00:14:35.450687 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:14:35.451121 update_engine[1580]: I20250712 00:14:35.450904 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:14:35.451121 update_engine[1580]: I20250712 00:14:35.451108 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:14:35.452123 update_engine[1580]: E20250712 00:14:35.451923 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:14:35.452123 update_engine[1580]: I20250712 00:14:35.451970 1580 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 12 00:14:35.452123 update_engine[1580]: I20250712 00:14:35.451979 1580 omaha_request_action.cc:617] Omaha request response: Jul 12 00:14:35.452123 update_engine[1580]: E20250712 00:14:35.452056 1580 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 12 00:14:35.452123 update_engine[1580]: I20250712 00:14:35.452076 1580 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 12 00:14:35.452123 update_engine[1580]: I20250712 00:14:35.452085 1580 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:14:35.452123 update_engine[1580]: I20250712 00:14:35.452101 1580 update_attempter.cc:306] Processing Done. Jul 12 00:14:35.452123 update_engine[1580]: E20250712 00:14:35.452122 1580 update_attempter.cc:619] Update failed. 
Jul 12 00:14:35.452123 update_engine[1580]: I20250712 00:14:35.452131 1580 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 12 00:14:35.452123 update_engine[1580]: I20250712 00:14:35.452138 1580 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 12 00:14:35.452732 update_engine[1580]: I20250712 00:14:35.452145 1580 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 12 00:14:35.452732 update_engine[1580]: I20250712 00:14:35.452224 1580 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 12 00:14:35.452732 update_engine[1580]: I20250712 00:14:35.452250 1580 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 12 00:14:35.452732 update_engine[1580]: I20250712 00:14:35.452258 1580 omaha_request_action.cc:272] Request: Jul 12 00:14:35.452732 update_engine[1580]: Jul 12 00:14:35.452732 update_engine[1580]: Jul 12 00:14:35.452732 update_engine[1580]: Jul 12 00:14:35.452732 update_engine[1580]: Jul 12 00:14:35.452732 update_engine[1580]: Jul 12 00:14:35.452732 update_engine[1580]: Jul 12 00:14:35.452732 update_engine[1580]: I20250712 00:14:35.452266 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:14:35.452732 update_engine[1580]: I20250712 00:14:35.452415 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:14:35.452732 update_engine[1580]: I20250712 00:14:35.452643 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 12 00:14:35.454132 update_engine[1580]: E20250712 00:14:35.453445 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:14:35.454132 update_engine[1580]: I20250712 00:14:35.453527 1580 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 12 00:14:35.454132 update_engine[1580]: I20250712 00:14:35.453538 1580 omaha_request_action.cc:617] Omaha request response: Jul 12 00:14:35.454132 update_engine[1580]: I20250712 00:14:35.453547 1580 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:14:35.454132 update_engine[1580]: I20250712 00:14:35.453553 1580 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:14:35.454132 update_engine[1580]: I20250712 00:14:35.453559 1580 update_attempter.cc:306] Processing Done. Jul 12 00:14:35.454132 update_engine[1580]: I20250712 00:14:35.453567 1580 update_attempter.cc:310] Error event sent. Jul 12 00:14:35.454132 update_engine[1580]: I20250712 00:14:35.453578 1580 update_check_scheduler.cc:74] Next update check in 42m44s Jul 12 00:14:35.454569 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 12 00:14:35.455080 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 12 00:14:40.452717 systemd[1]: Started sshd@10-91.99.189.6:22-139.178.68.195:46388.service - OpenSSH per-connection server daemon (139.178.68.195:46388). Jul 12 00:14:41.430716 sshd[6104]: Accepted publickey for core from 139.178.68.195 port 46388 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:41.432968 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:41.438933 systemd-logind[1577]: New session 10 of user core. 
Jul 12 00:14:41.448341 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:14:42.198663 sshd[6104]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:42.204101 systemd[1]: sshd@10-91.99.189.6:22-139.178.68.195:46388.service: Deactivated successfully. Jul 12 00:14:42.209429 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:14:42.211496 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:14:42.212530 systemd-logind[1577]: Removed session 10. Jul 12 00:14:42.369878 systemd[1]: Started sshd@11-91.99.189.6:22-139.178.68.195:46394.service - OpenSSH per-connection server daemon (139.178.68.195:46394). Jul 12 00:14:43.360113 sshd[6119]: Accepted publickey for core from 139.178.68.195 port 46394 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:43.362395 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:43.368334 systemd-logind[1577]: New session 11 of user core. Jul 12 00:14:43.376074 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:14:44.181824 sshd[6119]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:44.188162 systemd[1]: sshd@11-91.99.189.6:22-139.178.68.195:46394.service: Deactivated successfully. Jul 12 00:14:44.191714 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:14:44.192939 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:14:44.194247 systemd-logind[1577]: Removed session 11. Jul 12 00:14:44.350837 systemd[1]: Started sshd@12-91.99.189.6:22-139.178.68.195:46400.service - OpenSSH per-connection server daemon (139.178.68.195:46400). 
Jul 12 00:14:45.341882 sshd[6131]: Accepted publickey for core from 139.178.68.195 port 46400 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:45.344236 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:45.350823 systemd-logind[1577]: New session 12 of user core. Jul 12 00:14:45.356099 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:14:46.108869 sshd[6131]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:46.113941 systemd[1]: sshd@12-91.99.189.6:22-139.178.68.195:46400.service: Deactivated successfully. Jul 12 00:14:46.118106 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:14:46.120076 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:14:46.121284 systemd-logind[1577]: Removed session 12. Jul 12 00:14:51.278837 systemd[1]: Started sshd@13-91.99.189.6:22-139.178.68.195:59686.service - OpenSSH per-connection server daemon (139.178.68.195:59686). Jul 12 00:14:52.280810 sshd[6188]: Accepted publickey for core from 139.178.68.195 port 59686 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:52.283755 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:52.290957 systemd-logind[1577]: New session 13 of user core. Jul 12 00:14:52.295976 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:14:53.047510 sshd[6188]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:53.053299 systemd[1]: sshd@13-91.99.189.6:22-139.178.68.195:59686.service: Deactivated successfully. Jul 12 00:14:53.058169 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:14:53.060162 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:14:53.061353 systemd-logind[1577]: Removed session 13. 
Jul 12 00:14:53.209888 systemd[1]: Started sshd@14-91.99.189.6:22-139.178.68.195:59690.service - OpenSSH per-connection server daemon (139.178.68.195:59690). Jul 12 00:14:54.181565 sshd[6202]: Accepted publickey for core from 139.178.68.195 port 59690 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:54.184042 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:54.191103 systemd-logind[1577]: New session 14 of user core. Jul 12 00:14:54.194761 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:14:55.085785 sshd[6202]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:55.091071 systemd[1]: sshd@14-91.99.189.6:22-139.178.68.195:59690.service: Deactivated successfully. Jul 12 00:14:55.097665 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:14:55.099822 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:14:55.105566 systemd-logind[1577]: Removed session 14. Jul 12 00:14:55.258420 systemd[1]: Started sshd@15-91.99.189.6:22-139.178.68.195:59694.service - OpenSSH per-connection server daemon (139.178.68.195:59694). Jul 12 00:14:56.252073 sshd[6214]: Accepted publickey for core from 139.178.68.195 port 59694 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:56.255237 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:56.261781 systemd-logind[1577]: New session 15 of user core. Jul 12 00:14:56.266854 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:14:59.401671 sshd[6214]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:59.416262 systemd[1]: sshd@15-91.99.189.6:22-139.178.68.195:59694.service: Deactivated successfully. Jul 12 00:14:59.418637 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:14:59.421441 systemd[1]: session-15.scope: Deactivated successfully. 
Jul 12 00:14:59.427139 systemd-logind[1577]: Removed session 15. Jul 12 00:14:59.567198 systemd[1]: Started sshd@16-91.99.189.6:22-139.178.68.195:38240.service - OpenSSH per-connection server daemon (139.178.68.195:38240). Jul 12 00:15:00.272596 systemd[1]: run-containerd-runc-k8s.io-07737bc3230a6c6586e1d78b6adfea04e13a232d1985cfaf8a6a656014694726-runc.6xpXbs.mount: Deactivated successfully. Jul 12 00:15:00.546814 sshd[6236]: Accepted publickey for core from 139.178.68.195 port 38240 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:00.549029 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:00.553676 systemd-logind[1577]: New session 16 of user core. Jul 12 00:15:00.560645 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:15:01.498027 sshd[6236]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:01.503645 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:15:01.504415 systemd[1]: sshd@16-91.99.189.6:22-139.178.68.195:38240.service: Deactivated successfully. Jul 12 00:15:01.512874 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:15:01.520358 systemd-logind[1577]: Removed session 16. Jul 12 00:15:01.665128 systemd[1]: Started sshd@17-91.99.189.6:22-139.178.68.195:38242.service - OpenSSH per-connection server daemon (139.178.68.195:38242). Jul 12 00:15:02.641131 sshd[6286]: Accepted publickey for core from 139.178.68.195 port 38242 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:02.642887 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:02.650366 systemd-logind[1577]: New session 17 of user core. Jul 12 00:15:02.656070 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 12 00:15:03.395346 sshd[6286]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:03.402146 systemd[1]: sshd@17-91.99.189.6:22-139.178.68.195:38242.service: Deactivated successfully. Jul 12 00:15:03.409061 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:15:03.410312 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:15:03.413798 systemd-logind[1577]: Removed session 17. Jul 12 00:15:08.589887 systemd[1]: Started sshd@18-91.99.189.6:22-139.178.68.195:58084.service - OpenSSH per-connection server daemon (139.178.68.195:58084). Jul 12 00:15:09.652850 sshd[6304]: Accepted publickey for core from 139.178.68.195 port 58084 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:09.655618 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:09.661096 systemd-logind[1577]: New session 18 of user core. Jul 12 00:15:09.668088 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:15:10.457136 sshd[6304]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:10.460995 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:15:10.462050 systemd[1]: sshd@18-91.99.189.6:22-139.178.68.195:58084.service: Deactivated successfully. Jul 12 00:15:10.468899 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:15:10.470145 systemd-logind[1577]: Removed session 18. Jul 12 00:15:15.616924 systemd[1]: Started sshd@19-91.99.189.6:22-139.178.68.195:58092.service - OpenSSH per-connection server daemon (139.178.68.195:58092). Jul 12 00:15:16.618417 sshd[6360]: Accepted publickey for core from 139.178.68.195 port 58092 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:16.621878 sshd[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:16.633592 systemd-logind[1577]: New session 19 of user core. 
Jul 12 00:15:16.635803 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:15:17.438707 sshd[6360]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:17.444702 systemd[1]: sshd@19-91.99.189.6:22-139.178.68.195:58092.service: Deactivated successfully. Jul 12 00:15:17.449932 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:15:17.450294 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:15:17.452209 systemd-logind[1577]: Removed session 19. Jul 12 00:15:32.014104 containerd[1601]: time="2025-07-12T00:15:32.012638806Z" level=info msg="shim disconnected" id=e6c92ef2749e1c64b887140a736b690a4983936064dfb1ffaf0ad0885200dc63 namespace=k8s.io Jul 12 00:15:32.014886 containerd[1601]: time="2025-07-12T00:15:32.014074729Z" level=warning msg="cleaning up after shim disconnected" id=e6c92ef2749e1c64b887140a736b690a4983936064dfb1ffaf0ad0885200dc63 namespace=k8s.io Jul 12 00:15:32.014886 containerd[1601]: time="2025-07-12T00:15:32.014529290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:15:32.016547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6c92ef2749e1c64b887140a736b690a4983936064dfb1ffaf0ad0885200dc63-rootfs.mount: Deactivated successfully. 
Jul 12 00:15:32.432274 kubelet[2706]: E0712 00:15:32.429242 2706 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34202->10.0.0.2:2379: read: connection timed out" Jul 12 00:15:32.610193 kubelet[2706]: I0712 00:15:32.609576 2706 scope.go:117] "RemoveContainer" containerID="e6c92ef2749e1c64b887140a736b690a4983936064dfb1ffaf0ad0885200dc63" Jul 12 00:15:32.612298 containerd[1601]: time="2025-07-12T00:15:32.612099505Z" level=info msg="CreateContainer within sandbox \"8c0d9188412fa25e88ee01121dcafccf32a80c0ac42e742d10f5964163a36ac2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 12 00:15:32.630206 containerd[1601]: time="2025-07-12T00:15:32.630044145Z" level=info msg="CreateContainer within sandbox \"8c0d9188412fa25e88ee01121dcafccf32a80c0ac42e742d10f5964163a36ac2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6f764a20eeb88a39f1fb39087b438fa67eb935abf91fc1104d603417d899424a\"" Jul 12 00:15:32.631527 containerd[1601]: time="2025-07-12T00:15:32.630914987Z" level=info msg="StartContainer for \"6f764a20eeb88a39f1fb39087b438fa67eb935abf91fc1104d603417d899424a\"" Jul 12 00:15:32.711277 containerd[1601]: time="2025-07-12T00:15:32.709964124Z" level=info msg="StartContainer for \"6f764a20eeb88a39f1fb39087b438fa67eb935abf91fc1104d603417d899424a\" returns successfully" Jul 12 00:15:32.787447 containerd[1601]: time="2025-07-12T00:15:32.787375097Z" level=info msg="shim disconnected" id=3d5b3614bf9565e7248e82a5c2bbfda9b5497f833e393bada881d539c58c043f namespace=k8s.io Jul 12 00:15:32.787447 containerd[1601]: time="2025-07-12T00:15:32.787433177Z" level=warning msg="cleaning up after shim disconnected" id=3d5b3614bf9565e7248e82a5c2bbfda9b5497f833e393bada881d539c58c043f namespace=k8s.io Jul 12 00:15:32.787447 containerd[1601]: time="2025-07-12T00:15:32.787540577Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 
00:15:33.017415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d5b3614bf9565e7248e82a5c2bbfda9b5497f833e393bada881d539c58c043f-rootfs.mount: Deactivated successfully. Jul 12 00:15:33.619933 kubelet[2706]: I0712 00:15:33.619663 2706 scope.go:117] "RemoveContainer" containerID="3d5b3614bf9565e7248e82a5c2bbfda9b5497f833e393bada881d539c58c043f" Jul 12 00:15:33.622688 containerd[1601]: time="2025-07-12T00:15:33.622223637Z" level=info msg="CreateContainer within sandbox \"3770ca9b21a6da69131513a56fe09bd706c51ce13ab783d2c4f9203578f3957a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 12 00:15:33.637069 containerd[1601]: time="2025-07-12T00:15:33.634863465Z" level=info msg="CreateContainer within sandbox \"3770ca9b21a6da69131513a56fe09bd706c51ce13ab783d2c4f9203578f3957a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"649b5af970b841fd02748268325d2414f9e99be5020e69d09993bce2d5e2a2fb\"" Jul 12 00:15:33.637069 containerd[1601]: time="2025-07-12T00:15:33.636693309Z" level=info msg="StartContainer for \"649b5af970b841fd02748268325d2414f9e99be5020e69d09993bce2d5e2a2fb\"" Jul 12 00:15:33.703058 containerd[1601]: time="2025-07-12T00:15:33.702983577Z" level=info msg="StartContainer for \"649b5af970b841fd02748268325d2414f9e99be5020e69d09993bce2d5e2a2fb\" returns successfully" Jul 12 00:15:35.449562 kubelet[2706]: E0712 00:15:35.442555 2706 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34016->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-n-bdc5bebc5f.185158c355c03016 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-n-bdc5bebc5f,UID:1372e655462367628d99fda6577a479c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-n-bdc5bebc5f,},FirstTimestamp:2025-07-12 00:15:26.00435919 +0000 UTC m=+176.336899881,LastTimestamp:2025-07-12 00:15:26.00435919 +0000 UTC m=+176.336899881,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-n-bdc5bebc5f,}" Jul 12 00:15:38.240533 containerd[1601]: time="2025-07-12T00:15:38.240339947Z" level=info msg="shim disconnected" id=eab5504fe03d50ea5fdabb861d8b666a2daa63962a6192dd4a57864102e017aa namespace=k8s.io Jul 12 00:15:38.241597 containerd[1601]: time="2025-07-12T00:15:38.240529707Z" level=warning msg="cleaning up after shim disconnected" id=eab5504fe03d50ea5fdabb861d8b666a2daa63962a6192dd4a57864102e017aa namespace=k8s.io Jul 12 00:15:38.241597 containerd[1601]: time="2025-07-12T00:15:38.241368029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:15:38.241588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eab5504fe03d50ea5fdabb861d8b666a2daa63962a6192dd4a57864102e017aa-rootfs.mount: Deactivated successfully. 
Jul 12 00:15:38.639988 kubelet[2706]: I0712 00:15:38.639051 2706 scope.go:117] "RemoveContainer" containerID="eab5504fe03d50ea5fdabb861d8b666a2daa63962a6192dd4a57864102e017aa" Jul 12 00:15:38.641865 containerd[1601]: time="2025-07-12T00:15:38.641795425Z" level=info msg="CreateContainer within sandbox \"fa4cdac54fdaf2c2f499f2a064a57bca06e37fbcfdf94bdd0f6ad4f175aa8c16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 12 00:15:38.656953 containerd[1601]: time="2025-07-12T00:15:38.656904538Z" level=info msg="CreateContainer within sandbox \"fa4cdac54fdaf2c2f499f2a064a57bca06e37fbcfdf94bdd0f6ad4f175aa8c16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ff5f807c94eccf99426279257ee55f031180cf58680e4a3509b068758f65518b\"" Jul 12 00:15:38.658641 containerd[1601]: time="2025-07-12T00:15:38.658547302Z" level=info msg="StartContainer for \"ff5f807c94eccf99426279257ee55f031180cf58680e4a3509b068758f65518b\"" Jul 12 00:15:38.659266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513925951.mount: Deactivated successfully. Jul 12 00:15:38.727140 containerd[1601]: time="2025-07-12T00:15:38.727077692Z" level=info msg="StartContainer for \"ff5f807c94eccf99426279257ee55f031180cf58680e4a3509b068758f65518b\" returns successfully"