Aug 13 00:14:25.879378 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:14:25.879411 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:14:25.879423 kernel: KASLR enabled
Aug 13 00:14:25.879428 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Aug 13 00:14:25.879434 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Aug 13 00:14:25.879440 kernel: random: crng init done
Aug 13 00:14:25.879447 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:14:25.879453 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Aug 13 00:14:25.879459 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:14:25.879467 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879473 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879479 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879485 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879491 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879499 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879507 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879513 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879520 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:25.879526 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 00:14:25.879532 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Aug 13 00:14:25.879539 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:14:25.879545 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Aug 13 00:14:25.879552 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Aug 13 00:14:25.879558 kernel: Zone ranges:
Aug 13 00:14:25.879564 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Aug 13 00:14:25.879572 kernel: DMA32 empty
Aug 13 00:14:25.879578 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Aug 13 00:14:25.879584 kernel: Movable zone start for each node
Aug 13 00:14:25.879591 kernel: Early memory node ranges
Aug 13 00:14:25.879597 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Aug 13 00:14:25.879604 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Aug 13 00:14:25.879610 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Aug 13 00:14:25.879616 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Aug 13 00:14:25.879623 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Aug 13 00:14:25.879629 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Aug 13 00:14:25.879635 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Aug 13 00:14:25.879642 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Aug 13 00:14:25.879650 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Aug 13 00:14:25.879656 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:14:25.879707 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:14:25.879718 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:14:25.879725 kernel: psci: Trusted OS migration not required
Aug 13 00:14:25.879732 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:14:25.879740 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 13 00:14:25.879747 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:14:25.879754 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:14:25.879761 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:14:25.879768 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:14:25.879774 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:14:25.879781 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:14:25.879788 kernel: CPU features: detected: Spectre-v4
Aug 13 00:14:25.879794 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:14:25.879801 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:14:25.879810 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:14:25.879816 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:14:25.879823 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:14:25.879830 kernel: alternatives: applying boot alternatives
Aug 13 00:14:25.879838 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:14:25.879845 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:14:25.879852 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:14:25.879859 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:14:25.879865 kernel: Fallback order for Node 0: 0
Aug 13 00:14:25.879872 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Aug 13 00:14:25.879879 kernel: Policy zone: Normal
Aug 13 00:14:25.879887 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:14:25.879894 kernel: software IO TLB: area num 2.
Aug 13 00:14:25.879900 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Aug 13 00:14:25.879908 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
Aug 13 00:14:25.879915 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:14:25.879921 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:14:25.879929 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:14:25.879936 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:14:25.879944 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:14:25.879950 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:14:25.879957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:14:25.879966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:14:25.879972 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:14:25.879979 kernel: GICv3: 256 SPIs implemented
Aug 13 00:14:25.879986 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:14:25.879992 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:14:25.879999 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 13 00:14:25.880006 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 13 00:14:25.880012 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 13 00:14:25.880019 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:14:25.880026 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:14:25.880033 kernel: GICv3: using LPI property table @0x00000001000e0000
Aug 13 00:14:25.880040 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Aug 13 00:14:25.880048 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:14:25.880055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:14:25.880062 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:14:25.880069 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:14:25.880075 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:14:25.880082 kernel: Console: colour dummy device 80x25
Aug 13 00:14:25.880089 kernel: ACPI: Core revision 20230628
Aug 13 00:14:25.880097 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:14:25.880104 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:14:25.880111 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:14:25.880119 kernel: landlock: Up and running.
Aug 13 00:14:25.880126 kernel: SELinux: Initializing.
Aug 13 00:14:25.880134 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:14:25.880141 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:14:25.880148 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:14:25.880155 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:14:25.880163 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:14:25.880170 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:14:25.880177 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 13 00:14:25.880185 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 13 00:14:25.880192 kernel: Remapping and enabling EFI services.
Aug 13 00:14:25.880199 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:14:25.880206 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:14:25.880214 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 13 00:14:25.880221 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Aug 13 00:14:25.880228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:14:25.880235 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:14:25.880242 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:14:25.880249 kernel: SMP: Total of 2 processors activated.
Aug 13 00:14:25.880258 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:14:25.880265 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:14:25.881306 kernel: CPU features: detected: Common not Private translations
Aug 13 00:14:25.881326 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:14:25.881334 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 13 00:14:25.881342 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:14:25.881349 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:14:25.881361 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:14:25.881368 kernel: CPU features: detected: RAS Extension Support
Aug 13 00:14:25.881378 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 13 00:14:25.881386 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:14:25.881393 kernel: alternatives: applying system-wide alternatives
Aug 13 00:14:25.881401 kernel: devtmpfs: initialized
Aug 13 00:14:25.881409 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:14:25.881416 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:14:25.881423 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:14:25.881432 kernel: SMBIOS 3.0.0 present.
Aug 13 00:14:25.881440 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Aug 13 00:14:25.881447 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:14:25.881455 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:14:25.881462 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:14:25.881470 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:14:25.881477 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:14:25.881484 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Aug 13 00:14:25.881493 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:14:25.881502 kernel: cpuidle: using governor menu
Aug 13 00:14:25.881509 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:14:25.881516 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:14:25.881524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:14:25.881532 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:14:25.881539 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 13 00:14:25.881546 kernel: Modules: 0 pages in range for non-PLT usage
Aug 13 00:14:25.881554 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:14:25.881561 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:14:25.881571 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:14:25.881578 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:14:25.881586 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:14:25.881593 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:14:25.881601 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:14:25.881608 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:14:25.881616 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:14:25.881623 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:14:25.881630 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:14:25.881637 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:14:25.881647 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:14:25.881654 kernel: ACPI: Interpreter enabled
Aug 13 00:14:25.881675 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:14:25.881683 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:14:25.881691 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:14:25.881698 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:14:25.881706 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:14:25.881861 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:14:25.881940 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:14:25.882022 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:14:25.882101 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 13 00:14:25.882172 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 13 00:14:25.882181 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 13 00:14:25.882189 kernel: PCI host bridge to bus 0000:00
Aug 13 00:14:25.882260 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 13 00:14:25.882346 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:14:25.882406 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 13 00:14:25.882465 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:14:25.882547 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 13 00:14:25.882629 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Aug 13 00:14:25.882744 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Aug 13 00:14:25.882825 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Aug 13 00:14:25.885482 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.885574 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Aug 13 00:14:25.885652 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.885775 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Aug 13 00:14:25.885853 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.885921 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Aug 13 00:14:25.886004 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.886072 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Aug 13 00:14:25.886194 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.887582 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Aug 13 00:14:25.887745 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.887827 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Aug 13 00:14:25.887921 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.887998 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Aug 13 00:14:25.888077 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.888151 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Aug 13 00:14:25.889060 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Aug 13 00:14:25.889168 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Aug 13 00:14:25.889255 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Aug 13 00:14:25.889345 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Aug 13 00:14:25.889426 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Aug 13 00:14:25.889495 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Aug 13 00:14:25.889567 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:14:25.889635 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Aug 13 00:14:25.889749 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Aug 13 00:14:25.889824 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Aug 13 00:14:25.889902 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Aug 13 00:14:25.889972 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Aug 13 00:14:25.890058 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Aug 13 00:14:25.890171 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Aug 13 00:14:25.890244 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Aug 13 00:14:25.891439 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Aug 13 00:14:25.891533 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Aug 13 00:14:25.891615 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Aug 13 00:14:25.891712 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Aug 13 00:14:25.891786 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Aug 13 00:14:25.891868 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Aug 13 00:14:25.891948 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Aug 13 00:14:25.892032 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Aug 13 00:14:25.892103 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Aug 13 00:14:25.892177 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Aug 13 00:14:25.892248 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Aug 13 00:14:25.892378 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Aug 13 00:14:25.892452 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Aug 13 00:14:25.892524 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Aug 13 00:14:25.892589 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Aug 13 00:14:25.892669 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Aug 13 00:14:25.892743 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Aug 13 00:14:25.892811 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Aug 13 00:14:25.892881 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Aug 13 00:14:25.892946 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Aug 13 00:14:25.893022 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Aug 13 00:14:25.893091 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Aug 13 00:14:25.893156 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Aug 13 00:14:25.893223 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Aug 13 00:14:25.894387 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Aug 13 00:14:25.894485 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Aug 13 00:14:25.894557 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Aug 13 00:14:25.894639 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Aug 13 00:14:25.894729 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Aug 13 00:14:25.894802 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Aug 13 00:14:25.894875 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Aug 13 00:14:25.894942 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Aug 13 00:14:25.895015 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Aug 13 00:14:25.895087 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Aug 13 00:14:25.895154 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Aug 13 00:14:25.895228 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Aug 13 00:14:25.895325 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Aug 13 00:14:25.895395 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Aug 13 00:14:25.895464 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Aug 13 00:14:25.895530 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Aug 13 00:14:25.895601 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Aug 13 00:14:25.895681 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Aug 13 00:14:25.895760 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Aug 13 00:14:25.895829 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Aug 13 00:14:25.895901 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Aug 13 00:14:25.895974 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Aug 13 00:14:25.896044 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Aug 13 00:14:25.896112 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Aug 13 00:14:25.896179 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Aug 13 00:14:25.896252 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Aug 13 00:14:25.897406 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Aug 13 00:14:25.897492 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Aug 13 00:14:25.897564 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Aug 13 00:14:25.897632 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Aug 13 00:14:25.897729 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Aug 13 00:14:25.897807 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Aug 13 00:14:25.897876 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Aug 13 00:14:25.897942 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Aug 13 00:14:25.898011 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Aug 13 00:14:25.898080 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Aug 13 00:14:25.898150 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Aug 13 00:14:25.898218 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Aug 13 00:14:25.898353 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Aug 13 00:14:25.898449 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Aug 13 00:14:25.898535 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Aug 13 00:14:25.898607 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Aug 13 00:14:25.898687 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Aug 13 00:14:25.898757 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Aug 13 00:14:25.898826 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Aug 13 00:14:25.898892 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Aug 13 00:14:25.898963 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Aug 13 00:14:25.899036 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Aug 13 00:14:25.899103 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Aug 13 00:14:25.899168 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Aug 13 00:14:25.899238 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Aug 13 00:14:25.900283 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Aug 13 00:14:25.900408 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:14:25.900479 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Aug 13 00:14:25.900548 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 13 00:14:25.900622 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Aug 13 00:14:25.900707 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Aug 13 00:14:25.900777 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Aug 13 00:14:25.900851 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Aug 13 00:14:25.900921 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Aug 13 00:14:25.901002 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Aug 13 00:14:25.901069 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Aug 13 00:14:25.901137 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Aug 13 00:14:25.901213 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Aug 13 00:14:25.901329 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Aug 13 00:14:25.901403 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Aug 13 00:14:25.901467 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Aug 13 00:14:25.901536 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Aug 13 00:14:25.901600 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Aug 13 00:14:25.901711 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Aug 13 00:14:25.901789 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Aug 13 00:14:25.901857 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Aug 13 00:14:25.901921 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Aug 13 00:14:25.901998 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Aug 13 00:14:25.902075 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Aug 13 00:14:25.902163 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Aug 13 00:14:25.902241 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Aug 13 00:14:25.903496 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Aug 13 00:14:25.903582 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Aug 13 00:14:25.903693 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Aug 13 00:14:25.903781 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Aug 13 00:14:25.903851 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Aug 13 00:14:25.903917 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Aug 13 00:14:25.904002 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Aug 13 00:14:25.904069 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Aug 13 00:14:25.904142 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Aug 13 00:14:25.904210 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Aug 13 00:14:25.904292 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Aug 13 00:14:25.904365 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Aug 13 00:14:25.904434 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Aug 13 00:14:25.904498 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Aug 13 00:14:25.904567 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Aug 13 00:14:25.904636 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Aug 13 00:14:25.904720 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Aug 13 00:14:25.904787 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Aug 13 00:14:25.904852 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Aug 13 00:14:25.904920 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Aug 13 00:14:25.904988 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Aug 13 00:14:25.905051 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Aug 13 00:14:25.905121 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Aug 13 00:14:25.905190 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 13 00:14:25.905248 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:14:25.905428 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 13 00:14:25.905512 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Aug 13 00:14:25.905592 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Aug 13 00:14:25.905655 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Aug 13 00:14:25.905753 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Aug 13 00:14:25.905831 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Aug 13 00:14:25.905898 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Aug 13 00:14:25.906000 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Aug 13 00:14:25.906069 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Aug 13 00:14:25.906131 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Aug 13 00:14:25.906201 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Aug 13 00:14:25.906269 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Aug 13 00:14:25.906397 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Aug 13 00:14:25.906479 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Aug 13 00:14:25.906550 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Aug 13 00:14:25.906618 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Aug 13 00:14:25.906735 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Aug 13 00:14:25.906810 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Aug 13 00:14:25.906874 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Aug 13 00:14:25.906944 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Aug 13 00:14:25.907022 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Aug 13 00:14:25.907091 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Aug 13 00:14:25.907163 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Aug 13 00:14:25.907225 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Aug 13 00:14:25.907315 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Aug 13 00:14:25.907390 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Aug 13 00:14:25.907453 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Aug 13 00:14:25.907515 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Aug 13 00:14:25.907529 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:14:25.907537 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:14:25.907545 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:14:25.907553 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:14:25.907561 kernel: iommu: Default domain type: Translated
Aug 13 00:14:25.907569 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:14:25.907577 kernel: efivars: Registered efivars operations
Aug 13 00:14:25.907585 kernel: vgaarb: loaded
Aug 13 00:14:25.907592 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:14:25.907602 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:14:25.907612 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:14:25.907620 kernel: pnp: PnP ACPI init
Aug 13 00:14:25.907723 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 13 00:14:25.907737 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:14:25.907745 kernel: NET: Registered PF_INET protocol family
Aug 13 00:14:25.907753 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:14:25.907761 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:14:25.907771 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:14:25.907780 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:14:25.907788 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:14:25.907796 kernel: TCP: Hash tables configured (established 32768
bind 32768) Aug 13 00:14:25.907803 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:14:25.907811 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:14:25.907819 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:14:25.907897 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Aug 13 00:14:25.907909 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:14:25.907919 kernel: kvm [1]: HYP mode not available Aug 13 00:14:25.907927 kernel: Initialise system trusted keyrings Aug 13 00:14:25.907934 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:14:25.907942 kernel: Key type asymmetric registered Aug 13 00:14:25.907950 kernel: Asymmetric key parser 'x509' registered Aug 13 00:14:25.907965 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:14:25.907973 kernel: io scheduler mq-deadline registered Aug 13 00:14:25.907981 kernel: io scheduler kyber registered Aug 13 00:14:25.907988 kernel: io scheduler bfq registered Aug 13 00:14:25.907999 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Aug 13 00:14:25.908099 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Aug 13 00:14:25.908174 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Aug 13 00:14:25.908241 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.908383 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Aug 13 00:14:25.908453 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Aug 13 00:14:25.908531 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.908609 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Aug 13 00:14:25.908691 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Aug 13 00:14:25.908761 kernel: pcieport 
0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.908829 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Aug 13 00:14:25.908906 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Aug 13 00:14:25.908988 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.909064 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Aug 13 00:14:25.909131 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Aug 13 00:14:25.909200 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.909268 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Aug 13 00:14:25.910471 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Aug 13 00:14:25.910542 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.910618 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Aug 13 00:14:25.910707 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Aug 13 00:14:25.910778 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.910849 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Aug 13 00:14:25.910914 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Aug 13 00:14:25.910992 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.911009 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Aug 13 00:14:25.911079 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Aug 13 00:14:25.911144 kernel: pcieport 0000:00:03.0: AER: enabled 
with IRQ 58 Aug 13 00:14:25.911211 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 00:14:25.911222 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 13 00:14:25.911230 kernel: ACPI: button: Power Button [PWRB] Aug 13 00:14:25.911238 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 13 00:14:25.911329 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Aug 13 00:14:25.911405 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Aug 13 00:14:25.911418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:14:25.911426 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 13 00:14:25.911494 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Aug 13 00:14:25.911505 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Aug 13 00:14:25.911512 kernel: thunder_xcv, ver 1.0 Aug 13 00:14:25.911520 kernel: thunder_bgx, ver 1.0 Aug 13 00:14:25.911528 kernel: nicpf, ver 1.0 Aug 13 00:14:25.911539 kernel: nicvf, ver 1.0 Aug 13 00:14:25.911619 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 00:14:25.911697 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:14:25 UTC (1755044065) Aug 13 00:14:25.911709 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:14:25.911717 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Aug 13 00:14:25.911725 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 13 00:14:25.911734 kernel: watchdog: Hard watchdog permanently disabled Aug 13 00:14:25.911745 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:14:25.911753 kernel: Segment Routing with IPv6 Aug 13 00:14:25.911761 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:14:25.911768 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:14:25.911776 kernel: Key type dns_resolver 
registered Aug 13 00:14:25.911784 kernel: registered taskstats version 1 Aug 13 00:14:25.911792 kernel: Loading compiled-in X.509 certificates Aug 13 00:14:25.911800 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6' Aug 13 00:14:25.911808 kernel: Key type .fscrypt registered Aug 13 00:14:25.911815 kernel: Key type fscrypt-provisioning registered Aug 13 00:14:25.911825 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:14:25.911833 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:14:25.911840 kernel: ima: No architecture policies found Aug 13 00:14:25.911848 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 00:14:25.911856 kernel: clk: Disabling unused clocks Aug 13 00:14:25.911863 kernel: Freeing unused kernel memory: 39424K Aug 13 00:14:25.911871 kernel: Run /init as init process Aug 13 00:14:25.911879 kernel: with arguments: Aug 13 00:14:25.911888 kernel: /init Aug 13 00:14:25.911896 kernel: with environment: Aug 13 00:14:25.911903 kernel: HOME=/ Aug 13 00:14:25.911911 kernel: TERM=linux Aug 13 00:14:25.911919 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:14:25.911929 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:14:25.911939 systemd[1]: Detected virtualization kvm. Aug 13 00:14:25.911947 systemd[1]: Detected architecture arm64. Aug 13 00:14:25.911962 systemd[1]: Running in initrd. Aug 13 00:14:25.911970 systemd[1]: No hostname configured, using default hostname. Aug 13 00:14:25.911978 systemd[1]: Hostname set to . Aug 13 00:14:25.911987 systemd[1]: Initializing machine ID from VM UUID. 
Aug 13 00:14:25.911995 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:14:25.912003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:14:25.912012 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:14:25.912020 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:14:25.912031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:14:25.912039 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:14:25.912048 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:14:25.912058 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:14:25.912066 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:14:25.912075 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:14:25.912084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:14:25.912094 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:14:25.912102 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:14:25.912110 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:14:25.912119 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:14:25.912127 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:14:25.912135 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:14:25.912144 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Aug 13 00:14:25.912152 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:14:25.912160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:14:25.912171 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:14:25.912179 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:14:25.912187 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:14:25.912197 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:14:25.912206 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:14:25.912214 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:14:25.912222 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:14:25.912231 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:14:25.912241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:14:25.912249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:14:25.912257 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:14:25.912266 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:14:25.912284 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:14:25.912314 systemd-journald[236]: Collecting audit messages is disabled. Aug 13 00:14:25.912336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:14:25.912345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:14:25.912355 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Aug 13 00:14:25.912363 kernel: Bridge firewalling registered Aug 13 00:14:25.912373 systemd-journald[236]: Journal started Aug 13 00:14:25.912392 systemd-journald[236]: Runtime Journal (/run/log/journal/9109749ce56f42269b1e62820e57cec3) is 8.0M, max 76.6M, 68.6M free. Aug 13 00:14:25.885063 systemd-modules-load[237]: Inserted module 'overlay' Aug 13 00:14:25.926646 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:14:25.926717 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:14:25.908897 systemd-modules-load[237]: Inserted module 'br_netfilter' Aug 13 00:14:25.926354 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:14:25.928439 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:14:25.939468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:14:25.942848 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:14:25.948468 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:14:25.949395 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:14:25.957487 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:14:25.960971 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:14:25.967087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:14:25.968136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:14:25.974586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Aug 13 00:14:25.995310 dracut-cmdline[268]: dracut-dracut-053 Aug 13 00:14:25.998491 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:14:26.011059 systemd-resolved[274]: Positive Trust Anchors: Aug 13 00:14:26.011077 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:14:26.011113 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:14:26.017258 systemd-resolved[274]: Defaulting to hostname 'linux'. Aug 13 00:14:26.018894 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:14:26.020141 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:14:26.101322 kernel: SCSI subsystem initialized Aug 13 00:14:26.106309 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:14:26.114398 kernel: iscsi: registered transport (tcp) Aug 13 00:14:26.131338 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:14:26.131425 kernel: QLogic iSCSI HBA Driver Aug 13 00:14:26.185090 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Aug 13 00:14:26.193572 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:14:26.224312 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:14:26.224404 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:14:26.224421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 00:14:26.280326 kernel: raid6: neonx8 gen() 15637 MB/s Aug 13 00:14:26.294320 kernel: raid6: neonx4 gen() 15571 MB/s Aug 13 00:14:26.311335 kernel: raid6: neonx2 gen() 13160 MB/s Aug 13 00:14:26.328335 kernel: raid6: neonx1 gen() 10404 MB/s Aug 13 00:14:26.345357 kernel: raid6: int64x8 gen() 6916 MB/s Aug 13 00:14:26.362331 kernel: raid6: int64x4 gen() 7302 MB/s Aug 13 00:14:26.379343 kernel: raid6: int64x2 gen() 6099 MB/s Aug 13 00:14:26.396352 kernel: raid6: int64x1 gen() 5027 MB/s Aug 13 00:14:26.396434 kernel: raid6: using algorithm neonx8 gen() 15637 MB/s Aug 13 00:14:26.413349 kernel: raid6: .... xor() 11875 MB/s, rmw enabled Aug 13 00:14:26.413444 kernel: raid6: using neon recovery algorithm Aug 13 00:14:26.419337 kernel: xor: measuring software checksum speed Aug 13 00:14:26.419413 kernel: 8regs : 19745 MB/sec Aug 13 00:14:26.419432 kernel: 32regs : 17639 MB/sec Aug 13 00:14:26.420358 kernel: arm64_neon : 25592 MB/sec Aug 13 00:14:26.420385 kernel: xor: using function: arm64_neon (25592 MB/sec) Aug 13 00:14:26.472370 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:14:26.489316 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:14:26.498712 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:14:26.513509 systemd-udevd[457]: Using default interface naming scheme 'v255'. Aug 13 00:14:26.517205 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Aug 13 00:14:26.526714 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:14:26.543403 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Aug 13 00:14:26.585788 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:14:26.592584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:14:26.646823 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:14:26.652458 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:14:26.680663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:14:26.682758 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:14:26.684245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:14:26.686026 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:14:26.692461 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:14:26.718132 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:14:26.759312 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:14:26.766834 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 00:14:26.766930 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 00:14:26.774633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:14:26.774979 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:14:26.781661 kernel: ACPI: bus type USB registered Aug 13 00:14:26.781685 kernel: usbcore: registered new interface driver usbfs Aug 13 00:14:26.782526 kernel: usbcore: registered new interface driver hub Aug 13 00:14:26.782414 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Aug 13 00:14:26.783356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:14:26.788259 kernel: usbcore: registered new device driver usb Aug 13 00:14:26.783675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:14:26.787753 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:14:26.796850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:14:26.820100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:14:26.830475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:14:26.839394 kernel: sr 0:0:0:0: Power-on or device reset occurred Aug 13 00:14:26.839609 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Aug 13 00:14:26.839731 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:14:26.843339 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Aug 13 00:14:26.853972 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Aug 13 00:14:26.854168 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Aug 13 00:14:26.854256 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Aug 13 00:14:26.854395 kernel: sd 0:0:0:1: Power-on or device reset occurred Aug 13 00:14:26.854757 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 00:14:26.859373 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Aug 13 00:14:26.859555 kernel: sd 0:0:0:1: [sda] Write Protect is off Aug 13 00:14:26.859641 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Aug 13 00:14:26.859758 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Aug 13 00:14:26.859844 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:14:26.859926 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Aug 13 00:14:26.860005 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Aug 13 00:14:26.861377 kernel: hub 1-0:1.0: USB hub found Aug 13 00:14:26.862833 kernel: hub 1-0:1.0: 4 ports detected Aug 13 00:14:26.863034 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Aug 13 00:14:26.864998 kernel: hub 2-0:1.0: USB hub found Aug 13 00:14:26.865180 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:14:26.866548 kernel: GPT:17805311 != 80003071 Aug 13 00:14:26.866587 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:14:26.866608 kernel: GPT:17805311 != 80003071 Aug 13 00:14:26.866620 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:14:26.866631 kernel: hub 2-0:1.0: 4 ports detected Aug 13 00:14:26.867612 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:14:26.868387 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Aug 13 00:14:26.910292 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (504) Aug 13 00:14:26.918310 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (507) Aug 13 00:14:26.927852 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 00:14:26.934949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Aug 13 00:14:26.943098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:14:26.948996 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 00:14:26.949784 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 00:14:26.961589 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:14:26.968899 disk-uuid[576]: Primary Header is updated. Aug 13 00:14:26.968899 disk-uuid[576]: Secondary Entries is updated. Aug 13 00:14:26.968899 disk-uuid[576]: Secondary Header is updated. Aug 13 00:14:26.977484 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:14:26.982305 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:14:26.989307 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:14:27.103780 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Aug 13 00:14:27.239324 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Aug 13 00:14:27.239405 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Aug 13 00:14:27.239705 kernel: usbcore: registered new interface driver usbhid Aug 13 00:14:27.240310 kernel: usbhid: USB HID core driver Aug 13 00:14:27.346373 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Aug 13 00:14:27.476345 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Aug 13 00:14:27.529321 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Aug 13 00:14:27.992526 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:14:27.994169 disk-uuid[577]: The operation has completed successfully. 
Aug 13 00:14:28.052581 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:14:28.052719 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:14:28.061695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:14:28.066995 sh[594]: Success Aug 13 00:14:28.081309 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:14:28.156927 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:14:28.159974 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:14:28.161587 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:14:28.187470 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982 Aug 13 00:14:28.187558 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:14:28.187586 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 00:14:28.188789 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 00:14:28.188853 kernel: BTRFS info (device dm-0): using free space tree Aug 13 00:14:28.197316 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 00:14:28.199094 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:14:28.200510 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:14:28.206497 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:14:28.210778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 00:14:28.221070 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:14:28.221129 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:14:28.221142 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:14:28.225317 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:14:28.225374 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:14:28.237405 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:14:28.239325 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:14:28.247517 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:14:28.257513 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:14:28.353599 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:14:28.363834 ignition[680]: Ignition 2.19.0 Aug 13 00:14:28.363849 ignition[680]: Stage: fetch-offline Aug 13 00:14:28.365536 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:14:28.363891 ignition[680]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:14:28.367169 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 13 00:14:28.363899 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:14:28.364473 ignition[680]: parsed url from cmdline: "" Aug 13 00:14:28.364479 ignition[680]: no config URL provided Aug 13 00:14:28.364486 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:14:28.364497 ignition[680]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:14:28.364503 ignition[680]: failed to fetch config: resource requires networking Aug 13 00:14:28.364862 ignition[680]: Ignition finished successfully Aug 13 00:14:28.390574 systemd-networkd[780]: lo: Link UP Aug 13 00:14:28.390587 systemd-networkd[780]: lo: Gained carrier Aug 13 00:14:28.392519 systemd-networkd[780]: Enumeration completed Aug 13 00:14:28.393009 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:14:28.393826 systemd[1]: Reached target network.target - Network. Aug 13 00:14:28.395121 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:14:28.395124 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:14:28.395934 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:14:28.395937 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:14:28.396491 systemd-networkd[780]: eth0: Link UP Aug 13 00:14:28.396494 systemd-networkd[780]: eth0: Gained carrier Aug 13 00:14:28.396502 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 13 00:14:28.404139 systemd-networkd[780]: eth1: Link UP
Aug 13 00:14:28.404142 systemd-networkd[780]: eth1: Gained carrier
Aug 13 00:14:28.404151 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:14:28.405746 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:14:28.423766 ignition[783]: Ignition 2.19.0
Aug 13 00:14:28.423777 ignition[783]: Stage: fetch
Aug 13 00:14:28.423980 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:14:28.423989 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 13 00:14:28.424083 ignition[783]: parsed url from cmdline: ""
Aug 13 00:14:28.424087 ignition[783]: no config URL provided
Aug 13 00:14:28.424091 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:14:28.424098 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:14:28.424120 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Aug 13 00:14:28.424599 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 00:14:28.435399 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Aug 13 00:14:28.464383 systemd-networkd[780]: eth0: DHCPv4 address 159.69.112.232/32, gateway 172.31.1.1 acquired from 172.31.1.1
Aug 13 00:14:28.624760 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Aug 13 00:14:28.629164 ignition[783]: GET result: OK
Aug 13 00:14:28.629293 ignition[783]: parsing config with SHA512: 132e2d11edafe51e1c17e6d8d35d5fb09b34ed2e6abdc86d0d38c8c6921b471758576a232198e31e9c8017ae8b00c944bf741d513e6d4fe816cad9e271234c40
Aug 13 00:14:28.633703 unknown[783]: fetched base config from "system"
Aug 13 00:14:28.633715 unknown[783]: fetched base config from "system"
Aug 13 00:14:28.634114 ignition[783]: fetch: fetch complete
Aug 13 00:14:28.633720 unknown[783]: fetched user config from "hetzner"
Aug 13 00:14:28.634119 ignition[783]: fetch: fetch passed
Aug 13 00:14:28.634169 ignition[783]: Ignition finished successfully
Aug 13 00:14:28.638147 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:14:28.643506 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:14:28.659085 ignition[790]: Ignition 2.19.0
Aug 13 00:14:28.659097 ignition[790]: Stage: kargs
Aug 13 00:14:28.659326 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:14:28.659338 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 13 00:14:28.660373 ignition[790]: kargs: kargs passed
Aug 13 00:14:28.660429 ignition[790]: Ignition finished successfully
Aug 13 00:14:28.662962 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:14:28.670536 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:14:28.684202 ignition[796]: Ignition 2.19.0
Aug 13 00:14:28.684215 ignition[796]: Stage: disks
Aug 13 00:14:28.684587 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:14:28.684602 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 13 00:14:28.685827 ignition[796]: disks: disks passed
Aug 13 00:14:28.685890 ignition[796]: Ignition finished successfully
Aug 13 00:14:28.687904 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:14:28.690184 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:14:28.692457 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:14:28.693596 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:14:28.695084 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:14:28.696248 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:14:28.705572 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:14:28.725680 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Aug 13 00:14:28.730593 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:14:28.738723 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:14:28.790346 kernel: EXT4-fs (sda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:14:28.790833 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:14:28.792109 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:14:28.802496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:14:28.806381 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:14:28.808353 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 00:14:28.810535 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:14:28.810569 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:14:28.819881 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (812)
Aug 13 00:14:28.819371 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:14:28.824455 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:14:28.827469 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:14:28.827496 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:14:28.827507 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:14:28.836308 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 00:14:28.836367 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:14:28.840063 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:14:28.884080 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:14:28.891802 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:14:28.892964 coreos-metadata[814]: Aug 13 00:14:28.892 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Aug 13 00:14:28.895370 coreos-metadata[814]: Aug 13 00:14:28.895 INFO Fetch successful
Aug 13 00:14:28.898297 coreos-metadata[814]: Aug 13 00:14:28.897 INFO wrote hostname ci-4081-3-5-3-d55e308663 to /sysroot/etc/hostname
Aug 13 00:14:28.900007 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:14:28.904681 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:14:28.907313 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:14:29.018437 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:14:29.026523 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:14:29.029468 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:14:29.039300 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:14:29.067504 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:14:29.075230 ignition[930]: INFO : Ignition 2.19.0
Aug 13 00:14:29.075230 ignition[930]: INFO : Stage: mount
Aug 13 00:14:29.076788 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:14:29.076788 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 13 00:14:29.079526 ignition[930]: INFO : mount: mount passed
Aug 13 00:14:29.079526 ignition[930]: INFO : Ignition finished successfully
Aug 13 00:14:29.078652 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:14:29.086444 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:14:29.187385 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:14:29.193598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:14:29.203359 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Aug 13 00:14:29.205372 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:14:29.205436 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:14:29.205456 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:14:29.212047 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 00:14:29.212138 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:14:29.215848 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:14:29.249487 ignition[958]: INFO : Ignition 2.19.0
Aug 13 00:14:29.249487 ignition[958]: INFO : Stage: files
Aug 13 00:14:29.251217 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:14:29.251217 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 13 00:14:29.251217 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:14:29.254332 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:14:29.254332 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:14:29.256225 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:14:29.257326 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:14:29.258224 unknown[958]: wrote ssh authorized keys file for user: core
Aug 13 00:14:29.259024 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:14:29.262260 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:14:29.262260 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Aug 13 00:14:30.290465 systemd-networkd[780]: eth0: Gained IPv6LL
Aug 13 00:14:30.354842 systemd-networkd[780]: eth1: Gained IPv6LL
Aug 13 00:14:30.993671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:14:32.982726 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:14:32.984680 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Aug 13 00:14:33.115214 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:14:33.319800 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:14:33.319800 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:14:33.322979 ignition[958]: INFO : files: files passed
Aug 13 00:14:33.322979 ignition[958]: INFO : Ignition finished successfully
Aug 13 00:14:33.324926 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:14:33.334994 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:14:33.337522 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:14:33.340384 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:14:33.340491 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:14:33.367790 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:14:33.367790 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:14:33.371579 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:14:33.375393 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:14:33.377643 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:14:33.384631 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:14:33.431391 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:14:33.431642 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:14:33.434846 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:14:33.436427 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:14:33.438350 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:14:33.446638 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:14:33.462069 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:14:33.471725 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:14:33.484979 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:14:33.485832 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:14:33.487121 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:14:33.488090 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:14:33.488219 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:14:33.489485 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:14:33.490356 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:14:33.491882 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:14:33.492977 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:14:33.494004 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:14:33.495201 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:14:33.496349 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:14:33.497996 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:14:33.500144 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:14:33.502643 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:14:33.503380 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:14:33.503508 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:14:33.504849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:14:33.505600 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:14:33.507345 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:14:33.508476 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:14:33.509229 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:14:33.509397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:14:33.511409 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:14:33.511551 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:14:33.512899 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:14:33.513006 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:14:33.514658 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 00:14:33.514760 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:14:33.526664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:14:33.530806 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:14:33.532580 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:14:33.532920 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:14:33.537818 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:14:33.538259 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:14:33.547209 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:14:33.547366 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:14:33.556247 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:14:33.559361 ignition[1012]: INFO : Ignition 2.19.0
Aug 13 00:14:33.559361 ignition[1012]: INFO : Stage: umount
Aug 13 00:14:33.559361 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:14:33.559361 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 13 00:14:33.558932 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:14:33.564871 ignition[1012]: INFO : umount: umount passed
Aug 13 00:14:33.564871 ignition[1012]: INFO : Ignition finished successfully
Aug 13 00:14:33.559379 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:14:33.563432 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:14:33.563599 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:14:33.564497 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:14:33.564578 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:14:33.565609 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:14:33.565658 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:14:33.566503 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:14:33.566582 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:14:33.567473 systemd[1]: Stopped target network.target - Network.
Aug 13 00:14:33.568268 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:14:33.568416 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:14:33.569455 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:14:33.570251 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:14:33.574350 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:14:33.576995 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:14:33.578448 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:14:33.579382 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:14:33.579426 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:14:33.580613 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:14:33.580652 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:14:33.581461 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:14:33.581515 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:14:33.582364 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:14:33.582411 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:14:33.583313 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:14:33.583361 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:14:33.584454 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:14:33.585298 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:14:33.590383 systemd-networkd[780]: eth1: DHCPv6 lease lost
Aug 13 00:14:33.594399 systemd-networkd[780]: eth0: DHCPv6 lease lost
Aug 13 00:14:33.597196 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:14:33.597374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:14:33.599622 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:14:33.600203 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:14:33.601640 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:14:33.601700 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:14:33.611030 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:14:33.612125 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:14:33.612228 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:14:33.613930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:14:33.614001 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:14:33.615398 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:14:33.615455 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:14:33.616625 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:14:33.616680 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:14:33.618246 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:14:33.624998 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:14:33.625162 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:14:33.626636 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:14:33.626715 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:14:33.627754 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:14:33.627787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:14:33.628803 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:14:33.628850 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:14:33.630812 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:14:33.630864 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:14:33.632559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:14:33.632613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:14:33.644415 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:14:33.645132 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:14:33.645196 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:14:33.647040 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 00:14:33.647096 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:14:33.650110 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:14:33.650160 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:14:33.653034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:14:33.653086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:14:33.654822 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:14:33.654947 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:14:33.656681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:14:33.656799 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:14:33.659790 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:14:33.666556 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:14:33.677186 systemd[1]: Switching root.
Aug 13 00:14:33.708773 systemd-journald[236]: Journal stopped
Aug 13 00:14:34.715892 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:14:34.715959 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:14:34.715973 kernel: SELinux: policy capability open_perms=1
Aug 13 00:14:34.715988 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:14:34.715997 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:14:34.716010 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:14:34.716021 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:14:34.716032 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:14:34.716041 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:14:34.716051 kernel: audit: type=1403 audit(1755044073.883:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:14:34.716063 systemd[1]: Successfully loaded SELinux policy in 36.360ms.
Aug 13 00:14:34.716084 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.653ms.
Aug 13 00:14:34.716095 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:14:34.716109 systemd[1]: Detected virtualization kvm.
Aug 13 00:14:34.716120 systemd[1]: Detected architecture arm64.
Aug 13 00:14:34.716130 systemd[1]: Detected first boot.
Aug 13 00:14:34.716141 systemd[1]: Hostname set to .
Aug 13 00:14:34.716151 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:14:34.716163 zram_generator::config[1055]: No configuration found.
Aug 13 00:14:34.716174 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:14:34.716184 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:14:34.716194 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:14:34.716205 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:14:34.716216 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:14:34.716226 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:14:34.716236 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:14:34.716248 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:14:34.716259 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:14:34.716269 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:14:34.716291 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:14:34.716303 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:14:34.716313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:14:34.716324 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:14:34.716336 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:14:34.716347 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:14:34.716360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:14:34.716371 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:14:34.716383 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 13 00:14:34.716394 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:14:34.716404 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:14:34.718321 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:14:34.718355 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:14:34.718367 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:14:34.718377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:14:34.718393 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:14:34.718404 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:14:34.718414 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:14:34.718424 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:14:34.718436 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:14:34.718446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:14:34.718460 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:14:34.718470 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:14:34.718481 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:14:34.718492 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:14:34.718502 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:14:34.718544 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:14:34.718559 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:14:34.718570 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:14:34.718580 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:14:34.718593 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:14:34.718604 systemd[1]: Reached target machines.target - Containers. Aug 13 00:14:34.718615 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:14:34.718626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:14:34.718640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:14:34.718654 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:14:34.718665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:14:34.718676 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:14:34.718686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:14:34.718697 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:14:34.718708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:14:34.718719 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:14:34.718729 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:14:34.718740 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:14:34.718753 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:14:34.718763 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:14:34.718774 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:14:34.718784 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Aug 13 00:14:34.718794 kernel: fuse: init (API version 7.39) Aug 13 00:14:34.718805 kernel: loop: module loaded Aug 13 00:14:34.718815 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:14:34.718825 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:14:34.718835 kernel: ACPI: bus type drm_connector registered Aug 13 00:14:34.718847 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:14:34.718858 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:14:34.718869 systemd[1]: Stopped verity-setup.service. Aug 13 00:14:34.718879 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:14:34.718890 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:14:34.718900 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:14:34.718911 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:14:34.718923 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:14:34.718933 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:14:34.718944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:14:34.718954 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:14:34.718966 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:14:34.718977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:14:34.718988 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:14:34.719000 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:14:34.719010 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:14:34.719023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 13 00:14:34.719039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:14:34.719049 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:14:34.719061 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:14:34.719072 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:14:34.719082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:14:34.719093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:14:34.719104 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:14:34.719114 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:14:34.719125 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:14:34.719163 systemd-journald[1122]: Collecting audit messages is disabled. Aug 13 00:14:34.719192 systemd-journald[1122]: Journal started Aug 13 00:14:34.719214 systemd-journald[1122]: Runtime Journal (/run/log/journal/9109749ce56f42269b1e62820e57cec3) is 8.0M, max 76.6M, 68.6M free. Aug 13 00:14:34.433020 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:14:34.454047 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 00:14:34.455114 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:14:34.723390 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:14:34.732569 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:14:34.738313 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:14:34.738389 systemd[1]: Reached target local-fs.target - Local File Systems. 
Aug 13 00:14:34.741309 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:14:34.750692 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:14:34.760307 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:14:34.760387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:14:34.764621 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:14:34.767296 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:14:34.770704 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:14:34.772312 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:14:34.779302 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:14:34.783923 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:14:34.792430 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:14:34.798304 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:14:34.803267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:14:34.812798 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:14:34.817549 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:14:34.820603 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:14:34.822069 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Aug 13 00:14:34.823864 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:14:34.829302 kernel: loop0: detected capacity change from 0 to 8 Aug 13 00:14:34.840299 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:14:34.855821 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:14:34.860386 kernel: loop1: detected capacity change from 0 to 207008 Aug 13 00:14:34.868572 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:14:34.872566 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:14:34.881482 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:14:34.881938 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Aug 13 00:14:34.882289 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Aug 13 00:14:34.892827 systemd-journald[1122]: Time spent on flushing to /var/log/journal/9109749ce56f42269b1e62820e57cec3 is 63.663ms for 1136 entries. Aug 13 00:14:34.892827 systemd-journald[1122]: System Journal (/var/log/journal/9109749ce56f42269b1e62820e57cec3) is 8.0M, max 584.8M, 576.8M free. Aug 13 00:14:34.967264 systemd-journald[1122]: Received client request to flush runtime journal. Aug 13 00:14:34.967382 kernel: loop2: detected capacity change from 0 to 114328 Aug 13 00:14:34.967414 kernel: loop3: detected capacity change from 0 to 114432 Aug 13 00:14:34.896864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:14:34.903368 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:14:34.914500 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:14:34.926196 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 00:14:34.972907 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:14:34.987009 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:14:34.992398 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:14:34.997468 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:14:35.003299 kernel: loop4: detected capacity change from 0 to 8 Aug 13 00:14:35.008292 kernel: loop5: detected capacity change from 0 to 207008 Aug 13 00:14:35.012665 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:14:35.048304 kernel: loop6: detected capacity change from 0 to 114328 Aug 13 00:14:35.058743 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Aug 13 00:14:35.058768 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Aug 13 00:14:35.067315 kernel: loop7: detected capacity change from 0 to 114432 Aug 13 00:14:35.074759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:14:35.083595 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Aug 13 00:14:35.086133 (sd-merge)[1195]: Merged extensions into '/usr'. Aug 13 00:14:35.093111 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:14:35.093137 systemd[1]: Reloading... Aug 13 00:14:35.193033 zram_generator::config[1220]: No configuration found. Aug 13 00:14:35.397353 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:14:35.440171 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:14:35.465712 systemd[1]: Reloading finished in 371 ms. Aug 13 00:14:35.502203 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:14:35.504325 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:14:35.523669 systemd[1]: Starting ensure-sysext.service... Aug 13 00:14:35.528190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:14:35.552381 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:14:35.552430 systemd[1]: Reloading... Aug 13 00:14:35.569091 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:14:35.569814 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:14:35.570686 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:14:35.571027 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 00:14:35.571142 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 00:14:35.574860 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:14:35.574985 systemd-tmpfiles[1262]: Skipping /boot Aug 13 00:14:35.584974 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:14:35.585094 systemd-tmpfiles[1262]: Skipping /boot Aug 13 00:14:35.630319 zram_generator::config[1288]: No configuration found. 
Aug 13 00:14:35.739150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:14:35.796447 systemd[1]: Reloading finished in 243 ms. Aug 13 00:14:35.817751 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:14:35.829123 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:14:35.843697 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:14:35.849514 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:14:35.861459 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:14:35.866525 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:14:35.871610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:14:35.875528 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:14:35.882806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:14:35.891624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:14:35.898607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:14:35.910622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:14:35.912437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:14:35.917350 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:14:35.921316 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Aug 13 00:14:35.932579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:14:35.934332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:14:35.938600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:14:35.938955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:14:35.949648 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:14:35.951459 augenrules[1356]: No rules Aug 13 00:14:35.952139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:14:35.952343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:14:35.955190 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:14:35.956689 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:14:35.956863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:14:35.970775 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:14:35.975933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:14:35.978825 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Aug 13 00:14:35.981678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:14:35.986365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:14:35.988891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:14:35.999543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:14:36.000466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 00:14:36.001353 systemd[1]: Finished ensure-sysext.service. Aug 13 00:14:36.005352 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:14:36.006847 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:14:36.007826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:14:36.008459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:14:36.012650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:14:36.012843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:14:36.025659 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:14:36.025817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:14:36.026850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:14:36.032109 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:14:36.035322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:14:36.039677 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:14:36.071993 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:14:36.074377 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:14:36.074468 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:14:36.078051 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:14:36.079468 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Aug 13 00:14:36.163218 systemd-resolved[1337]: Positive Trust Anchors: Aug 13 00:14:36.165322 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:14:36.165453 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:14:36.172407 systemd-networkd[1399]: lo: Link UP Aug 13 00:14:36.172418 systemd-networkd[1399]: lo: Gained carrier Aug 13 00:14:36.172990 systemd-resolved[1337]: Using system hostname 'ci-4081-3-5-3-d55e308663'. Aug 13 00:14:36.173078 systemd-networkd[1399]: Enumeration completed Aug 13 00:14:36.173197 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:14:36.193459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:14:36.194172 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:14:36.195670 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 13 00:14:36.195789 systemd[1]: Reached target network.target - Network. Aug 13 00:14:36.197354 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:14:36.200063 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:14:36.201435 systemd[1]: Reached target time-set.target - System Time Set. 
Aug 13 00:14:36.262347 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:14:36.273068 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:14:36.273080 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:14:36.273796 systemd-networkd[1399]: eth0: Link UP Aug 13 00:14:36.273804 systemd-networkd[1399]: eth0: Gained carrier Aug 13 00:14:36.273827 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:14:36.295390 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1381) Aug 13 00:14:36.304669 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:14:36.304682 systemd-networkd[1399]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:14:36.305780 systemd-networkd[1399]: eth1: Link UP Aug 13 00:14:36.305789 systemd-networkd[1399]: eth1: Gained carrier Aug 13 00:14:36.305808 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:14:36.333473 systemd-networkd[1399]: eth0: DHCPv4 address 159.69.112.232/32, gateway 172.31.1.1 acquired from 172.31.1.1 Aug 13 00:14:36.337438 systemd-networkd[1399]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Aug 13 00:14:36.338722 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Aug 13 00:14:36.361178 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. 
Aug 13 00:14:36.361848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:14:36.369454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:14:36.372028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:14:36.378541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:14:36.379198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:14:36.379232 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:14:36.383958 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:14:36.385673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:14:36.387308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:14:36.393425 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:14:36.396819 systemd-timesyncd[1402]: Contacted time server 17.253.14.251:123 (0.flatcar.pool.ntp.org). Aug 13 00:14:36.396879 systemd-timesyncd[1402]: Initial clock synchronization to Wed 2025-08-13 00:14:36.375315 UTC. Aug 13 00:14:36.397358 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:14:36.397561 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:14:36.400102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:14:36.400874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 13 00:14:36.402245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:14:36.403168 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:14:36.404355 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Aug 13 00:14:36.405435 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 00:14:36.405530 kernel: [drm] features: -context_init Aug 13 00:14:36.424590 kernel: [drm] number of scanouts: 1 Aug 13 00:14:36.424692 kernel: [drm] number of cap sets: 0 Aug 13 00:14:36.431425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:14:36.437855 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:14:36.442303 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Aug 13 00:14:36.452670 kernel: Console: switching to colour frame buffer device 160x50 Aug 13 00:14:36.459312 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 13 00:14:36.478156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:14:36.478403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:14:36.485538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:14:36.556031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:14:36.606128 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:14:36.612533 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:14:36.630319 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Aug 13 00:14:36.657357 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:14:36.658862 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:14:36.660288 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:14:36.661026 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:14:36.661932 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:14:36.662938 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:14:36.663758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:14:36.664635 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:14:36.665445 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:14:36.665522 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:14:36.666082 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:14:36.669388 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:14:36.671825 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:14:36.677715 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:14:36.680335 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:14:36.682086 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:14:36.683069 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:14:36.684113 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:14:36.684766 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Aug 13 00:14:36.684800 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:14:36.696346 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:14:36.700523 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:14:36.701910 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:14:36.709511 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:14:36.713005 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:14:36.722749 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:14:36.723575 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:14:36.726751 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:14:36.732147 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:14:36.735231 jq[1456]: false Aug 13 00:14:36.745490 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Aug 13 00:14:36.750098 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:14:36.755585 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:14:36.764560 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:14:36.767052 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:14:36.767641 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:14:36.770586 systemd[1]: Starting update-engine.service - Update Engine... 
Aug 13 00:14:36.774139 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:14:36.775776 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:14:36.777641 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:14:36.778342 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:14:36.780074 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:14:36.780442 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:14:36.800740 extend-filesystems[1459]: Found loop4 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found loop5 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found loop6 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found loop7 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found sda Aug 13 00:14:36.804365 extend-filesystems[1459]: Found sda1 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found sda2 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found sda3 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found usr Aug 13 00:14:36.804365 extend-filesystems[1459]: Found sda4 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found sda6 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found sda7 Aug 13 00:14:36.804365 extend-filesystems[1459]: Found sda9 Aug 13 00:14:36.804365 extend-filesystems[1459]: Checking size of /dev/sda9 Aug 13 00:14:36.836994 coreos-metadata[1454]: Aug 13 00:14:36.824 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Aug 13 00:14:36.836994 coreos-metadata[1454]: Aug 13 00:14:36.835 INFO Fetch successful Aug 13 00:14:36.836994 coreos-metadata[1454]: Aug 13 00:14:36.835 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Aug 13 00:14:36.836994 coreos-metadata[1454]: Aug 13 00:14:36.836 INFO Fetch successful
Aug 13 00:14:36.810643 dbus-daemon[1455]: [system] SELinux support is enabled Aug 13 00:14:36.815877 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:14:36.823833 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:14:36.823862 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:14:36.850262 jq[1469]: true Aug 13 00:14:36.825779 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:14:36.825802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:14:36.848927 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:14:36.866384 tar[1472]: linux-arm64/LICENSE Aug 13 00:14:36.866384 tar[1472]: linux-arm64/helm Aug 13 00:14:36.877755 update_engine[1468]: I20250813 00:14:36.876926 1468 main.cc:92] Flatcar Update Engine starting Aug 13 00:14:36.878600 extend-filesystems[1459]: Resized partition /dev/sda9 Aug 13 00:14:36.879054 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:14:36.879259 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:14:36.891517 extend-filesystems[1500]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:14:36.892063 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:14:36.899656 update_engine[1468]: I20250813 00:14:36.894505 1468 update_check_scheduler.cc:74] Next update check in 7m0s Aug 13 00:14:36.917308 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Aug 13 00:14:36.915603 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 00:14:36.941317 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1377) Aug 13 00:14:36.946330 jq[1490]: true Aug 13 00:14:37.007934 systemd-logind[1467]: New seat seat0. Aug 13 00:14:37.019405 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:14:37.019428 systemd-logind[1467]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Aug 13 00:14:37.019671 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:14:37.059814 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:14:37.060920 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:14:37.123577 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Aug 13 00:14:37.167883 bash[1528]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:14:37.127631 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:14:37.154657 systemd[1]: Starting sshkeys.service... Aug 13 00:14:37.167680 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:14:37.179243 extend-filesystems[1500]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:14:37.179243 extend-filesystems[1500]: old_desc_blocks = 1, new_desc_blocks = 5 Aug 13 00:14:37.179243 extend-filesystems[1500]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Aug 13 00:14:37.185459 extend-filesystems[1459]: Resized filesystem in /dev/sda9 Aug 13 00:14:37.185459 extend-filesystems[1459]: Found sr0 Aug 13 00:14:37.208203 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:14:37.210120 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Aug 13 00:14:37.210902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:14:37.265115 coreos-metadata[1533]: Aug 13 00:14:37.264 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Aug 13 00:14:37.270313 coreos-metadata[1533]: Aug 13 00:14:37.269 INFO Fetch successful Aug 13 00:14:37.272220 unknown[1533]: wrote ssh authorized keys file for user: core Aug 13 00:14:37.295836 containerd[1487]: time="2025-08-13T00:14:37.294827905Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:14:37.306069 update-ssh-keys[1541]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:14:37.308102 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:14:37.317121 systemd[1]: Finished sshkeys.service. Aug 13 00:14:37.376763 containerd[1487]: time="2025-08-13T00:14:37.376713377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:14:37.379698 containerd[1487]: time="2025-08-13T00:14:37.379650506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:14:37.379698 containerd[1487]: time="2025-08-13T00:14:37.379689857Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:14:37.379698 containerd[1487]: time="2025-08-13T00:14:37.379708515Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:14:37.379904 containerd[1487]: time="2025-08-13T00:14:37.379878826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Aug 13 00:14:37.379938 containerd[1487]: time="2025-08-13T00:14:37.379905833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:14:37.380032 containerd[1487]: time="2025-08-13T00:14:37.379976587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:14:37.380060 containerd[1487]: time="2025-08-13T00:14:37.379995164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:14:37.380412 containerd[1487]: time="2025-08-13T00:14:37.380253249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:14:37.383493 containerd[1487]: time="2025-08-13T00:14:37.383278310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:14:37.383493 containerd[1487]: time="2025-08-13T00:14:37.383329926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:14:37.383493 containerd[1487]: time="2025-08-13T00:14:37.383342191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:14:37.383493 containerd[1487]: time="2025-08-13T00:14:37.383449420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:14:37.383954 containerd[1487]: time="2025-08-13T00:14:37.383669032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:14:37.383954 containerd[1487]: time="2025-08-13T00:14:37.383806464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:14:37.383954 containerd[1487]: time="2025-08-13T00:14:37.383821725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:14:37.383954 containerd[1487]: time="2025-08-13T00:14:37.383897992Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:14:37.383954 containerd[1487]: time="2025-08-13T00:14:37.383937823Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:14:37.392436 containerd[1487]: time="2025-08-13T00:14:37.392231362Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:14:37.392436 containerd[1487]: time="2025-08-13T00:14:37.392308068Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:14:37.392436 containerd[1487]: time="2025-08-13T00:14:37.392333557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:14:37.392436 containerd[1487]: time="2025-08-13T00:14:37.392352813Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:14:37.392436 containerd[1487]: time="2025-08-13T00:14:37.392369153Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:14:37.392664 containerd[1487]: time="2025-08-13T00:14:37.392538946Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.392826514Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.392951401Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.392971577Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393070296Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393091390Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393106772Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393119556Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393134498Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393149719Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393161824Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393175168Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393189790Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393210764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.393429 containerd[1487]: time="2025-08-13T00:14:37.393225107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.393711 containerd[1487]: time="2025-08-13T00:14:37.393247599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.393711 containerd[1487]: time="2025-08-13T00:14:37.393265937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.394434 systemd-networkd[1399]: eth0: Gained IPv6LL Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395352186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395391458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395405720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395419104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395434565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395451385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395465008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395477193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395492454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395509793Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395535202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395548546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395563368Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:14:37.397286 containerd[1487]: time="2025-08-13T00:14:37.395677788Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:14:37.397696 containerd[1487]: time="2025-08-13T00:14:37.395695965Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:14:37.397696 containerd[1487]: time="2025-08-13T00:14:37.395709109Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:14:37.397696 containerd[1487]: time="2025-08-13T00:14:37.395722014Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:14:37.397696 containerd[1487]: time="2025-08-13T00:14:37.395731322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:14:37.397696 containerd[1487]: time="2025-08-13T00:14:37.395743388Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:14:37.397696 containerd[1487]: time="2025-08-13T00:14:37.395753815Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:14:37.397696 containerd[1487]: time="2025-08-13T00:14:37.395763723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:14:37.397829 containerd[1487]: time="2025-08-13T00:14:37.396146255Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:14:37.397829 containerd[1487]: time="2025-08-13T00:14:37.396204344Z" level=info msg="Connect containerd service" Aug 13 00:14:37.397909 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:14:37.399925 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:14:37.401599 containerd[1487]: time="2025-08-13T00:14:37.401557758Z" level=info msg="using legacy CRI server" Aug 13 00:14:37.401725 containerd[1487]: time="2025-08-13T00:14:37.401684323Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:14:37.407174 containerd[1487]: time="2025-08-13T00:14:37.406178747Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:14:37.409022 containerd[1487]: time="2025-08-13T00:14:37.408633626Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:14:37.409063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:14:37.410848 containerd[1487]: time="2025-08-13T00:14:37.410405819Z" level=info msg="Start subscribing containerd event" Aug 13 00:14:37.410848 containerd[1487]: time="2025-08-13T00:14:37.410477291Z" level=info msg="Start recovering state" Aug 13 00:14:37.410848 containerd[1487]: time="2025-08-13T00:14:37.410556355Z" level=info msg="Start event monitor" Aug 13 00:14:37.410848 containerd[1487]: time="2025-08-13T00:14:37.410567501Z" level=info msg="Start snapshots syncer" Aug 13 00:14:37.410848 containerd[1487]: time="2025-08-13T00:14:37.410577609Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:14:37.410848 containerd[1487]: time="2025-08-13T00:14:37.410584720Z" level=info msg="Start streaming server" Aug 13 00:14:37.411894 containerd[1487]: time="2025-08-13T00:14:37.411861439Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:14:37.412091 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:14:37.415313 containerd[1487]: time="2025-08-13T00:14:37.412380844Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:14:37.415313 containerd[1487]: time="2025-08-13T00:14:37.412442448Z" level=info msg="containerd successfully booted in 0.118426s" Aug 13 00:14:37.413648 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:14:37.435775 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:14:37.473829 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:14:37.820831 tar[1472]: linux-arm64/README.md Aug 13 00:14:37.839949 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:14:37.907288 systemd-networkd[1399]: eth1: Gained IPv6LL Aug 13 00:14:38.343467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:14:38.349981 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:14:38.902776 kubelet[1570]: E0813 00:14:38.902727 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:14:38.906484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:14:38.906891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:14:38.921490 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:14:38.943723 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:14:38.952344 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:14:38.974973 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:14:38.975267 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:14:38.981774 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:14:39.006621 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:14:39.015802 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:14:39.018729 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 13 00:14:39.019862 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:14:39.020722 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:14:39.021606 systemd[1]: Startup finished in 837ms (kernel) + 8.189s (initrd) + 5.175s (userspace) = 14.202s. Aug 13 00:14:49.157704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 13 00:14:49.167727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:14:49.303998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:14:49.308838 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:14:49.361478 kubelet[1607]: E0813 00:14:49.361381 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:14:49.366831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:14:49.367168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:14:59.617885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:14:59.623584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:14:59.762256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:14:59.783903 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:14:59.834446 kubelet[1622]: E0813 00:14:59.834379 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:14:59.837312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:14:59.837629 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 00:15:10.088248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:15:10.094694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:10.242560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:10.244149 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:15:10.297121 kubelet[1638]: E0813 00:15:10.297054 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:15:10.299666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:15:10.299941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:15:20.398921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 00:15:20.416669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:20.558784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:15:20.575885 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:15:20.638439 kubelet[1653]: E0813 00:15:20.638322 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:15:20.641577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:15:20.641808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:15:21.063906 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:15:21.071946 systemd[1]: Started sshd@0-159.69.112.232:22-139.178.89.65:44520.service - OpenSSH per-connection server daemon (139.178.89.65:44520). Aug 13 00:15:21.837453 update_engine[1468]: I20250813 00:15:21.837159 1468 update_attempter.cc:509] Updating boot flags... Aug 13 00:15:21.883347 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1672) Aug 13 00:15:21.949659 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676) Aug 13 00:15:22.080134 sshd[1661]: Accepted publickey for core from 139.178.89.65 port 44520 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:22.082574 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:22.094001 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:15:22.106987 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:15:22.111473 systemd-logind[1467]: New session 1 of user core. 
Aug 13 00:15:22.120784 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:15:22.132823 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:15:22.137252 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:15:22.254557 systemd[1683]: Queued start job for default target default.target. Aug 13 00:15:22.266038 systemd[1683]: Created slice app.slice - User Application Slice. Aug 13 00:15:22.266095 systemd[1683]: Reached target paths.target - Paths. Aug 13 00:15:22.266120 systemd[1683]: Reached target timers.target - Timers. Aug 13 00:15:22.270479 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:15:22.281377 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:15:22.281526 systemd[1683]: Reached target sockets.target - Sockets. Aug 13 00:15:22.281539 systemd[1683]: Reached target basic.target - Basic System. Aug 13 00:15:22.281578 systemd[1683]: Reached target default.target - Main User Target. Aug 13 00:15:22.281605 systemd[1683]: Startup finished in 137ms. Aug 13 00:15:22.282083 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:15:22.294961 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:15:22.994309 systemd[1]: Started sshd@1-159.69.112.232:22-139.178.89.65:44534.service - OpenSSH per-connection server daemon (139.178.89.65:44534). Aug 13 00:15:24.000033 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 44534 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:24.002471 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:24.009505 systemd-logind[1467]: New session 2 of user core. Aug 13 00:15:24.014655 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 13 00:15:24.689416 sshd[1694]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:24.693223 systemd[1]: sshd@1-159.69.112.232:22-139.178.89.65:44534.service: Deactivated successfully. Aug 13 00:15:24.696159 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:15:24.697747 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:15:24.699175 systemd-logind[1467]: Removed session 2. Aug 13 00:15:24.872831 systemd[1]: Started sshd@2-159.69.112.232:22-139.178.89.65:44542.service - OpenSSH per-connection server daemon (139.178.89.65:44542). Aug 13 00:15:25.863422 sshd[1701]: Accepted publickey for core from 139.178.89.65 port 44542 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:25.865792 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:25.871845 systemd-logind[1467]: New session 3 of user core. Aug 13 00:15:25.879884 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:15:26.548566 sshd[1701]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:26.553404 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:15:26.553693 systemd[1]: sshd@2-159.69.112.232:22-139.178.89.65:44542.service: Deactivated successfully. Aug 13 00:15:26.557326 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:15:26.559441 systemd-logind[1467]: Removed session 3. Aug 13 00:15:26.728612 systemd[1]: Started sshd@3-159.69.112.232:22-139.178.89.65:44546.service - OpenSSH per-connection server daemon (139.178.89.65:44546). Aug 13 00:15:27.725819 sshd[1708]: Accepted publickey for core from 139.178.89.65 port 44546 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:27.727698 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:27.734562 systemd-logind[1467]: New session 4 of user core. 
Aug 13 00:15:27.745653 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 00:15:28.418409 sshd[1708]: pam_unix(sshd:session): session closed for user core
Aug 13 00:15:28.422452 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:15:28.422602 systemd[1]: sshd@3-159.69.112.232:22-139.178.89.65:44546.service: Deactivated successfully.
Aug 13 00:15:28.425336 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:15:28.427651 systemd-logind[1467]: Removed session 4.
Aug 13 00:15:28.616122 systemd[1]: Started sshd@4-159.69.112.232:22-139.178.89.65:44558.service - OpenSSH per-connection server daemon (139.178.89.65:44558).
Aug 13 00:15:29.667734 sshd[1715]: Accepted publickey for core from 139.178.89.65 port 44558 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:15:29.670589 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:15:29.676866 systemd-logind[1467]: New session 5 of user core.
Aug 13 00:15:29.683631 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 00:15:30.238581 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 00:15:30.238950 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:15:30.254333 sudo[1718]: pam_unix(sudo:session): session closed for user root
Aug 13 00:15:30.427954 sshd[1715]: pam_unix(sshd:session): session closed for user core
Aug 13 00:15:30.434679 systemd[1]: sshd@4-159.69.112.232:22-139.178.89.65:44558.service: Deactivated successfully.
Aug 13 00:15:30.436842 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:15:30.437643 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:15:30.439140 systemd-logind[1467]: Removed session 5.
Aug 13 00:15:30.595784 systemd[1]: Started sshd@5-159.69.112.232:22-139.178.89.65:60454.service - OpenSSH per-connection server daemon (139.178.89.65:60454).
Aug 13 00:15:30.648911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Aug 13 00:15:30.654745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:15:30.834904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:15:30.843932 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:15:30.909066 kubelet[1733]: E0813 00:15:30.909014 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:15:30.912390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:15:30.912566 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:15:31.588185 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 60454 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:15:31.590323 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:15:31.595811 systemd-logind[1467]: New session 6 of user core.
Aug 13 00:15:31.603866 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 00:15:32.122003 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 00:15:32.122454 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:15:32.127892 sudo[1742]: pam_unix(sudo:session): session closed for user root
Aug 13 00:15:32.136069 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 00:15:32.136475 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:15:32.156079 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 00:15:32.159850 auditctl[1745]: No rules
Aug 13 00:15:32.161098 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:15:32.161469 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 00:15:32.169987 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 00:15:32.209890 augenrules[1763]: No rules
Aug 13 00:15:32.215081 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 00:15:32.216832 sudo[1741]: pam_unix(sudo:session): session closed for user root
Aug 13 00:15:32.383740 sshd[1723]: pam_unix(sshd:session): session closed for user core
Aug 13 00:15:32.390206 systemd[1]: sshd@5-159.69.112.232:22-139.178.89.65:60454.service: Deactivated successfully.
Aug 13 00:15:32.393835 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:15:32.394916 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:15:32.396177 systemd-logind[1467]: Removed session 6.
Aug 13 00:15:32.586806 systemd[1]: Started sshd@6-159.69.112.232:22-139.178.89.65:60470.service - OpenSSH per-connection server daemon (139.178.89.65:60470).
Aug 13 00:15:33.636209 sshd[1771]: Accepted publickey for core from 139.178.89.65 port 60470 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:15:33.638188 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:15:33.643494 systemd-logind[1467]: New session 7 of user core.
Aug 13 00:15:33.649512 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 00:15:34.191885 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:15:34.192181 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:15:34.507781 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 00:15:34.516895 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 00:15:34.778943 dockerd[1789]: time="2025-08-13T00:15:34.778765133Z" level=info msg="Starting up"
Aug 13 00:15:34.875932 dockerd[1789]: time="2025-08-13T00:15:34.875839819Z" level=info msg="Loading containers: start."
Aug 13 00:15:34.992303 kernel: Initializing XFRM netlink socket
Aug 13 00:15:35.083104 systemd-networkd[1399]: docker0: Link UP
Aug 13 00:15:35.107771 dockerd[1789]: time="2025-08-13T00:15:35.107672113Z" level=info msg="Loading containers: done."
Aug 13 00:15:35.128057 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1682746801-merged.mount: Deactivated successfully.
Aug 13 00:15:35.129960 dockerd[1789]: time="2025-08-13T00:15:35.129888230Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:15:35.130072 dockerd[1789]: time="2025-08-13T00:15:35.130014987Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 00:15:35.130182 dockerd[1789]: time="2025-08-13T00:15:35.130145063Z" level=info msg="Daemon has completed initialization"
Aug 13 00:15:35.170442 dockerd[1789]: time="2025-08-13T00:15:35.170286622Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:15:35.170808 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 00:15:36.256484 containerd[1487]: time="2025-08-13T00:15:36.256330196Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 00:15:36.867117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250551475.mount: Deactivated successfully.
Aug 13 00:15:37.988025 containerd[1487]: time="2025-08-13T00:15:37.987948963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:37.990090 containerd[1487]: time="2025-08-13T00:15:37.990021070Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327873"
Aug 13 00:15:37.991051 containerd[1487]: time="2025-08-13T00:15:37.990928607Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:37.995323 containerd[1487]: time="2025-08-13T00:15:37.994764149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:37.996752 containerd[1487]: time="2025-08-13T00:15:37.996464586Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.740080193s"
Aug 13 00:15:37.996752 containerd[1487]: time="2025-08-13T00:15:37.996546144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\""
Aug 13 00:15:37.997529 containerd[1487]: time="2025-08-13T00:15:37.997482000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 13 00:15:39.735968 containerd[1487]: time="2025-08-13T00:15:39.735878262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:39.737867 containerd[1487]: time="2025-08-13T00:15:39.737545985Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529716"
Aug 13 00:15:39.739059 containerd[1487]: time="2025-08-13T00:15:39.738986713Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:39.743350 containerd[1487]: time="2025-08-13T00:15:39.743290537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:39.745299 containerd[1487]: time="2025-08-13T00:15:39.745114856Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.747473099s"
Aug 13 00:15:39.745299 containerd[1487]: time="2025-08-13T00:15:39.745159175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\""
Aug 13 00:15:39.746905 containerd[1487]: time="2025-08-13T00:15:39.746530064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 00:15:41.024399 containerd[1487]: time="2025-08-13T00:15:41.024202897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:41.026981 containerd[1487]: time="2025-08-13T00:15:41.026781572Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484158"
Aug 13 00:15:41.026981 containerd[1487]: time="2025-08-13T00:15:41.026869974Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:41.033454 containerd[1487]: time="2025-08-13T00:15:41.033179076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:41.038490 containerd[1487]: time="2025-08-13T00:15:41.038363665Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.291772801s"
Aug 13 00:15:41.038490 containerd[1487]: time="2025-08-13T00:15:41.038415986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\""
Aug 13 00:15:41.039112 containerd[1487]: time="2025-08-13T00:15:41.038873720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 00:15:41.148604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Aug 13 00:15:41.156753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:15:41.284637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:15:41.305930 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:15:41.354954 kubelet[1998]: E0813 00:15:41.354760 1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:15:41.358672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:15:41.358977 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:15:42.215562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646552869.mount: Deactivated successfully.
Aug 13 00:15:42.605337 containerd[1487]: time="2025-08-13T00:15:42.605163346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:42.606672 containerd[1487]: time="2025-08-13T00:15:42.606497943Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378431"
Aug 13 00:15:42.608252 containerd[1487]: time="2025-08-13T00:15:42.608135069Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:42.612584 containerd[1487]: time="2025-08-13T00:15:42.612491551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:42.614293 containerd[1487]: time="2025-08-13T00:15:42.613591021Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.574671701s"
Aug 13 00:15:42.614293 containerd[1487]: time="2025-08-13T00:15:42.613662503Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\""
Aug 13 00:15:42.614607 containerd[1487]: time="2025-08-13T00:15:42.614561969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:15:43.193266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371939834.mount: Deactivated successfully.
Aug 13 00:15:43.905023 containerd[1487]: time="2025-08-13T00:15:43.904953647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:43.907427 containerd[1487]: time="2025-08-13T00:15:43.907344233Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Aug 13 00:15:43.907928 containerd[1487]: time="2025-08-13T00:15:43.907836606Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:43.911421 containerd[1487]: time="2025-08-13T00:15:43.911327141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:43.912829 containerd[1487]: time="2025-08-13T00:15:43.912654377Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.298036847s"
Aug 13 00:15:43.912829 containerd[1487]: time="2025-08-13T00:15:43.912702578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Aug 13 00:15:43.913522 containerd[1487]: time="2025-08-13T00:15:43.913365236Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:15:44.456829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990058610.mount: Deactivated successfully.
Aug 13 00:15:44.464360 containerd[1487]: time="2025-08-13T00:15:44.464266716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:44.465592 containerd[1487]: time="2025-08-13T00:15:44.465527109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Aug 13 00:15:44.466864 containerd[1487]: time="2025-08-13T00:15:44.466763862Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:44.470044 containerd[1487]: time="2025-08-13T00:15:44.469961427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:44.472094 containerd[1487]: time="2025-08-13T00:15:44.471578869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 558.172672ms"
Aug 13 00:15:44.472094 containerd[1487]: time="2025-08-13T00:15:44.471629831Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 13 00:15:44.472330 containerd[1487]: time="2025-08-13T00:15:44.472299568Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 00:15:45.084075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504476237.mount: Deactivated successfully.
Aug 13 00:15:47.029241 containerd[1487]: time="2025-08-13T00:15:47.029126870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:47.031497 containerd[1487]: time="2025-08-13T00:15:47.031345444Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537"
Aug 13 00:15:47.032390 containerd[1487]: time="2025-08-13T00:15:47.032336788Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:47.038204 containerd[1487]: time="2025-08-13T00:15:47.038074047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:15:47.040491 containerd[1487]: time="2025-08-13T00:15:47.040255020Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.567917691s"
Aug 13 00:15:47.040491 containerd[1487]: time="2025-08-13T00:15:47.040335102Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Aug 13 00:15:51.400444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Aug 13 00:15:51.406482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:15:51.553479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:15:51.554777 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:15:51.618487 kubelet[2151]: E0813 00:15:51.618429 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:15:51.621305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:15:51.621455 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:15:51.940271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:15:51.946765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:15:51.980017 systemd[1]: Reloading requested from client PID 2164 ('systemctl') (unit session-7.scope)...
Aug 13 00:15:51.980038 systemd[1]: Reloading...
Aug 13 00:15:52.103804 zram_generator::config[2204]: No configuration found.
Aug 13 00:15:52.215715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:15:52.301327 systemd[1]: Reloading finished in 320 ms.
Aug 13 00:15:52.356486 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:15:52.356646 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:15:52.357107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:15:52.370182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:15:52.516118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:15:52.526677 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 00:15:52.580407 kubelet[2251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:15:52.580819 kubelet[2251]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:15:52.580874 kubelet[2251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:15:52.581203 kubelet[2251]: I0813 00:15:52.581155 2251 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:15:53.141782 kubelet[2251]: I0813 00:15:53.141722 2251 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 00:15:53.141782 kubelet[2251]: I0813 00:15:53.141761 2251 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:15:53.142399 kubelet[2251]: I0813 00:15:53.142373 2251 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 00:15:53.180708 kubelet[2251]: E0813 00:15:53.180658 2251 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://159.69.112.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:15:53.183683 kubelet[2251]: I0813 00:15:53.183635 2251 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:15:53.192379 kubelet[2251]: E0813 00:15:53.191932 2251 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:15:53.192379 kubelet[2251]: I0813 00:15:53.191982 2251 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:15:53.195421 kubelet[2251]: I0813 00:15:53.195371 2251 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:15:53.196504 kubelet[2251]: I0813 00:15:53.196446 2251 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:15:53.196724 kubelet[2251]: I0813 00:15:53.196505 2251 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-3-d55e308663","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:15:53.196828 kubelet[2251]: I0813 00:15:53.196791 2251 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:15:53.196828 kubelet[2251]: I0813 00:15:53.196803 2251 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 00:15:53.197048 kubelet[2251]: I0813 00:15:53.197017 2251 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:15:53.200692 kubelet[2251]: I0813 00:15:53.200523 2251 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 00:15:53.200692 kubelet[2251]: I0813 00:15:53.200555 2251 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:15:53.200692 kubelet[2251]: I0813 00:15:53.200578 2251 kubelet.go:352] "Adding apiserver pod source"
Aug 13 00:15:53.200692 kubelet[2251]: I0813 00:15:53.200590 2251 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:15:53.207182 kubelet[2251]: W0813 00:15:53.206650 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.112.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-3-d55e308663&limit=500&resourceVersion=0": dial tcp 159.69.112.232:6443: connect: connection refused
Aug 13 00:15:53.207182 kubelet[2251]: E0813 00:15:53.206713 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.69.112.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-3-d55e308663&limit=500&resourceVersion=0\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:15:53.207445 kubelet[2251]: W0813 00:15:53.207392 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.69.112.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.69.112.232:6443: connect: connection refused
Aug 13 00:15:53.207569 kubelet[2251]: E0813 00:15:53.207551 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.69.112.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:15:53.207730 kubelet[2251]: I0813 00:15:53.207714 2251 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 00:15:53.209303 kubelet[2251]: I0813 00:15:53.208538 2251 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:15:53.209303 kubelet[2251]: W0813 00:15:53.208673 2251 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:15:53.211014 kubelet[2251]: I0813 00:15:53.210978 2251 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:15:53.211116 kubelet[2251]: I0813 00:15:53.211108 2251 server.go:1287] "Started kubelet"
Aug 13 00:15:53.212284 kubelet[2251]: I0813 00:15:53.212205 2251 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:15:53.213432 kubelet[2251]: I0813 00:15:53.213200 2251 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 00:15:53.218049 kubelet[2251]: I0813 00:15:53.217971 2251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:15:53.218488 kubelet[2251]: I0813 00:15:53.218468 2251 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:15:53.220363 kubelet[2251]: E0813 00:15:53.220024 2251 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.112.232:6443/api/v1/namespaces/default/events\": dial tcp 159.69.112.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-3-d55e308663.185b2b5bd545e7b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-3-d55e308663,UID:ci-4081-3-5-3-d55e308663,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-3-d55e308663,},FirstTimestamp:2025-08-13 00:15:53.21108677 +0000 UTC m=+0.677829956,LastTimestamp:2025-08-13 00:15:53.21108677 +0000 UTC m=+0.677829956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-3-d55e308663,}"
Aug 13 00:15:53.220912 kubelet[2251]: I0813 00:15:53.220888 2251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:15:53.224540 kubelet[2251]: I0813 00:15:53.224515 2251 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:15:53.224898 kubelet[2251]: I0813 00:15:53.224870 2251 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:15:53.225252 kubelet[2251]: E0813 00:15:53.225202 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-3-d55e308663\" not found"
Aug 13 00:15:53.227518 kubelet[2251]: E0813 00:15:53.227476 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.112.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-3-d55e308663?timeout=10s\": dial tcp 159.69.112.232:6443: connect: connection refused" interval="200ms"
Aug 13 00:15:53.227819 kubelet[2251]: I0813 00:15:53.227800 2251 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:15:53.227970 kubelet[2251]: I0813 00:15:53.227952 2251 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:15:53.229959 kubelet[2251]: I0813 00:15:53.229831 2251 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:15:53.230128 kubelet[2251]: I0813 00:15:53.230112 2251 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:15:53.231586 kubelet[2251]: W0813 00:15:53.231540 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.112.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.112.232:6443: connect: connection refused
Aug 13 00:15:53.231691 kubelet[2251]: E0813 00:15:53.231674 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.69.112.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:15:53.231874 kubelet[2251]: I0813 00:15:53.231858 2251 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:15:53.249013 kubelet[2251]: I0813 00:15:53.248923 2251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:15:53.251039 kubelet[2251]: I0813 00:15:53.250946 2251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:15:53.251039 kubelet[2251]: I0813 00:15:53.251001 2251 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 00:15:53.251039 kubelet[2251]: I0813 00:15:53.251041 2251 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:15:53.251417 kubelet[2251]: I0813 00:15:53.251064 2251 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:15:53.251417 kubelet[2251]: E0813 00:15:53.251135 2251 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:15:53.260880 kubelet[2251]: W0813 00:15:53.260451 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.69.112.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.112.232:6443: connect: connection refused Aug 13 00:15:53.260880 kubelet[2251]: E0813 00:15:53.260865 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.69.112.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:53.268270 kubelet[2251]: I0813 00:15:53.267910 2251 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:15:53.268270 kubelet[2251]: I0813 00:15:53.267930 2251 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:15:53.268270 kubelet[2251]: I0813 00:15:53.267952 2251 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:53.270353 kubelet[2251]: I0813 00:15:53.270327 2251 policy_none.go:49] "None policy: Start" Aug 13 00:15:53.270486 kubelet[2251]: I0813 00:15:53.270475 2251 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:15:53.270543 kubelet[2251]: I0813 00:15:53.270535 2251 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:15:53.277626 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Aug 13 00:15:53.297103 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:15:53.310932 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:15:53.312751 kubelet[2251]: I0813 00:15:53.312672 2251 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:15:53.312937 kubelet[2251]: I0813 00:15:53.312914 2251 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:15:53.313052 kubelet[2251]: I0813 00:15:53.312932 2251 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:15:53.313860 kubelet[2251]: I0813 00:15:53.313426 2251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:15:53.315530 kubelet[2251]: E0813 00:15:53.315428 2251 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:15:53.315530 kubelet[2251]: E0813 00:15:53.315491 2251 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-3-d55e308663\" not found" Aug 13 00:15:53.365792 systemd[1]: Created slice kubepods-burstable-podb050c8a7cb39ed9e39e73dce340511f5.slice - libcontainer container kubepods-burstable-podb050c8a7cb39ed9e39e73dce340511f5.slice. Aug 13 00:15:53.376158 kubelet[2251]: E0813 00:15:53.375824 2251 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.381061 systemd[1]: Created slice kubepods-burstable-pod3ae12576553ee962351d3159b9163c53.slice - libcontainer container kubepods-burstable-pod3ae12576553ee962351d3159b9163c53.slice. 
Aug 13 00:15:53.391377 kubelet[2251]: E0813 00:15:53.391205 2251 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.395847 systemd[1]: Created slice kubepods-burstable-pod2c9a645a1050db186575c0d1ae4d3ca3.slice - libcontainer container kubepods-burstable-pod2c9a645a1050db186575c0d1ae4d3ca3.slice. Aug 13 00:15:53.398393 kubelet[2251]: E0813 00:15:53.398347 2251 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.416328 kubelet[2251]: I0813 00:15:53.416008 2251 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.416682 kubelet[2251]: E0813 00:15:53.416551 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.112.232:6443/api/v1/nodes\": dial tcp 159.69.112.232:6443: connect: connection refused" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.428694 kubelet[2251]: E0813 00:15:53.428636 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.112.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-3-d55e308663?timeout=10s\": dial tcp 159.69.112.232:6443: connect: connection refused" interval="400ms" Aug 13 00:15:53.531129 kubelet[2251]: I0813 00:15:53.531023 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.531129 kubelet[2251]: I0813 00:15:53.531095 2251 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.531129 kubelet[2251]: I0813 00:15:53.531131 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.531493 kubelet[2251]: I0813 00:15:53.531162 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ae12576553ee962351d3159b9163c53-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-3-d55e308663\" (UID: \"3ae12576553ee962351d3159b9163c53\") " pod="kube-system/kube-scheduler-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.531493 kubelet[2251]: I0813 00:15:53.531194 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c9a645a1050db186575c0d1ae4d3ca3-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-3-d55e308663\" (UID: \"2c9a645a1050db186575c0d1ae4d3ca3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.531493 kubelet[2251]: I0813 00:15:53.531245 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c9a645a1050db186575c0d1ae4d3ca3-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-3-d55e308663\" (UID: \"2c9a645a1050db186575c0d1ae4d3ca3\") " 
pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.531493 kubelet[2251]: I0813 00:15:53.531302 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.531493 kubelet[2251]: I0813 00:15:53.531334 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c9a645a1050db186575c0d1ae4d3ca3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-3-d55e308663\" (UID: \"2c9a645a1050db186575c0d1ae4d3ca3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.531900 kubelet[2251]: I0813 00:15:53.531730 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.619808 kubelet[2251]: I0813 00:15:53.619700 2251 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.620176 kubelet[2251]: E0813 00:15:53.620125 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.112.232:6443/api/v1/nodes\": dial tcp 159.69.112.232:6443: connect: connection refused" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:53.678152 containerd[1487]: time="2025-08-13T00:15:53.678045382Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-3-d55e308663,Uid:b050c8a7cb39ed9e39e73dce340511f5,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:53.693687 containerd[1487]: time="2025-08-13T00:15:53.693540903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-3-d55e308663,Uid:3ae12576553ee962351d3159b9163c53,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:53.700005 containerd[1487]: time="2025-08-13T00:15:53.699604668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-3-d55e308663,Uid:2c9a645a1050db186575c0d1ae4d3ca3,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:53.830193 kubelet[2251]: E0813 00:15:53.830131 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.112.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-3-d55e308663?timeout=10s\": dial tcp 159.69.112.232:6443: connect: connection refused" interval="800ms" Aug 13 00:15:54.023430 kubelet[2251]: I0813 00:15:54.023307 2251 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:54.023845 kubelet[2251]: E0813 00:15:54.023677 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.112.232:6443/api/v1/nodes\": dial tcp 159.69.112.232:6443: connect: connection refused" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:54.028987 kubelet[2251]: W0813 00:15:54.028884 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.112.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-3-d55e308663&limit=500&resourceVersion=0": dial tcp 159.69.112.232:6443: connect: connection refused Aug 13 00:15:54.028987 kubelet[2251]: E0813 00:15:54.028958 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://159.69.112.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-3-d55e308663&limit=500&resourceVersion=0\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:54.077585 kubelet[2251]: W0813 00:15:54.077458 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.112.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.112.232:6443: connect: connection refused Aug 13 00:15:54.077585 kubelet[2251]: E0813 00:15:54.077522 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.69.112.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:54.206743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751617733.mount: Deactivated successfully. 
Aug 13 00:15:54.214335 containerd[1487]: time="2025-08-13T00:15:54.212987125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.215625 containerd[1487]: time="2025-08-13T00:15:54.215569697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Aug 13 00:15:54.219490 containerd[1487]: time="2025-08-13T00:15:54.219409254Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.221267 containerd[1487]: time="2025-08-13T00:15:54.221213371Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.222175 containerd[1487]: time="2025-08-13T00:15:54.222140189Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:15:54.224441 containerd[1487]: time="2025-08-13T00:15:54.224400555Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.225872 containerd[1487]: time="2025-08-13T00:15:54.225835504Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:15:54.227830 containerd[1487]: time="2025-08-13T00:15:54.227775463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.231313 
containerd[1487]: time="2025-08-13T00:15:54.231249853Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.095589ms" Aug 13 00:15:54.233929 containerd[1487]: time="2025-08-13T00:15:54.233880866Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 540.249001ms" Aug 13 00:15:54.237296 containerd[1487]: time="2025-08-13T00:15:54.237210893Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.498422ms" Aug 13 00:15:54.365053 containerd[1487]: time="2025-08-13T00:15:54.364669498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:54.365053 containerd[1487]: time="2025-08-13T00:15:54.364742299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:54.365053 containerd[1487]: time="2025-08-13T00:15:54.364753059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.365053 containerd[1487]: time="2025-08-13T00:15:54.364837301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.375351 containerd[1487]: time="2025-08-13T00:15:54.374030046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:54.375351 containerd[1487]: time="2025-08-13T00:15:54.374101968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:54.375351 containerd[1487]: time="2025-08-13T00:15:54.374114608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.375351 containerd[1487]: time="2025-08-13T00:15:54.374199890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.379975 containerd[1487]: time="2025-08-13T00:15:54.379870204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:54.379975 containerd[1487]: time="2025-08-13T00:15:54.379924285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:54.379975 containerd[1487]: time="2025-08-13T00:15:54.379935525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.380631 containerd[1487]: time="2025-08-13T00:15:54.380210251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.405208 systemd[1]: Started cri-containerd-623a7927938505b89839762d1ebbe3ecdb5b863c2bacc1e6a73367ffac014e73.scope - libcontainer container 623a7927938505b89839762d1ebbe3ecdb5b863c2bacc1e6a73367ffac014e73. 
Aug 13 00:15:54.406443 systemd[1]: Started cri-containerd-9bcce9a3e357348884c5095e7943c8816b96f31d9871a1e9753f46cb402499de.scope - libcontainer container 9bcce9a3e357348884c5095e7943c8816b96f31d9871a1e9753f46cb402499de. Aug 13 00:15:54.421511 systemd[1]: Started cri-containerd-1a4922f1e268082888297ceef7dd2d24d3a80c9be625a11e77a612a366e72256.scope - libcontainer container 1a4922f1e268082888297ceef7dd2d24d3a80c9be625a11e77a612a366e72256. Aug 13 00:15:54.472241 containerd[1487]: time="2025-08-13T00:15:54.472189542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-3-d55e308663,Uid:3ae12576553ee962351d3159b9163c53,Namespace:kube-system,Attempt:0,} returns sandbox id \"623a7927938505b89839762d1ebbe3ecdb5b863c2bacc1e6a73367ffac014e73\"" Aug 13 00:15:54.479297 containerd[1487]: time="2025-08-13T00:15:54.478326425Z" level=info msg="CreateContainer within sandbox \"623a7927938505b89839762d1ebbe3ecdb5b863c2bacc1e6a73367ffac014e73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:15:54.479824 containerd[1487]: time="2025-08-13T00:15:54.479792295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-3-d55e308663,Uid:2c9a645a1050db186575c0d1ae4d3ca3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bcce9a3e357348884c5095e7943c8816b96f31d9871a1e9753f46cb402499de\"" Aug 13 00:15:54.483683 containerd[1487]: time="2025-08-13T00:15:54.483649852Z" level=info msg="CreateContainer within sandbox \"9bcce9a3e357348884c5095e7943c8816b96f31d9871a1e9753f46cb402499de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:15:54.489454 containerd[1487]: time="2025-08-13T00:15:54.489363047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-3-d55e308663,Uid:b050c8a7cb39ed9e39e73dce340511f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a4922f1e268082888297ceef7dd2d24d3a80c9be625a11e77a612a366e72256\"" Aug 13 
00:15:54.495719 containerd[1487]: time="2025-08-13T00:15:54.495675174Z" level=info msg="CreateContainer within sandbox \"1a4922f1e268082888297ceef7dd2d24d3a80c9be625a11e77a612a366e72256\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:15:54.501193 containerd[1487]: time="2025-08-13T00:15:54.501134044Z" level=info msg="CreateContainer within sandbox \"623a7927938505b89839762d1ebbe3ecdb5b863c2bacc1e6a73367ffac014e73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c39bbd7e628c57fd5a2c07110f231c8443f07b2e982dac3af13f8f863e3cdb8c\"" Aug 13 00:15:54.502756 containerd[1487]: time="2025-08-13T00:15:54.502474951Z" level=info msg="StartContainer for \"c39bbd7e628c57fd5a2c07110f231c8443f07b2e982dac3af13f8f863e3cdb8c\"" Aug 13 00:15:54.517340 containerd[1487]: time="2025-08-13T00:15:54.517136646Z" level=info msg="CreateContainer within sandbox \"9bcce9a3e357348884c5095e7943c8816b96f31d9871a1e9753f46cb402499de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"038dc5c89450e1a28d2946574b5e73e290a2244bfefdf49fcbe87c571750cc2c\"" Aug 13 00:15:54.518787 containerd[1487]: time="2025-08-13T00:15:54.518604556Z" level=info msg="StartContainer for \"038dc5c89450e1a28d2946574b5e73e290a2244bfefdf49fcbe87c571750cc2c\"" Aug 13 00:15:54.520138 containerd[1487]: time="2025-08-13T00:15:54.520088106Z" level=info msg="CreateContainer within sandbox \"1a4922f1e268082888297ceef7dd2d24d3a80c9be625a11e77a612a366e72256\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55d935234ebd8034fe23575b2e987b262eeb4964d61b8aeaa399381676a509ac\"" Aug 13 00:15:54.521040 containerd[1487]: time="2025-08-13T00:15:54.520912922Z" level=info msg="StartContainer for \"55d935234ebd8034fe23575b2e987b262eeb4964d61b8aeaa399381676a509ac\"" Aug 13 00:15:54.536742 systemd[1]: Started cri-containerd-c39bbd7e628c57fd5a2c07110f231c8443f07b2e982dac3af13f8f863e3cdb8c.scope - libcontainer 
container c39bbd7e628c57fd5a2c07110f231c8443f07b2e982dac3af13f8f863e3cdb8c. Aug 13 00:15:54.563939 systemd[1]: Started cri-containerd-55d935234ebd8034fe23575b2e987b262eeb4964d61b8aeaa399381676a509ac.scope - libcontainer container 55d935234ebd8034fe23575b2e987b262eeb4964d61b8aeaa399381676a509ac. Aug 13 00:15:54.574483 systemd[1]: Started cri-containerd-038dc5c89450e1a28d2946574b5e73e290a2244bfefdf49fcbe87c571750cc2c.scope - libcontainer container 038dc5c89450e1a28d2946574b5e73e290a2244bfefdf49fcbe87c571750cc2c. Aug 13 00:15:54.608483 kubelet[2251]: W0813 00:15:54.608410 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.69.112.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.69.112.232:6443: connect: connection refused Aug 13 00:15:54.608613 kubelet[2251]: E0813 00:15:54.608487 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.69.112.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:54.609264 containerd[1487]: time="2025-08-13T00:15:54.609210139Z" level=info msg="StartContainer for \"c39bbd7e628c57fd5a2c07110f231c8443f07b2e982dac3af13f8f863e3cdb8c\" returns successfully" Aug 13 00:15:54.632383 kubelet[2251]: E0813 00:15:54.631192 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.112.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-3-d55e308663?timeout=10s\": dial tcp 159.69.112.232:6443: connect: connection refused" interval="1.6s" Aug 13 00:15:54.650337 containerd[1487]: time="2025-08-13T00:15:54.648662573Z" level=info msg="StartContainer for \"55d935234ebd8034fe23575b2e987b262eeb4964d61b8aeaa399381676a509ac\" returns 
successfully" Aug 13 00:15:54.661341 containerd[1487]: time="2025-08-13T00:15:54.661181585Z" level=info msg="StartContainer for \"038dc5c89450e1a28d2946574b5e73e290a2244bfefdf49fcbe87c571750cc2c\" returns successfully" Aug 13 00:15:54.773831 kubelet[2251]: W0813 00:15:54.773717 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.69.112.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.112.232:6443: connect: connection refused Aug 13 00:15:54.773831 kubelet[2251]: E0813 00:15:54.773798 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.69.112.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.69.112.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:54.827047 kubelet[2251]: I0813 00:15:54.827000 2251 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:55.279726 kubelet[2251]: E0813 00:15:55.278839 2251 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:55.286377 kubelet[2251]: E0813 00:15:55.286346 2251 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:55.288306 kubelet[2251]: E0813 00:15:55.288134 2251 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.290946 kubelet[2251]: E0813 00:15:56.290734 2251 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.292344 kubelet[2251]: E0813 00:15:56.291649 2251 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.333878 kubelet[2251]: E0813 00:15:56.333828 2251 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-3-d55e308663\" not found" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.733143 kubelet[2251]: I0813 00:15:56.733012 2251 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.734793 kubelet[2251]: E0813 00:15:56.733797 2251 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-3-d55e308663\": node \"ci-4081-3-5-3-d55e308663\" not found" Aug 13 00:15:56.754367 kubelet[2251]: E0813 00:15:56.754313 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-3-d55e308663\" not found" Aug 13 00:15:56.854479 kubelet[2251]: E0813 00:15:56.854429 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-3-d55e308663\" not found" Aug 13 00:15:56.926637 kubelet[2251]: I0813 00:15:56.926344 2251 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.939923 kubelet[2251]: E0813 00:15:56.939795 2251 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-3-d55e308663\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.939923 kubelet[2251]: I0813 00:15:56.939845 2251 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.942808 
kubelet[2251]: E0813 00:15:56.942546 2251 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-3-d55e308663\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.942808 kubelet[2251]: I0813 00:15:56.942588 2251 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:56.945503 kubelet[2251]: E0813 00:15:56.945441 2251 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:57.211159 kubelet[2251]: I0813 00:15:57.209116 2251 apiserver.go:52] "Watching apiserver" Aug 13 00:15:57.231829 kubelet[2251]: I0813 00:15:57.231765 2251 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:15:58.995171 systemd[1]: Reloading requested from client PID 2530 ('systemctl') (unit session-7.scope)... Aug 13 00:15:58.995790 systemd[1]: Reloading... Aug 13 00:15:59.125318 zram_generator::config[2582]: No configuration found. Aug 13 00:15:59.219367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:15:59.320632 systemd[1]: Reloading finished in 324 ms. Aug 13 00:15:59.360085 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:59.372830 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:15:59.373176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:59.373271 systemd[1]: kubelet.service: Consumed 1.146s CPU time, 132.0M memory peak, 0B memory swap peak. 
Aug 13 00:15:59.387887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:59.518599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:59.518704 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:15:59.582750 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:15:59.582750 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:15:59.582750 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:15:59.582750 kubelet[2615]: I0813 00:15:59.579964 2615 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:15:59.587329 kubelet[2615]: I0813 00:15:59.587003 2615 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:15:59.587329 kubelet[2615]: I0813 00:15:59.587033 2615 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:15:59.587540 kubelet[2615]: I0813 00:15:59.587442 2615 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:15:59.589044 kubelet[2615]: I0813 00:15:59.588958 2615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 00:15:59.591669 kubelet[2615]: I0813 00:15:59.591622 2615 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:15:59.597429 kubelet[2615]: E0813 00:15:59.597377 2615 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:15:59.597429 kubelet[2615]: I0813 00:15:59.597418 2615 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:15:59.601060 kubelet[2615]: I0813 00:15:59.601014 2615 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:15:59.601424 kubelet[2615]: I0813 00:15:59.601360 2615 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:15:59.603789 kubelet[2615]: I0813 00:15:59.601416 2615 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-5-3-d55e308663","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:15:59.603789 kubelet[2615]: I0813 00:15:59.601798 2615 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:15:59.603789 kubelet[2615]: I0813 00:15:59.601812 2615 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:15:59.603789 kubelet[2615]: I0813 00:15:59.601878 2615 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:59.603789 kubelet[2615]: I0813 00:15:59.602054 2615 
kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:15:59.604054 kubelet[2615]: I0813 00:15:59.602070 2615 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:15:59.604054 kubelet[2615]: I0813 00:15:59.602090 2615 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:15:59.604054 kubelet[2615]: I0813 00:15:59.602102 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:15:59.607480 kubelet[2615]: I0813 00:15:59.607456 2615 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:15:59.608954 kubelet[2615]: I0813 00:15:59.608930 2615 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:15:59.613025 kubelet[2615]: I0813 00:15:59.612999 2615 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:15:59.613199 kubelet[2615]: I0813 00:15:59.613048 2615 server.go:1287] "Started kubelet" Aug 13 00:15:59.618834 kubelet[2615]: I0813 00:15:59.616464 2615 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:15:59.620295 kubelet[2615]: I0813 00:15:59.619597 2615 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:15:59.622778 kubelet[2615]: I0813 00:15:59.621503 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:15:59.623116 kubelet[2615]: I0813 00:15:59.623096 2615 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:15:59.624363 kubelet[2615]: I0813 00:15:59.623855 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:15:59.636159 kubelet[2615]: I0813 00:15:59.636115 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:15:59.636704 kubelet[2615]: 
I0813 00:15:59.636680 2615 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:15:59.636969 kubelet[2615]: E0813 00:15:59.636924 2615 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-3-d55e308663\" not found" Aug 13 00:15:59.640089 kubelet[2615]: I0813 00:15:59.639287 2615 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:15:59.640089 kubelet[2615]: I0813 00:15:59.639452 2615 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:15:59.644078 kubelet[2615]: I0813 00:15:59.644038 2615 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:15:59.644201 kubelet[2615]: I0813 00:15:59.644177 2615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:15:59.650650 kubelet[2615]: I0813 00:15:59.650614 2615 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:15:59.654571 kubelet[2615]: E0813 00:15:59.654489 2615 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:15:59.656380 kubelet[2615]: I0813 00:15:59.654966 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:15:59.664255 kubelet[2615]: I0813 00:15:59.664208 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:15:59.664255 kubelet[2615]: I0813 00:15:59.664257 2615 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:15:59.664776 kubelet[2615]: I0813 00:15:59.664297 2615 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:15:59.664776 kubelet[2615]: I0813 00:15:59.664305 2615 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:15:59.665527 kubelet[2615]: E0813 00:15:59.665476 2615 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:15:59.708568 kubelet[2615]: I0813 00:15:59.708476 2615 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:15:59.708568 kubelet[2615]: I0813 00:15:59.708493 2615 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:15:59.708568 kubelet[2615]: I0813 00:15:59.708515 2615 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:59.708798 kubelet[2615]: I0813 00:15:59.708699 2615 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:15:59.708798 kubelet[2615]: I0813 00:15:59.708713 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:15:59.708798 kubelet[2615]: I0813 00:15:59.708731 2615 policy_none.go:49] "None policy: Start" Aug 13 00:15:59.708798 kubelet[2615]: I0813 00:15:59.708742 2615 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:15:59.708798 kubelet[2615]: I0813 00:15:59.708752 2615 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:15:59.708959 kubelet[2615]: I0813 00:15:59.708844 2615 state_mem.go:75] "Updated machine memory state" Aug 13 00:15:59.714844 kubelet[2615]: I0813 00:15:59.714585 2615 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:15:59.715215 kubelet[2615]: I0813 00:15:59.714877 2615 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:15:59.715215 kubelet[2615]: I0813 00:15:59.714896 2615 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:15:59.715345 kubelet[2615]: I0813 00:15:59.715253 2615 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:15:59.718787 kubelet[2615]: E0813 00:15:59.718685 2615 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:15:59.769309 kubelet[2615]: I0813 00:15:59.767055 2615 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.769309 kubelet[2615]: I0813 00:15:59.767145 2615 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.769791 kubelet[2615]: I0813 00:15:59.769762 2615 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.820499 kubelet[2615]: I0813 00:15:59.820445 2615 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.832314 kubelet[2615]: I0813 00:15:59.831752 2615 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.832314 kubelet[2615]: I0813 00:15:59.831840 2615 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840041 kubelet[2615]: I0813 00:15:59.839718 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ae12576553ee962351d3159b9163c53-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-3-d55e308663\" (UID: \"3ae12576553ee962351d3159b9163c53\") " pod="kube-system/kube-scheduler-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840041 kubelet[2615]: I0813 00:15:59.839760 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c9a645a1050db186575c0d1ae4d3ca3-ca-certs\") pod 
\"kube-apiserver-ci-4081-3-5-3-d55e308663\" (UID: \"2c9a645a1050db186575c0d1ae4d3ca3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840041 kubelet[2615]: I0813 00:15:59.839780 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c9a645a1050db186575c0d1ae4d3ca3-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-3-d55e308663\" (UID: \"2c9a645a1050db186575c0d1ae4d3ca3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840041 kubelet[2615]: I0813 00:15:59.839797 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c9a645a1050db186575c0d1ae4d3ca3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-3-d55e308663\" (UID: \"2c9a645a1050db186575c0d1ae4d3ca3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840041 kubelet[2615]: I0813 00:15:59.839823 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840321 kubelet[2615]: I0813 00:15:59.839856 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840321 kubelet[2615]: I0813 00:15:59.839887 2615 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840321 kubelet[2615]: I0813 00:15:59.839905 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:15:59.840321 kubelet[2615]: I0813 00:15:59.839923 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b050c8a7cb39ed9e39e73dce340511f5-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-3-d55e308663\" (UID: \"b050c8a7cb39ed9e39e73dce340511f5\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" Aug 13 00:16:00.612321 kubelet[2615]: I0813 00:16:00.611843 2615 apiserver.go:52] "Watching apiserver" Aug 13 00:16:00.639928 kubelet[2615]: I0813 00:16:00.639863 2615 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:16:00.688468 kubelet[2615]: I0813 00:16:00.688418 2615 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:16:00.689040 kubelet[2615]: I0813 00:16:00.688842 2615 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-3-d55e308663" Aug 13 00:16:00.703028 kubelet[2615]: E0813 00:16:00.702461 2615 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4081-3-5-3-d55e308663\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-3-d55e308663" Aug 13 00:16:00.705980 kubelet[2615]: E0813 00:16:00.705940 2615 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-3-d55e308663\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" Aug 13 00:16:00.727712 kubelet[2615]: I0813 00:16:00.727435 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-3-d55e308663" podStartSLOduration=1.727401242 podStartE2EDuration="1.727401242s" podCreationTimestamp="2025-08-13 00:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:00.727030956 +0000 UTC m=+1.201544429" watchObservedRunningTime="2025-08-13 00:16:00.727401242 +0000 UTC m=+1.201914715" Aug 13 00:16:00.746553 kubelet[2615]: I0813 00:16:00.746442 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-3-d55e308663" podStartSLOduration=1.746412329 podStartE2EDuration="1.746412329s" podCreationTimestamp="2025-08-13 00:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:00.742122295 +0000 UTC m=+1.216635808" watchObservedRunningTime="2025-08-13 00:16:00.746412329 +0000 UTC m=+1.220925842" Aug 13 00:16:00.798191 kubelet[2615]: I0813 00:16:00.797164 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-3-d55e308663" podStartSLOduration=1.797146881 podStartE2EDuration="1.797146881s" podCreationTimestamp="2025-08-13 00:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:00.764767205 +0000 UTC 
m=+1.239280718" watchObservedRunningTime="2025-08-13 00:16:00.797146881 +0000 UTC m=+1.271660394" Aug 13 00:16:05.226412 systemd[1]: Created slice kubepods-besteffort-pod6958a3a4_b88d_48f7_a9b4_59d3567d0c8d.slice - libcontainer container kubepods-besteffort-pod6958a3a4_b88d_48f7_a9b4_59d3567d0c8d.slice. Aug 13 00:16:05.272069 kubelet[2615]: I0813 00:16:05.271955 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6958a3a4-b88d-48f7-a9b4-59d3567d0c8d-kube-proxy\") pod \"kube-proxy-pg75m\" (UID: \"6958a3a4-b88d-48f7-a9b4-59d3567d0c8d\") " pod="kube-system/kube-proxy-pg75m" Aug 13 00:16:05.272069 kubelet[2615]: I0813 00:16:05.272031 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6958a3a4-b88d-48f7-a9b4-59d3567d0c8d-xtables-lock\") pod \"kube-proxy-pg75m\" (UID: \"6958a3a4-b88d-48f7-a9b4-59d3567d0c8d\") " pod="kube-system/kube-proxy-pg75m" Aug 13 00:16:05.272069 kubelet[2615]: I0813 00:16:05.272062 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6958a3a4-b88d-48f7-a9b4-59d3567d0c8d-lib-modules\") pod \"kube-proxy-pg75m\" (UID: \"6958a3a4-b88d-48f7-a9b4-59d3567d0c8d\") " pod="kube-system/kube-proxy-pg75m" Aug 13 00:16:05.272716 kubelet[2615]: I0813 00:16:05.272094 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85ntw\" (UniqueName: \"kubernetes.io/projected/6958a3a4-b88d-48f7-a9b4-59d3567d0c8d-kube-api-access-85ntw\") pod \"kube-proxy-pg75m\" (UID: \"6958a3a4-b88d-48f7-a9b4-59d3567d0c8d\") " pod="kube-system/kube-proxy-pg75m" Aug 13 00:16:05.277065 kubelet[2615]: I0813 00:16:05.276898 2615 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" 
Aug 13 00:16:05.277867 containerd[1487]: time="2025-08-13T00:16:05.277762659Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:16:05.278802 kubelet[2615]: I0813 00:16:05.278087 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:16:05.384594 kubelet[2615]: E0813 00:16:05.384536 2615 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 00:16:05.384895 kubelet[2615]: E0813 00:16:05.384879 2615 projected.go:194] Error preparing data for projected volume kube-api-access-85ntw for pod kube-system/kube-proxy-pg75m: configmap "kube-root-ca.crt" not found Aug 13 00:16:05.385093 kubelet[2615]: E0813 00:16:05.385064 2615 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6958a3a4-b88d-48f7-a9b4-59d3567d0c8d-kube-api-access-85ntw podName:6958a3a4-b88d-48f7-a9b4-59d3567d0c8d nodeName:}" failed. No retries permitted until 2025-08-13 00:16:05.885038363 +0000 UTC m=+6.359551876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-85ntw" (UniqueName: "kubernetes.io/projected/6958a3a4-b88d-48f7-a9b4-59d3567d0c8d-kube-api-access-85ntw") pod "kube-proxy-pg75m" (UID: "6958a3a4-b88d-48f7-a9b4-59d3567d0c8d") : configmap "kube-root-ca.crt" not found Aug 13 00:16:06.138729 containerd[1487]: time="2025-08-13T00:16:06.138563196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pg75m,Uid:6958a3a4-b88d-48f7-a9b4-59d3567d0c8d,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:06.178393 containerd[1487]: time="2025-08-13T00:16:06.177879056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:06.178393 containerd[1487]: time="2025-08-13T00:16:06.177943657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:06.178393 containerd[1487]: time="2025-08-13T00:16:06.177960537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:06.178393 containerd[1487]: time="2025-08-13T00:16:06.178043099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:06.210518 systemd[1]: Started cri-containerd-88d50707058836a0377bd0f1ada562bc5db8b44d191b77dac48fe08be0993277.scope - libcontainer container 88d50707058836a0377bd0f1ada562bc5db8b44d191b77dac48fe08be0993277. Aug 13 00:16:06.244782 containerd[1487]: time="2025-08-13T00:16:06.244719523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pg75m,Uid:6958a3a4-b88d-48f7-a9b4-59d3567d0c8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"88d50707058836a0377bd0f1ada562bc5db8b44d191b77dac48fe08be0993277\"" Aug 13 00:16:06.250853 containerd[1487]: time="2025-08-13T00:16:06.250786533Z" level=info msg="CreateContainer within sandbox \"88d50707058836a0377bd0f1ada562bc5db8b44d191b77dac48fe08be0993277\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:16:06.271869 containerd[1487]: time="2025-08-13T00:16:06.271785603Z" level=info msg="CreateContainer within sandbox \"88d50707058836a0377bd0f1ada562bc5db8b44d191b77dac48fe08be0993277\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5a1b29f363b47e917f32ec0cf53be5660f31da3356db3296405aca9f3a0643c1\"" Aug 13 00:16:06.274308 containerd[1487]: time="2025-08-13T00:16:06.273772312Z" level=info msg="StartContainer for \"5a1b29f363b47e917f32ec0cf53be5660f31da3356db3296405aca9f3a0643c1\"" Aug 13 00:16:06.325471 systemd[1]: Started cri-containerd-5a1b29f363b47e917f32ec0cf53be5660f31da3356db3296405aca9f3a0643c1.scope - libcontainer container 
5a1b29f363b47e917f32ec0cf53be5660f31da3356db3296405aca9f3a0643c1. Aug 13 00:16:06.361081 systemd[1]: Created slice kubepods-besteffort-pod6454b365_f92f_4154_88b0_2758cfa80c05.slice - libcontainer container kubepods-besteffort-pod6454b365_f92f_4154_88b0_2758cfa80c05.slice. Aug 13 00:16:06.381471 kubelet[2615]: I0813 00:16:06.381410 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8pwq\" (UniqueName: \"kubernetes.io/projected/6454b365-f92f-4154-88b0-2758cfa80c05-kube-api-access-t8pwq\") pod \"tigera-operator-747864d56d-d2ns2\" (UID: \"6454b365-f92f-4154-88b0-2758cfa80c05\") " pod="tigera-operator/tigera-operator-747864d56d-d2ns2" Aug 13 00:16:06.381471 kubelet[2615]: I0813 00:16:06.381463 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6454b365-f92f-4154-88b0-2758cfa80c05-var-lib-calico\") pod \"tigera-operator-747864d56d-d2ns2\" (UID: \"6454b365-f92f-4154-88b0-2758cfa80c05\") " pod="tigera-operator/tigera-operator-747864d56d-d2ns2" Aug 13 00:16:06.399696 containerd[1487]: time="2025-08-13T00:16:06.397757862Z" level=info msg="StartContainer for \"5a1b29f363b47e917f32ec0cf53be5660f31da3356db3296405aca9f3a0643c1\" returns successfully" Aug 13 00:16:06.668815 containerd[1487]: time="2025-08-13T00:16:06.668482779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-d2ns2,Uid:6454b365-f92f-4154-88b0-2758cfa80c05,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:16:06.698119 containerd[1487]: time="2025-08-13T00:16:06.697652489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:06.698119 containerd[1487]: time="2025-08-13T00:16:06.698034815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:06.698119 containerd[1487]: time="2025-08-13T00:16:06.698050135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:06.698706 containerd[1487]: time="2025-08-13T00:16:06.698600543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:06.723305 kubelet[2615]: I0813 00:16:06.722849 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pg75m" podStartSLOduration=1.722827821 podStartE2EDuration="1.722827821s" podCreationTimestamp="2025-08-13 00:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:06.721025074 +0000 UTC m=+7.195538627" watchObservedRunningTime="2025-08-13 00:16:06.722827821 +0000 UTC m=+7.197341334" Aug 13 00:16:06.736559 systemd[1]: Started cri-containerd-f3486d6fae45266e261f0eb154e90d03fd650dee256c2d051ef5245f08a922ab.scope - libcontainer container f3486d6fae45266e261f0eb154e90d03fd650dee256c2d051ef5245f08a922ab. Aug 13 00:16:06.793537 containerd[1487]: time="2025-08-13T00:16:06.793489144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-d2ns2,Uid:6454b365-f92f-4154-88b0-2758cfa80c05,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f3486d6fae45266e261f0eb154e90d03fd650dee256c2d051ef5245f08a922ab\"" Aug 13 00:16:06.798068 containerd[1487]: time="2025-08-13T00:16:06.797669006Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:16:08.677125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285792167.mount: Deactivated successfully. 
Aug 13 00:16:09.060752 containerd[1487]: time="2025-08-13T00:16:09.060567836Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:09.062329 containerd[1487]: time="2025-08-13T00:16:09.062242899Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Aug 13 00:16:09.063947 containerd[1487]: time="2025-08-13T00:16:09.063870922Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:09.067681 containerd[1487]: time="2025-08-13T00:16:09.067626813Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:09.068474 containerd[1487]: time="2025-08-13T00:16:09.068321183Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.270588176s"
Aug 13 00:16:09.068474 containerd[1487]: time="2025-08-13T00:16:09.068365863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Aug 13 00:16:09.073173 containerd[1487]: time="2025-08-13T00:16:09.072620602Z" level=info msg="CreateContainer within sandbox \"f3486d6fae45266e261f0eb154e90d03fd650dee256c2d051ef5245f08a922ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 00:16:09.091066 containerd[1487]: time="2025-08-13T00:16:09.090894572Z" level=info msg="CreateContainer within sandbox \"f3486d6fae45266e261f0eb154e90d03fd650dee256c2d051ef5245f08a922ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7a1a242e2b9f952d6441b5012b439985de0bfde062b04c90bbb012baa411c44b\""
Aug 13 00:16:09.093384 containerd[1487]: time="2025-08-13T00:16:09.092145589Z" level=info msg="StartContainer for \"7a1a242e2b9f952d6441b5012b439985de0bfde062b04c90bbb012baa411c44b\""
Aug 13 00:16:09.131591 systemd[1]: Started cri-containerd-7a1a242e2b9f952d6441b5012b439985de0bfde062b04c90bbb012baa411c44b.scope - libcontainer container 7a1a242e2b9f952d6441b5012b439985de0bfde062b04c90bbb012baa411c44b.
Aug 13 00:16:09.163571 containerd[1487]: time="2025-08-13T00:16:09.163416327Z" level=info msg="StartContainer for \"7a1a242e2b9f952d6441b5012b439985de0bfde062b04c90bbb012baa411c44b\" returns successfully"
Aug 13 00:16:11.111740 kubelet[2615]: I0813 00:16:11.111560 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-d2ns2" podStartSLOduration=2.83726397 podStartE2EDuration="5.111540639s" podCreationTimestamp="2025-08-13 00:16:06 +0000 UTC" firstStartedPulling="2025-08-13 00:16:06.795591615 +0000 UTC m=+7.270105128" lastFinishedPulling="2025-08-13 00:16:09.069868284 +0000 UTC m=+9.544381797" observedRunningTime="2025-08-13 00:16:09.73547885 +0000 UTC m=+10.209992363" watchObservedRunningTime="2025-08-13 00:16:11.111540639 +0000 UTC m=+11.586054112"
Aug 13 00:16:15.577122 sudo[1774]: pam_unix(sudo:session): session closed for user root
Aug 13 00:16:15.748519 sshd[1771]: pam_unix(sshd:session): session closed for user core
Aug 13 00:16:15.753215 systemd[1]: sshd@6-159.69.112.232:22-139.178.89.65:60470.service: Deactivated successfully.
Aug 13 00:16:15.756769 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 00:16:15.757569 systemd[1]: session-7.scope: Consumed 6.702s CPU time, 154.9M memory peak, 0B memory swap peak.
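The "Observed pod startup duration" entries above can be reproduced from the timestamps they carry: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the smaller podStartSLOduration for tigera-operator suggests the tracker subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). A sketch under that assumption, using the tigera-operator values from the log (Python keeps only microseconds, so the nanosecond tails are truncated):

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # Log timestamps look like "2025-08-13 00:16:09.069868284 +0000 UTC";
    # truncate the fractional part to 6 digits for datetime.
    date, clock = ts.split(" ")[0], ts.split(" ")[1]
    if "." in clock:
        whole, frac = clock.split(".")
        clock = f"{whole}.{frac[:6]}"
    return datetime.fromisoformat(f"{date}T{clock}+00:00")

created   = parse("2025-08-13 00:16:06 +0000 UTC")
watch_run = parse("2025-08-13 00:16:11.111540639 +0000 UTC")
pull_beg  = parse("2025-08-13 00:16:06.795591615 +0000 UTC")
pull_end  = parse("2025-08-13 00:16:09.069868284 +0000 UTC")

e2e = (watch_run - created).total_seconds()
slo = e2e - (pull_end - pull_beg).total_seconds()
print(e2e, slo)  # e2e ≈ 5.11154 s, slo ≈ 2.83726 s, matching the log line
```

Note that for kube-proxy earlier in the log, firstStartedPulling/lastFinishedPulling are the zero time (image already present), so podStartSLOduration equals podStartE2EDuration.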
Aug 13 00:16:15.759914 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit.
Aug 13 00:16:15.762465 systemd-logind[1467]: Removed session 7.
Aug 13 00:16:25.619489 systemd[1]: Created slice kubepods-besteffort-podf8086f31_a973_4b8d_a0ca_4aec26e80037.slice - libcontainer container kubepods-besteffort-podf8086f31_a973_4b8d_a0ca_4aec26e80037.slice.
Aug 13 00:16:25.708789 kubelet[2615]: I0813 00:16:25.708727 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f8086f31-a973-4b8d-a0ca-4aec26e80037-typha-certs\") pod \"calico-typha-6dc74bb67d-spxtt\" (UID: \"f8086f31-a973-4b8d-a0ca-4aec26e80037\") " pod="calico-system/calico-typha-6dc74bb67d-spxtt"
Aug 13 00:16:25.708789 kubelet[2615]: I0813 00:16:25.708779 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8086f31-a973-4b8d-a0ca-4aec26e80037-tigera-ca-bundle\") pod \"calico-typha-6dc74bb67d-spxtt\" (UID: \"f8086f31-a973-4b8d-a0ca-4aec26e80037\") " pod="calico-system/calico-typha-6dc74bb67d-spxtt"
Aug 13 00:16:25.708789 kubelet[2615]: I0813 00:16:25.708805 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7hgd\" (UniqueName: \"kubernetes.io/projected/f8086f31-a973-4b8d-a0ca-4aec26e80037-kube-api-access-k7hgd\") pod \"calico-typha-6dc74bb67d-spxtt\" (UID: \"f8086f31-a973-4b8d-a0ca-4aec26e80037\") " pod="calico-system/calico-typha-6dc74bb67d-spxtt"
Aug 13 00:16:25.886185 systemd[1]: Created slice kubepods-besteffort-pod5ece67d1_c489_4aa1_86a7_1a7319a790d7.slice - libcontainer container kubepods-besteffort-pod5ece67d1_c489_4aa1_86a7_1a7319a790d7.slice.
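Each "VerifyControllerAttachedVolume started" line above encodes a volume name, its plugin-qualified UniqueName, and the owning pod, with klog escaping the inner quotes as literal `\"` sequences in the journal text. A hedged regex sketch for pulling those fields out (the sample line below is abbreviated from the typha-certs entry above):

```python
import re

# Matches the escaped-quote fields of a kubelet reconciler_common.go line.
PATTERN = re.compile(
    r'started for volume \\"(?P<volume>[^"\\]+)\\" '
    r'\(UniqueName: \\"(?P<unique>[^"\\]+)\\"\) '
    r'pod \\"(?P<pod>[^"\\]+)\\"'
)

line = (
    'operationExecutor.VerifyControllerAttachedVolume started for volume '
    '\\"typha-certs\\" (UniqueName: \\"kubernetes.io/secret/'
    'f8086f31-a973-4b8d-a0ca-4aec26e80037-typha-certs\\") '
    'pod \\"calico-typha-6dc74bb67d-spxtt\\"'
)
m = PATTERN.search(line)
print(m.group("volume"), m.group("pod"))
# typha-certs calico-typha-6dc74bb67d-spxtt
```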
Aug 13 00:16:25.910181 kubelet[2615]: I0813 00:16:25.909872 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-flexvol-driver-host\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.910673 kubelet[2615]: I0813 00:16:25.910488 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-var-run-calico\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.911502 kubelet[2615]: I0813 00:16:25.910941 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-var-lib-calico\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.911502 kubelet[2615]: I0813 00:16:25.911020 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d8qw\" (UniqueName: \"kubernetes.io/projected/5ece67d1-c489-4aa1-86a7-1a7319a790d7-kube-api-access-9d8qw\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.912653 kubelet[2615]: I0813 00:16:25.912426 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-policysync\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.912653 kubelet[2615]: I0813 00:16:25.912469 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-cni-bin-dir\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.913121 kubelet[2615]: I0813 00:16:25.912922 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-cni-log-dir\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.913121 kubelet[2615]: I0813 00:16:25.912956 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-cni-net-dir\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.913121 kubelet[2615]: I0813 00:16:25.912976 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ece67d1-c489-4aa1-86a7-1a7319a790d7-node-certs\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.913121 kubelet[2615]: I0813 00:16:25.912993 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ece67d1-c489-4aa1-86a7-1a7319a790d7-tigera-ca-bundle\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.913121 kubelet[2615]: I0813 00:16:25.913013 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-lib-modules\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.913623 kubelet[2615]: I0813 00:16:25.913028 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ece67d1-c489-4aa1-86a7-1a7319a790d7-xtables-lock\") pod \"calico-node-49qbp\" (UID: \"5ece67d1-c489-4aa1-86a7-1a7319a790d7\") " pod="calico-system/calico-node-49qbp"
Aug 13 00:16:25.924528 containerd[1487]: time="2025-08-13T00:16:25.924472301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc74bb67d-spxtt,Uid:f8086f31-a973-4b8d-a0ca-4aec26e80037,Namespace:calico-system,Attempt:0,}"
Aug 13 00:16:25.960313 containerd[1487]: time="2025-08-13T00:16:25.959822357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:16:25.960313 containerd[1487]: time="2025-08-13T00:16:25.959912398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:16:25.960313 containerd[1487]: time="2025-08-13T00:16:25.959941078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:16:25.960313 containerd[1487]: time="2025-08-13T00:16:25.960067040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:16:25.997497 systemd[1]: Started cri-containerd-adab09c1309e805f4bc4f174658aaa389129c488af3090483365675439ea015e.scope - libcontainer container adab09c1309e805f4bc4f174658aaa389129c488af3090483365675439ea015e.
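The repeated kubelet errors that follow come from FlexVolume plugin probing: kubelet execs `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds init`, the binary is missing, so the call yields empty output, and unmarshalling `""` as JSON fails (kubelet is Go, and Go's `encoding/json` reports empty input as "unexpected end of JSON input"). A minimal Python analogue of that result-handling step, under the assumption that a FlexVolume driver is expected to print a JSON status object (the `Failure` dict shape here is illustrative, not kubelet's exact structure):

```python
import json

def parse_driver_output(output: str):
    """Mimic kubelet's DriverCall result handling: the driver must print
    a JSON status object; a missing executable produces empty output,
    which is not valid JSON."""
    try:
        return json.loads(output)
    except json.JSONDecodeError as exc:
        # Python reports "Expecting value..."; Go's encoding/json reports
        # the same condition as "unexpected end of JSON input".
        return {"status": "Failure",
                "message": f"failed to unmarshal output {output!r}: {exc}"}

print(parse_driver_output("")["status"])                      # Failure
print(parse_driver_output('{"status": "Success"}')["status"])  # Success
```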
Aug 13 00:16:26.015098 kubelet[2615]: E0813 00:16:26.014250 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qctbn" podUID="15e02000-307e-46ce-8cf9-fed7fd6d6dd8"
Aug 13 00:16:26.029779 kubelet[2615]: E0813 00:16:26.029720 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.030613 kubelet[2615]: W0813 00:16:26.030297 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.030613 kubelet[2615]: E0813 00:16:26.030333 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.052591 kubelet[2615]: E0813 00:16:26.052137 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.052591 kubelet[2615]: W0813 00:16:26.052166 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.052591 kubelet[2615]: E0813 00:16:26.052189 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.081836 containerd[1487]: time="2025-08-13T00:16:26.081772703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc74bb67d-spxtt,Uid:f8086f31-a973-4b8d-a0ca-4aec26e80037,Namespace:calico-system,Attempt:0,} returns sandbox id \"adab09c1309e805f4bc4f174658aaa389129c488af3090483365675439ea015e\""
Aug 13 00:16:26.086747 containerd[1487]: time="2025-08-13T00:16:26.086648308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 00:16:26.093290 kubelet[2615]: E0813 00:16:26.093112 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.093290 kubelet[2615]: W0813 00:16:26.093241 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.093290 kubelet[2615]: E0813 00:16:26.093263 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.095202 kubelet[2615]: E0813 00:16:26.094734 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.095782 kubelet[2615]: W0813 00:16:26.094755 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.095782 kubelet[2615]: E0813 00:16:26.095428 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.097779 kubelet[2615]: E0813 00:16:26.097340 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.097779 kubelet[2615]: W0813 00:16:26.097360 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.097779 kubelet[2615]: E0813 00:16:26.097380 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.098911 kubelet[2615]: E0813 00:16:26.098392 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.098911 kubelet[2615]: W0813 00:16:26.098410 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.098911 kubelet[2615]: E0813 00:16:26.098437 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.099420 kubelet[2615]: E0813 00:16:26.099290 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.099420 kubelet[2615]: W0813 00:16:26.099323 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.099965 kubelet[2615]: E0813 00:16:26.099403 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.100688 kubelet[2615]: E0813 00:16:26.100650 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.101189 kubelet[2615]: W0813 00:16:26.100947 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.101189 kubelet[2615]: E0813 00:16:26.100975 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.102232 kubelet[2615]: E0813 00:16:26.102214 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.102632 kubelet[2615]: W0813 00:16:26.102459 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.102632 kubelet[2615]: E0813 00:16:26.102491 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.103231 kubelet[2615]: E0813 00:16:26.103081 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.103231 kubelet[2615]: W0813 00:16:26.103097 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.103231 kubelet[2615]: E0813 00:16:26.103111 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.105611 kubelet[2615]: E0813 00:16:26.105197 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.105611 kubelet[2615]: W0813 00:16:26.105213 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.105611 kubelet[2615]: E0813 00:16:26.105229 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.106563 kubelet[2615]: E0813 00:16:26.106073 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.106563 kubelet[2615]: W0813 00:16:26.106089 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.106563 kubelet[2615]: E0813 00:16:26.106103 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.107038 kubelet[2615]: E0813 00:16:26.106914 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.107038 kubelet[2615]: W0813 00:16:26.106929 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.107038 kubelet[2615]: E0813 00:16:26.106943 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.108230 kubelet[2615]: E0813 00:16:26.107347 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.108230 kubelet[2615]: W0813 00:16:26.107361 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.108230 kubelet[2615]: E0813 00:16:26.107373 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.109478 kubelet[2615]: E0813 00:16:26.109399 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.109478 kubelet[2615]: W0813 00:16:26.109418 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.109478 kubelet[2615]: E0813 00:16:26.109432 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.109869 kubelet[2615]: E0813 00:16:26.109765 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.109869 kubelet[2615]: W0813 00:16:26.109779 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.109869 kubelet[2615]: E0813 00:16:26.109793 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.110095 kubelet[2615]: E0813 00:16:26.110033 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.110095 kubelet[2615]: W0813 00:16:26.110043 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.110095 kubelet[2615]: E0813 00:16:26.110053 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.111142 kubelet[2615]: E0813 00:16:26.110963 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.111142 kubelet[2615]: W0813 00:16:26.110977 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.111142 kubelet[2615]: E0813 00:16:26.111014 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.111812 kubelet[2615]: E0813 00:16:26.111740 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.111812 kubelet[2615]: W0813 00:16:26.111754 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.111812 kubelet[2615]: E0813 00:16:26.111767 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.112645 kubelet[2615]: E0813 00:16:26.112571 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.112645 kubelet[2615]: W0813 00:16:26.112585 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.112645 kubelet[2615]: E0813 00:16:26.112597 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.114198 kubelet[2615]: E0813 00:16:26.114105 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.114198 kubelet[2615]: W0813 00:16:26.114119 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.114198 kubelet[2615]: E0813 00:16:26.114147 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.115300 kubelet[2615]: E0813 00:16:26.114578 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.115300 kubelet[2615]: W0813 00:16:26.114591 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.115300 kubelet[2615]: E0813 00:16:26.114610 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.117108 kubelet[2615]: E0813 00:16:26.116529 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.117372 kubelet[2615]: W0813 00:16:26.117222 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.117372 kubelet[2615]: E0813 00:16:26.117247 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.117372 kubelet[2615]: I0813 00:16:26.117315 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/15e02000-307e-46ce-8cf9-fed7fd6d6dd8-varrun\") pod \"csi-node-driver-qctbn\" (UID: \"15e02000-307e-46ce-8cf9-fed7fd6d6dd8\") " pod="calico-system/csi-node-driver-qctbn"
Aug 13 00:16:26.118394 kubelet[2615]: E0813 00:16:26.118199 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.118394 kubelet[2615]: W0813 00:16:26.118215 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.118394 kubelet[2615]: E0813 00:16:26.118238 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.118394 kubelet[2615]: I0813 00:16:26.118262 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15e02000-307e-46ce-8cf9-fed7fd6d6dd8-kubelet-dir\") pod \"csi-node-driver-qctbn\" (UID: \"15e02000-307e-46ce-8cf9-fed7fd6d6dd8\") " pod="calico-system/csi-node-driver-qctbn"
Aug 13 00:16:26.121315 kubelet[2615]: E0813 00:16:26.120355 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.121500 kubelet[2615]: W0813 00:16:26.121424 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.121638 kubelet[2615]: E0813 00:16:26.121561 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.121638 kubelet[2615]: I0813 00:16:26.121600 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/15e02000-307e-46ce-8cf9-fed7fd6d6dd8-registration-dir\") pod \"csi-node-driver-qctbn\" (UID: \"15e02000-307e-46ce-8cf9-fed7fd6d6dd8\") " pod="calico-system/csi-node-driver-qctbn"
Aug 13 00:16:26.122000 kubelet[2615]: E0813 00:16:26.121923 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.122000 kubelet[2615]: W0813 00:16:26.121935 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.122234 kubelet[2615]: E0813 00:16:26.122214 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.123546 kubelet[2615]: E0813 00:16:26.123383 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.123546 kubelet[2615]: W0813 00:16:26.123398 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.123699 kubelet[2615]: E0813 00:16:26.123684 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.124399 kubelet[2615]: E0813 00:16:26.124368 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.124399 kubelet[2615]: W0813 00:16:26.124382 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.124651 kubelet[2615]: E0813 00:16:26.124568 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.124651 kubelet[2615]: I0813 00:16:26.124601 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/15e02000-307e-46ce-8cf9-fed7fd6d6dd8-socket-dir\") pod \"csi-node-driver-qctbn\" (UID: \"15e02000-307e-46ce-8cf9-fed7fd6d6dd8\") " pod="calico-system/csi-node-driver-qctbn"
Aug 13 00:16:26.125488 kubelet[2615]: E0813 00:16:26.125472 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.126084 kubelet[2615]: W0813 00:16:26.125570 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.126200 kubelet[2615]: E0813 00:16:26.126168 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.126561 kubelet[2615]: E0813 00:16:26.126472 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.126561 kubelet[2615]: W0813 00:16:26.126485 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.126561 kubelet[2615]: E0813 00:16:26.126497 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.127665 kubelet[2615]: E0813 00:16:26.127558 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.127665 kubelet[2615]: W0813 00:16:26.127572 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.127665 kubelet[2615]: E0813 00:16:26.127587 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.127934 kubelet[2615]: E0813 00:16:26.127853 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.127934 kubelet[2615]: W0813 00:16:26.127865 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.127934 kubelet[2615]: E0813 00:16:26.127875 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.129358 kubelet[2615]: E0813 00:16:26.129339 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.129528 kubelet[2615]: W0813 00:16:26.129444 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.129528 kubelet[2615]: E0813 00:16:26.129462 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.129766 kubelet[2615]: E0813 00:16:26.129754 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.129825 kubelet[2615]: W0813 00:16:26.129814 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.129959 kubelet[2615]: E0813 00:16:26.129872 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:16:26.130358 kubelet[2615]: E0813 00:16:26.130237 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:16:26.130358 kubelet[2615]: W0813 00:16:26.130250 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:16:26.130358 kubelet[2615]: E0813 00:16:26.130261 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:16:26.130719 kubelet[2615]: I0813 00:16:26.130513 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hl9x\" (UniqueName: \"kubernetes.io/projected/15e02000-307e-46ce-8cf9-fed7fd6d6dd8-kube-api-access-4hl9x\") pod \"csi-node-driver-qctbn\" (UID: \"15e02000-307e-46ce-8cf9-fed7fd6d6dd8\") " pod="calico-system/csi-node-driver-qctbn" Aug 13 00:16:26.131426 kubelet[2615]: E0813 00:16:26.131343 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.131426 kubelet[2615]: W0813 00:16:26.131361 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.131426 kubelet[2615]: E0813 00:16:26.131382 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.131782 kubelet[2615]: E0813 00:16:26.131726 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.131782 kubelet[2615]: W0813 00:16:26.131746 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.131782 kubelet[2615]: E0813 00:16:26.131757 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.193419 containerd[1487]: time="2025-08-13T00:16:26.193353423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-49qbp,Uid:5ece67d1-c489-4aa1-86a7-1a7319a790d7,Namespace:calico-system,Attempt:0,}" Aug 13 00:16:26.223003 containerd[1487]: time="2025-08-13T00:16:26.222811698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:26.223003 containerd[1487]: time="2025-08-13T00:16:26.222897459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:26.223003 containerd[1487]: time="2025-08-13T00:16:26.222922859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:26.224422 containerd[1487]: time="2025-08-13T00:16:26.224293832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:26.233323 kubelet[2615]: E0813 00:16:26.232720 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.233323 kubelet[2615]: W0813 00:16:26.232750 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.233323 kubelet[2615]: E0813 00:16:26.232775 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.236669 kubelet[2615]: E0813 00:16:26.235858 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.236669 kubelet[2615]: W0813 00:16:26.235987 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.237600 kubelet[2615]: E0813 00:16:26.236190 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.240617 kubelet[2615]: E0813 00:16:26.238266 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.240617 kubelet[2615]: W0813 00:16:26.240085 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.240617 kubelet[2615]: E0813 00:16:26.240177 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.243356 kubelet[2615]: E0813 00:16:26.242602 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.243356 kubelet[2615]: W0813 00:16:26.242639 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.245885 kubelet[2615]: E0813 00:16:26.244574 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.245885 kubelet[2615]: E0813 00:16:26.245764 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.245885 kubelet[2615]: W0813 00:16:26.245783 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.245885 kubelet[2615]: E0813 00:16:26.245822 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.246311 kubelet[2615]: E0813 00:16:26.246080 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.246311 kubelet[2615]: W0813 00:16:26.246094 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.246311 kubelet[2615]: E0813 00:16:26.246238 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.246644 kubelet[2615]: E0813 00:16:26.246578 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.246644 kubelet[2615]: W0813 00:16:26.246597 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.246644 kubelet[2615]: E0813 00:16:26.246639 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.249925 kubelet[2615]: E0813 00:16:26.249902 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.249925 kubelet[2615]: W0813 00:16:26.249921 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.250039 kubelet[2615]: E0813 00:16:26.249955 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.250288 kubelet[2615]: E0813 00:16:26.250254 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.250288 kubelet[2615]: W0813 00:16:26.250282 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.250387 kubelet[2615]: E0813 00:16:26.250363 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.250704 kubelet[2615]: E0813 00:16:26.250668 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.250704 kubelet[2615]: W0813 00:16:26.250685 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.250792 kubelet[2615]: E0813 00:16:26.250746 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.251830 kubelet[2615]: E0813 00:16:26.250902 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.251830 kubelet[2615]: W0813 00:16:26.250915 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.251830 kubelet[2615]: E0813 00:16:26.251218 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.251830 kubelet[2615]: W0813 00:16:26.251229 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.251830 kubelet[2615]: E0813 00:16:26.251562 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.251830 kubelet[2615]: W0813 00:16:26.251575 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.251830 kubelet[2615]: E0813 00:16:26.251588 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.251830 kubelet[2615]: E0813 00:16:26.251772 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.251830 kubelet[2615]: E0813 00:16:26.251796 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.252585 kubelet[2615]: E0813 00:16:26.252567 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.252585 kubelet[2615]: W0813 00:16:26.252582 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.252667 kubelet[2615]: E0813 00:16:26.252605 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.253656 kubelet[2615]: E0813 00:16:26.253621 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.253656 kubelet[2615]: W0813 00:16:26.253643 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.253789 kubelet[2615]: E0813 00:16:26.253766 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.255501 kubelet[2615]: E0813 00:16:26.255460 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.255501 kubelet[2615]: W0813 00:16:26.255480 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.255709 kubelet[2615]: E0813 00:16:26.255645 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.256547 kubelet[2615]: E0813 00:16:26.256515 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.256547 kubelet[2615]: W0813 00:16:26.256536 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.257004 kubelet[2615]: E0813 00:16:26.256867 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.257004 kubelet[2615]: E0813 00:16:26.256965 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.257239 kubelet[2615]: W0813 00:16:26.257116 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.257300 kubelet[2615]: E0813 00:16:26.257257 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.257634 kubelet[2615]: E0813 00:16:26.257530 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.257634 kubelet[2615]: W0813 00:16:26.257545 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.257634 kubelet[2615]: E0813 00:16:26.257577 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.258745 kubelet[2615]: E0813 00:16:26.258725 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.259444 kubelet[2615]: W0813 00:16:26.258988 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.259444 kubelet[2615]: E0813 00:16:26.259024 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.259642 kubelet[2615]: E0813 00:16:26.259579 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.260322 kubelet[2615]: W0813 00:16:26.260301 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.260466 kubelet[2615]: E0813 00:16:26.260429 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.260717 systemd[1]: Started cri-containerd-8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2.scope - libcontainer container 8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2. Aug 13 00:16:26.262178 kubelet[2615]: E0813 00:16:26.262017 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.262178 kubelet[2615]: W0813 00:16:26.262035 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.262178 kubelet[2615]: E0813 00:16:26.262082 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.262555 kubelet[2615]: E0813 00:16:26.262540 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.262776 kubelet[2615]: W0813 00:16:26.262622 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.262776 kubelet[2615]: E0813 00:16:26.262652 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.263503 kubelet[2615]: E0813 00:16:26.263324 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.263503 kubelet[2615]: W0813 00:16:26.263353 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.263503 kubelet[2615]: E0813 00:16:26.263369 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.263797 kubelet[2615]: E0813 00:16:26.263782 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.264105 kubelet[2615]: W0813 00:16:26.264087 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.264210 kubelet[2615]: E0813 00:16:26.264197 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:26.283485 kubelet[2615]: E0813 00:16:26.283455 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:26.283723 kubelet[2615]: W0813 00:16:26.283637 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:26.283723 kubelet[2615]: E0813 00:16:26.283669 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:26.324291 containerd[1487]: time="2025-08-13T00:16:26.323196194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-49qbp,Uid:5ece67d1-c489-4aa1-86a7-1a7319a790d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2\"" Aug 13 00:16:27.667963 kubelet[2615]: E0813 00:16:27.665609 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qctbn" podUID="15e02000-307e-46ce-8cf9-fed7fd6d6dd8" Aug 13 00:16:27.681448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1347266722.mount: Deactivated successfully. Aug 13 00:16:28.995319 containerd[1487]: time="2025-08-13T00:16:28.995218348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:28.997962 containerd[1487]: time="2025-08-13T00:16:28.997801691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 13 00:16:28.998881 containerd[1487]: time="2025-08-13T00:16:28.998835740Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:29.003393 containerd[1487]: time="2025-08-13T00:16:29.003326820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:29.006622 containerd[1487]: time="2025-08-13T00:16:29.005921843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.919228174s" Aug 13 00:16:29.006622 containerd[1487]: time="2025-08-13T00:16:29.005965683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:16:29.009512 containerd[1487]: time="2025-08-13T00:16:29.008871308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:16:29.022011 containerd[1487]: time="2025-08-13T00:16:29.021967263Z" level=info msg="CreateContainer within sandbox \"adab09c1309e805f4bc4f174658aaa389129c488af3090483365675439ea015e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:16:29.048870 containerd[1487]: time="2025-08-13T00:16:29.048798459Z" level=info msg="CreateContainer within sandbox \"adab09c1309e805f4bc4f174658aaa389129c488af3090483365675439ea015e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6f67d20fcc80116c1cdb8ae55d475ad0d3a67ee21877b37a751691bd8d653a41\"" Aug 13 00:16:29.052190 containerd[1487]: time="2025-08-13T00:16:29.051396761Z" level=info msg="StartContainer for \"6f67d20fcc80116c1cdb8ae55d475ad0d3a67ee21877b37a751691bd8d653a41\"" Aug 13 00:16:29.092497 systemd[1]: Started cri-containerd-6f67d20fcc80116c1cdb8ae55d475ad0d3a67ee21877b37a751691bd8d653a41.scope - libcontainer container 6f67d20fcc80116c1cdb8ae55d475ad0d3a67ee21877b37a751691bd8d653a41. 
Aug 13 00:16:29.132895 containerd[1487]: time="2025-08-13T00:16:29.132791355Z" level=info msg="StartContainer for \"6f67d20fcc80116c1cdb8ae55d475ad0d3a67ee21877b37a751691bd8d653a41\" returns successfully" Aug 13 00:16:29.665922 kubelet[2615]: E0813 00:16:29.665859 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qctbn" podUID="15e02000-307e-46ce-8cf9-fed7fd6d6dd8" Aug 13 00:16:29.800223 kubelet[2615]: I0813 00:16:29.799421 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dc74bb67d-spxtt" podStartSLOduration=1.8769794389999999 podStartE2EDuration="4.799399361s" podCreationTimestamp="2025-08-13 00:16:25 +0000 UTC" firstStartedPulling="2025-08-13 00:16:26.085513818 +0000 UTC m=+26.560027331" lastFinishedPulling="2025-08-13 00:16:29.00793374 +0000 UTC m=+29.482447253" observedRunningTime="2025-08-13 00:16:29.796479655 +0000 UTC m=+30.270993168" watchObservedRunningTime="2025-08-13 00:16:29.799399361 +0000 UTC m=+30.273912914" Aug 13 00:16:29.839632 kubelet[2615]: E0813 00:16:29.839586 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:29.839768 kubelet[2615]: W0813 00:16:29.839647 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:29.839768 kubelet[2615]: E0813 00:16:29.839679 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:29.840186 kubelet[2615]: E0813 00:16:29.840120 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:29.840321 kubelet[2615]: W0813 00:16:29.840160 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:29.840321 kubelet[2615]: E0813 00:16:29.840235 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:29.840648 kubelet[2615]: E0813 00:16:29.840559 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:29.840648 kubelet[2615]: W0813 00:16:29.840599 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:29.840880 kubelet[2615]: E0813 00:16:29.840651 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:29.841360 kubelet[2615]: E0813 00:16:29.840970 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:29.841360 kubelet[2615]: W0813 00:16:29.840998 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:29.841360 kubelet[2615]: E0813 00:16:29.841012 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:16:29.841555 kubelet[2615]: E0813 00:16:29.841394 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:16:29.841555 kubelet[2615]: W0813 00:16:29.841405 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:16:29.841555 kubelet[2615]: E0813 00:16:29.841416 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:16:30.489388 containerd[1487]: time="2025-08-13T00:16:30.489184405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:30.492343 containerd[1487]: time="2025-08-13T00:16:30.492269271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:16:30.494394 containerd[1487]: time="2025-08-13T00:16:30.494021126Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:30.499300 containerd[1487]: time="2025-08-13T00:16:30.499246971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:30.501656 containerd[1487]: time="2025-08-13T00:16:30.500870665Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.491948156s" Aug 13 00:16:30.501656 containerd[1487]: time="2025-08-13T00:16:30.500941586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:16:30.504850 containerd[1487]: time="2025-08-13T00:16:30.504761299Z" level=info msg="CreateContainer within sandbox \"8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:16:30.523511 containerd[1487]: time="2025-08-13T00:16:30.523457139Z" level=info msg="CreateContainer within sandbox \"8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b\"" Aug 13 00:16:30.526355 containerd[1487]: time="2025-08-13T00:16:30.524205386Z" level=info msg="StartContainer for \"ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b\"" Aug 13 00:16:30.581574 systemd[1]: Started cri-containerd-ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b.scope - libcontainer container ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b. Aug 13 00:16:30.622337 containerd[1487]: time="2025-08-13T00:16:30.622246509Z" level=info msg="StartContainer for \"ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b\" returns successfully" Aug 13 00:16:30.638580 systemd[1]: cri-containerd-ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b.scope: Deactivated successfully. Aug 13 00:16:30.671256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b-rootfs.mount: Deactivated successfully. 
Aug 13 00:16:30.774047 containerd[1487]: time="2025-08-13T00:16:30.773848812Z" level=info msg="shim disconnected" id=ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b namespace=k8s.io Aug 13 00:16:30.774047 containerd[1487]: time="2025-08-13T00:16:30.773919852Z" level=warning msg="cleaning up after shim disconnected" id=ae5e586f7472fe73c130c0d657e16c44a1f8e95a3c1bb11555626a691953982b namespace=k8s.io Aug 13 00:16:30.774047 containerd[1487]: time="2025-08-13T00:16:30.773933772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:16:30.781185 kubelet[2615]: I0813 00:16:30.781060 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:16:31.666304 kubelet[2615]: E0813 00:16:31.665828 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qctbn" podUID="15e02000-307e-46ce-8cf9-fed7fd6d6dd8" Aug 13 00:16:31.790981 containerd[1487]: time="2025-08-13T00:16:31.790920020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:16:33.665031 kubelet[2615]: E0813 00:16:33.664977 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qctbn" podUID="15e02000-307e-46ce-8cf9-fed7fd6d6dd8" Aug 13 00:16:33.835536 kubelet[2615]: I0813 00:16:33.835120 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:16:34.107002 containerd[1487]: time="2025-08-13T00:16:34.106842081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:34.108752 containerd[1487]: 
time="2025-08-13T00:16:34.108668895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:16:34.109581 containerd[1487]: time="2025-08-13T00:16:34.109500462Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:34.113793 containerd[1487]: time="2025-08-13T00:16:34.113715815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:34.115172 containerd[1487]: time="2025-08-13T00:16:34.114518422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.3235518s" Aug 13 00:16:34.115172 containerd[1487]: time="2025-08-13T00:16:34.114558782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:16:34.118599 containerd[1487]: time="2025-08-13T00:16:34.118551774Z" level=info msg="CreateContainer within sandbox \"8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:16:34.136786 containerd[1487]: time="2025-08-13T00:16:34.136664758Z" level=info msg="CreateContainer within sandbox \"8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d\"" Aug 13 00:16:34.138313 
containerd[1487]: time="2025-08-13T00:16:34.137386083Z" level=info msg="StartContainer for \"79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d\"" Aug 13 00:16:34.179490 systemd[1]: Started cri-containerd-79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d.scope - libcontainer container 79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d. Aug 13 00:16:34.211408 containerd[1487]: time="2025-08-13T00:16:34.211033189Z" level=info msg="StartContainer for \"79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d\" returns successfully" Aug 13 00:16:34.742794 containerd[1487]: time="2025-08-13T00:16:34.742711617Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:16:34.748208 systemd[1]: cri-containerd-79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d.scope: Deactivated successfully. Aug 13 00:16:34.753206 kubelet[2615]: I0813 00:16:34.752956 2615 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:16:34.785149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d-rootfs.mount: Deactivated successfully. Aug 13 00:16:34.826758 systemd[1]: Created slice kubepods-burstable-podfad88c9c_63aa_46b4_a961_f6ee4056948d.slice - libcontainer container kubepods-burstable-podfad88c9c_63aa_46b4_a961_f6ee4056948d.slice. Aug 13 00:16:34.841107 systemd[1]: Created slice kubepods-besteffort-podf9823515_dfc4_4fbb_a853_8b1f7dc08f4c.slice - libcontainer container kubepods-besteffort-podf9823515_dfc4_4fbb_a853_8b1f7dc08f4c.slice. 
Aug 13 00:16:34.860654 systemd[1]: Created slice kubepods-burstable-pod2776ab78_3da7_4b02_86ed_3ed0ba1a8679.slice - libcontainer container kubepods-burstable-pod2776ab78_3da7_4b02_86ed_3ed0ba1a8679.slice. Aug 13 00:16:34.872417 containerd[1487]: time="2025-08-13T00:16:34.872347687Z" level=info msg="shim disconnected" id=79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d namespace=k8s.io Aug 13 00:16:34.873632 containerd[1487]: time="2025-08-13T00:16:34.872586929Z" level=warning msg="cleaning up after shim disconnected" id=79eacea243eafd74d3f5b2eafdacc35dc93182633919753bc1329325c582ff6d namespace=k8s.io Aug 13 00:16:34.873632 containerd[1487]: time="2025-08-13T00:16:34.872603529Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:16:34.876869 systemd[1]: Created slice kubepods-besteffort-pod58f3844d_62f0_42cd_b23f_5ade8a1b059c.slice - libcontainer container kubepods-besteffort-pod58f3844d_62f0_42cd_b23f_5ade8a1b059c.slice. Aug 13 00:16:34.890851 systemd[1]: Created slice kubepods-besteffort-pod27660f5a_c4be_4a6f_bd2d_93f2bdfb3e66.slice - libcontainer container kubepods-besteffort-pod27660f5a_c4be_4a6f_bd2d_93f2bdfb3e66.slice. Aug 13 00:16:34.906575 systemd[1]: Created slice kubepods-besteffort-pod00aebb23_204a_4ca9_bec2_40b1eea2bd4a.slice - libcontainer container kubepods-besteffort-pod00aebb23_204a_4ca9_bec2_40b1eea2bd4a.slice. Aug 13 00:16:34.916386 systemd[1]: Created slice kubepods-besteffort-podf295d8fb_d7de_4da2_8e3a_8a17fac1f430.slice - libcontainer container kubepods-besteffort-podf295d8fb_d7de_4da2_8e3a_8a17fac1f430.slice. 
Aug 13 00:16:34.920850 kubelet[2615]: I0813 00:16:34.920412 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58f3844d-62f0-42cd-b23f-5ade8a1b059c-calico-apiserver-certs\") pod \"calico-apiserver-765654ff88-wtksk\" (UID: \"58f3844d-62f0-42cd-b23f-5ade8a1b059c\") " pod="calico-apiserver/calico-apiserver-765654ff88-wtksk" Aug 13 00:16:34.921962 kubelet[2615]: I0813 00:16:34.921398 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00aebb23-204a-4ca9-bec2-40b1eea2bd4a-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-nlr8k\" (UID: \"00aebb23-204a-4ca9-bec2-40b1eea2bd4a\") " pod="calico-system/goldmane-768f4c5c69-nlr8k" Aug 13 00:16:34.921962 kubelet[2615]: I0813 00:16:34.921443 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52gfh\" (UniqueName: \"kubernetes.io/projected/f9823515-dfc4-4fbb-a853-8b1f7dc08f4c-kube-api-access-52gfh\") pod \"calico-kube-controllers-b89966564-sv4v2\" (UID: \"f9823515-dfc4-4fbb-a853-8b1f7dc08f4c\") " pod="calico-system/calico-kube-controllers-b89966564-sv4v2" Aug 13 00:16:34.921962 kubelet[2615]: I0813 00:16:34.921667 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss78c\" (UniqueName: \"kubernetes.io/projected/00aebb23-204a-4ca9-bec2-40b1eea2bd4a-kube-api-access-ss78c\") pod \"goldmane-768f4c5c69-nlr8k\" (UID: \"00aebb23-204a-4ca9-bec2-40b1eea2bd4a\") " pod="calico-system/goldmane-768f4c5c69-nlr8k" Aug 13 00:16:34.921962 kubelet[2615]: I0813 00:16:34.921696 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-whisker-backend-key-pair\") 
pod \"whisker-5fb6587569-g54r9\" (UID: \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\") " pod="calico-system/whisker-5fb6587569-g54r9" Aug 13 00:16:34.921962 kubelet[2615]: I0813 00:16:34.921913 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9823515-dfc4-4fbb-a853-8b1f7dc08f4c-tigera-ca-bundle\") pod \"calico-kube-controllers-b89966564-sv4v2\" (UID: \"f9823515-dfc4-4fbb-a853-8b1f7dc08f4c\") " pod="calico-system/calico-kube-controllers-b89966564-sv4v2" Aug 13 00:16:34.927409 kubelet[2615]: I0813 00:16:34.924545 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/00aebb23-204a-4ca9-bec2-40b1eea2bd4a-goldmane-key-pair\") pod \"goldmane-768f4c5c69-nlr8k\" (UID: \"00aebb23-204a-4ca9-bec2-40b1eea2bd4a\") " pod="calico-system/goldmane-768f4c5c69-nlr8k" Aug 13 00:16:34.927409 kubelet[2615]: I0813 00:16:34.924614 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2776ab78-3da7-4b02-86ed-3ed0ba1a8679-config-volume\") pod \"coredns-668d6bf9bc-hnn7j\" (UID: \"2776ab78-3da7-4b02-86ed-3ed0ba1a8679\") " pod="kube-system/coredns-668d6bf9bc-hnn7j" Aug 13 00:16:34.927409 kubelet[2615]: I0813 00:16:34.924657 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00aebb23-204a-4ca9-bec2-40b1eea2bd4a-config\") pod \"goldmane-768f4c5c69-nlr8k\" (UID: \"00aebb23-204a-4ca9-bec2-40b1eea2bd4a\") " pod="calico-system/goldmane-768f4c5c69-nlr8k" Aug 13 00:16:34.927409 kubelet[2615]: I0813 00:16:34.924700 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhmvm\" (UniqueName: 
\"kubernetes.io/projected/2776ab78-3da7-4b02-86ed-3ed0ba1a8679-kube-api-access-mhmvm\") pod \"coredns-668d6bf9bc-hnn7j\" (UID: \"2776ab78-3da7-4b02-86ed-3ed0ba1a8679\") " pod="kube-system/coredns-668d6bf9bc-hnn7j" Aug 13 00:16:34.927409 kubelet[2615]: I0813 00:16:34.924746 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fad88c9c-63aa-46b4-a961-f6ee4056948d-config-volume\") pod \"coredns-668d6bf9bc-xvrh7\" (UID: \"fad88c9c-63aa-46b4-a961-f6ee4056948d\") " pod="kube-system/coredns-668d6bf9bc-xvrh7" Aug 13 00:16:34.927620 kubelet[2615]: I0813 00:16:34.924766 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pzbj\" (UniqueName: \"kubernetes.io/projected/fad88c9c-63aa-46b4-a961-f6ee4056948d-kube-api-access-6pzbj\") pod \"coredns-668d6bf9bc-xvrh7\" (UID: \"fad88c9c-63aa-46b4-a961-f6ee4056948d\") " pod="kube-system/coredns-668d6bf9bc-xvrh7" Aug 13 00:16:34.927620 kubelet[2615]: I0813 00:16:34.924793 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-whisker-ca-bundle\") pod \"whisker-5fb6587569-g54r9\" (UID: \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\") " pod="calico-system/whisker-5fb6587569-g54r9" Aug 13 00:16:34.927620 kubelet[2615]: I0813 00:16:34.924817 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksdbq\" (UniqueName: \"kubernetes.io/projected/58f3844d-62f0-42cd-b23f-5ade8a1b059c-kube-api-access-ksdbq\") pod \"calico-apiserver-765654ff88-wtksk\" (UID: \"58f3844d-62f0-42cd-b23f-5ade8a1b059c\") " pod="calico-apiserver/calico-apiserver-765654ff88-wtksk" Aug 13 00:16:34.927620 kubelet[2615]: I0813 00:16:34.924839 2615 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84dq\" (UniqueName: \"kubernetes.io/projected/27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66-kube-api-access-r84dq\") pod \"calico-apiserver-765654ff88-hg6lz\" (UID: \"27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66\") " pod="calico-apiserver/calico-apiserver-765654ff88-hg6lz" Aug 13 00:16:34.927620 kubelet[2615]: I0813 00:16:34.924859 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66-calico-apiserver-certs\") pod \"calico-apiserver-765654ff88-hg6lz\" (UID: \"27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66\") " pod="calico-apiserver/calico-apiserver-765654ff88-hg6lz" Aug 13 00:16:34.927733 kubelet[2615]: I0813 00:16:34.924898 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs9ff\" (UniqueName: \"kubernetes.io/projected/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-kube-api-access-gs9ff\") pod \"whisker-5fb6587569-g54r9\" (UID: \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\") " pod="calico-system/whisker-5fb6587569-g54r9" Aug 13 00:16:35.140390 containerd[1487]: time="2025-08-13T00:16:35.139224869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xvrh7,Uid:fad88c9c-63aa-46b4-a961-f6ee4056948d,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:35.154596 containerd[1487]: time="2025-08-13T00:16:35.154516108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b89966564-sv4v2,Uid:f9823515-dfc4-4fbb-a853-8b1f7dc08f4c,Namespace:calico-system,Attempt:0,}" Aug 13 00:16:35.174480 containerd[1487]: time="2025-08-13T00:16:35.174401583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnn7j,Uid:2776ab78-3da7-4b02-86ed-3ed0ba1a8679,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:35.186481 containerd[1487]: 
time="2025-08-13T00:16:35.186423357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765654ff88-wtksk,Uid:58f3844d-62f0-42cd-b23f-5ade8a1b059c,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:16:35.208680 containerd[1487]: time="2025-08-13T00:16:35.208137807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765654ff88-hg6lz,Uid:27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:16:35.226425 containerd[1487]: time="2025-08-13T00:16:35.226289348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nlr8k,Uid:00aebb23-204a-4ca9-bec2-40b1eea2bd4a,Namespace:calico-system,Attempt:0,}" Aug 13 00:16:35.245006 containerd[1487]: time="2025-08-13T00:16:35.244676692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fb6587569-g54r9,Uid:f295d8fb-d7de-4da2-8e3a-8a17fac1f430,Namespace:calico-system,Attempt:0,}" Aug 13 00:16:35.419652 containerd[1487]: time="2025-08-13T00:16:35.419601697Z" level=error msg="Failed to destroy network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.423752 containerd[1487]: time="2025-08-13T00:16:35.423586048Z" level=error msg="encountered an error cleaning up failed sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.423752 containerd[1487]: time="2025-08-13T00:16:35.423656168Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-xvrh7,Uid:fad88c9c-63aa-46b4-a961-f6ee4056948d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.423914 kubelet[2615]: E0813 00:16:35.423865 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.423955 kubelet[2615]: E0813 00:16:35.423932 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xvrh7" Aug 13 00:16:35.423978 kubelet[2615]: E0813 00:16:35.423951 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xvrh7" Aug 13 00:16:35.424089 kubelet[2615]: E0813 00:16:35.423989 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-xvrh7_kube-system(fad88c9c-63aa-46b4-a961-f6ee4056948d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xvrh7_kube-system(fad88c9c-63aa-46b4-a961-f6ee4056948d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xvrh7" podUID="fad88c9c-63aa-46b4-a961-f6ee4056948d" Aug 13 00:16:35.446483 containerd[1487]: time="2025-08-13T00:16:35.446348345Z" level=error msg="Failed to destroy network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.448497 containerd[1487]: time="2025-08-13T00:16:35.448316601Z" level=error msg="encountered an error cleaning up failed sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.448497 containerd[1487]: time="2025-08-13T00:16:35.448397161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnn7j,Uid:2776ab78-3da7-4b02-86ed-3ed0ba1a8679,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Aug 13 00:16:35.448649 kubelet[2615]: E0813 00:16:35.448619 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.448696 kubelet[2615]: E0813 00:16:35.448676 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hnn7j" Aug 13 00:16:35.448722 kubelet[2615]: E0813 00:16:35.448695 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hnn7j" Aug 13 00:16:35.448784 kubelet[2615]: E0813 00:16:35.448732 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hnn7j_kube-system(2776ab78-3da7-4b02-86ed-3ed0ba1a8679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hnn7j_kube-system(2776ab78-3da7-4b02-86ed-3ed0ba1a8679)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hnn7j" podUID="2776ab78-3da7-4b02-86ed-3ed0ba1a8679" Aug 13 00:16:35.472739 containerd[1487]: time="2025-08-13T00:16:35.472588150Z" level=error msg="Failed to destroy network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.473328 containerd[1487]: time="2025-08-13T00:16:35.473136754Z" level=error msg="encountered an error cleaning up failed sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.473328 containerd[1487]: time="2025-08-13T00:16:35.473210115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b89966564-sv4v2,Uid:f9823515-dfc4-4fbb-a853-8b1f7dc08f4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.473881 kubelet[2615]: E0813 00:16:35.473646 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.473972 kubelet[2615]: E0813 00:16:35.473896 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b89966564-sv4v2" Aug 13 00:16:35.473972 kubelet[2615]: E0813 00:16:35.473928 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b89966564-sv4v2" Aug 13 00:16:35.474031 kubelet[2615]: E0813 00:16:35.473982 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b89966564-sv4v2_calico-system(f9823515-dfc4-4fbb-a853-8b1f7dc08f4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b89966564-sv4v2_calico-system(f9823515-dfc4-4fbb-a853-8b1f7dc08f4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b89966564-sv4v2" podUID="f9823515-dfc4-4fbb-a853-8b1f7dc08f4c" Aug 13 00:16:35.482598 
containerd[1487]: time="2025-08-13T00:16:35.482418947Z" level=error msg="Failed to destroy network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.485633 containerd[1487]: time="2025-08-13T00:16:35.485577251Z" level=error msg="encountered an error cleaning up failed sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.485748 containerd[1487]: time="2025-08-13T00:16:35.485649812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nlr8k,Uid:00aebb23-204a-4ca9-bec2-40b1eea2bd4a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.485928 kubelet[2615]: E0813 00:16:35.485849 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.485928 kubelet[2615]: E0813 00:16:35.485906 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nlr8k" Aug 13 00:16:35.485928 kubelet[2615]: E0813 00:16:35.485926 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nlr8k" Aug 13 00:16:35.486030 kubelet[2615]: E0813 00:16:35.485974 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-nlr8k_calico-system(00aebb23-204a-4ca9-bec2-40b1eea2bd4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-nlr8k_calico-system(00aebb23-204a-4ca9-bec2-40b1eea2bd4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nlr8k" podUID="00aebb23-204a-4ca9-bec2-40b1eea2bd4a" Aug 13 00:16:35.500959 containerd[1487]: time="2025-08-13T00:16:35.500629209Z" level=error msg="Failed to destroy network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 
00:16:35.502140 containerd[1487]: time="2025-08-13T00:16:35.502088580Z" level=error msg="Failed to destroy network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.502581 containerd[1487]: time="2025-08-13T00:16:35.502536304Z" level=error msg="encountered an error cleaning up failed sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.502738 containerd[1487]: time="2025-08-13T00:16:35.502706505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fb6587569-g54r9,Uid:f295d8fb-d7de-4da2-8e3a-8a17fac1f430,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.502888 containerd[1487]: time="2025-08-13T00:16:35.502858626Z" level=error msg="encountered an error cleaning up failed sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.503021 containerd[1487]: time="2025-08-13T00:16:35.502997467Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-765654ff88-hg6lz,Uid:27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.503121 kubelet[2615]: E0813 00:16:35.503011 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.503121 kubelet[2615]: E0813 00:16:35.503110 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fb6587569-g54r9" Aug 13 00:16:35.503191 kubelet[2615]: E0813 00:16:35.503133 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fb6587569-g54r9" Aug 13 00:16:35.503218 kubelet[2615]: E0813 00:16:35.503184 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"CreatePodSandbox\" for \"whisker-5fb6587569-g54r9_calico-system(f295d8fb-d7de-4da2-8e3a-8a17fac1f430)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fb6587569-g54r9_calico-system(f295d8fb-d7de-4da2-8e3a-8a17fac1f430)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fb6587569-g54r9" podUID="f295d8fb-d7de-4da2-8e3a-8a17fac1f430" Aug 13 00:16:35.504695 kubelet[2615]: E0813 00:16:35.504635 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.504823 kubelet[2615]: E0813 00:16:35.504794 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765654ff88-hg6lz" Aug 13 00:16:35.504898 kubelet[2615]: E0813 00:16:35.504824 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765654ff88-hg6lz" Aug 13 00:16:35.505387 kubelet[2615]: E0813 00:16:35.505029 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-765654ff88-hg6lz_calico-apiserver(27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-765654ff88-hg6lz_calico-apiserver(27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-765654ff88-hg6lz" podUID="27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66" Aug 13 00:16:35.510260 containerd[1487]: time="2025-08-13T00:16:35.510147483Z" level=error msg="Failed to destroy network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.510611 containerd[1487]: time="2025-08-13T00:16:35.510532006Z" level=error msg="encountered an error cleaning up failed sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.510611 containerd[1487]: time="2025-08-13T00:16:35.510585727Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-765654ff88-wtksk,Uid:58f3844d-62f0-42cd-b23f-5ade8a1b059c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.510836 kubelet[2615]: E0813 00:16:35.510793 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:16:35.510921 kubelet[2615]: E0813 00:16:35.510840 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765654ff88-wtksk" Aug 13 00:16:35.510921 kubelet[2615]: E0813 00:16:35.510867 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765654ff88-wtksk" Aug 13 00:16:35.511099 kubelet[2615]: E0813 00:16:35.510943 2615 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-765654ff88-wtksk_calico-apiserver(58f3844d-62f0-42cd-b23f-5ade8a1b059c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-765654ff88-wtksk_calico-apiserver(58f3844d-62f0-42cd-b23f-5ade8a1b059c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-765654ff88-wtksk" podUID="58f3844d-62f0-42cd-b23f-5ade8a1b059c"
Aug 13 00:16:35.674915 systemd[1]: Created slice kubepods-besteffort-pod15e02000_307e_46ce_8cf9_fed7fd6d6dd8.slice - libcontainer container kubepods-besteffort-pod15e02000_307e_46ce_8cf9_fed7fd6d6dd8.slice.
Aug 13 00:16:35.679615 containerd[1487]: time="2025-08-13T00:16:35.679566845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qctbn,Uid:15e02000-307e-46ce-8cf9-fed7fd6d6dd8,Namespace:calico-system,Attempt:0,}"
Aug 13 00:16:35.754946 containerd[1487]: time="2025-08-13T00:16:35.754718472Z" level=error msg="Failed to destroy network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.755485 containerd[1487]: time="2025-08-13T00:16:35.755378117Z" level=error msg="encountered an error cleaning up failed sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.755485 containerd[1487]: time="2025-08-13T00:16:35.755467517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qctbn,Uid:15e02000-307e-46ce-8cf9-fed7fd6d6dd8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.756328 kubelet[2615]: E0813 00:16:35.755881    2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.756328 kubelet[2615]: E0813 00:16:35.755950    2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qctbn"
Aug 13 00:16:35.756328 kubelet[2615]: E0813 00:16:35.755979    2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qctbn"
Aug 13 00:16:35.756813 kubelet[2615]: E0813 00:16:35.756032    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qctbn_calico-system(15e02000-307e-46ce-8cf9-fed7fd6d6dd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qctbn_calico-system(15e02000-307e-46ce-8cf9-fed7fd6d6dd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qctbn" podUID="15e02000-307e-46ce-8cf9-fed7fd6d6dd8"
Aug 13 00:16:35.814748 containerd[1487]: time="2025-08-13T00:16:35.813728492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 00:16:35.814968 kubelet[2615]: I0813 00:16:35.813969    2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28"
Aug 13 00:16:35.818035 containerd[1487]: time="2025-08-13T00:16:35.817837444Z" level=info msg="StopPodSandbox for \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\""
Aug 13 00:16:35.819069 containerd[1487]: time="2025-08-13T00:16:35.818899252Z" level=info msg="Ensure that sandbox 84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28 in task-service has been cleanup successfully"
Aug 13 00:16:35.821388 kubelet[2615]: I0813 00:16:35.819617    2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61"
Aug 13 00:16:35.822607 containerd[1487]: time="2025-08-13T00:16:35.822539841Z" level=info msg="StopPodSandbox for \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\""
Aug 13 00:16:35.823147 containerd[1487]: time="2025-08-13T00:16:35.823115645Z" level=info msg="Ensure that sandbox d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61 in task-service has been cleanup successfully"
Aug 13 00:16:35.830615 kubelet[2615]: I0813 00:16:35.830421    2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719"
Aug 13 00:16:35.833169 containerd[1487]: time="2025-08-13T00:16:35.832734480Z" level=info msg="StopPodSandbox for \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\""
Aug 13 00:16:35.833169 containerd[1487]: time="2025-08-13T00:16:35.832895602Z" level=info msg="Ensure that sandbox e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719 in task-service has been cleanup successfully"
Aug 13 00:16:35.839829 kubelet[2615]: I0813 00:16:35.839803    2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40"
Aug 13 00:16:35.841607 containerd[1487]: time="2025-08-13T00:16:35.841575389Z" level=info msg="StopPodSandbox for \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\""
Aug 13 00:16:35.841874 containerd[1487]: time="2025-08-13T00:16:35.841853631Z" level=info msg="Ensure that sandbox 436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40 in task-service has been cleanup successfully"
Aug 13 00:16:35.848115 kubelet[2615]: I0813 00:16:35.845457    2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa"
Aug 13 00:16:35.850639 containerd[1487]: time="2025-08-13T00:16:35.850531499Z" level=info msg="StopPodSandbox for \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\""
Aug 13 00:16:35.854470 containerd[1487]: time="2025-08-13T00:16:35.854426970Z" level=info msg="Ensure that sandbox 723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa in task-service has been cleanup successfully"
Aug 13 00:16:35.855759 kubelet[2615]: I0813 00:16:35.855732    2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2"
Aug 13 00:16:35.859669 containerd[1487]: time="2025-08-13T00:16:35.859635890Z" level=info msg="StopPodSandbox for \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\""
Aug 13 00:16:35.866670 containerd[1487]: time="2025-08-13T00:16:35.866633625Z" level=info msg="Ensure that sandbox 3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2 in task-service has been cleanup successfully"
Aug 13 00:16:35.869170 kubelet[2615]: I0813 00:16:35.867867    2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0"
Aug 13 00:16:35.876866 kubelet[2615]: I0813 00:16:35.876842    2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254"
Aug 13 00:16:35.877318 containerd[1487]: time="2025-08-13T00:16:35.875039690Z" level=info msg="StopPodSandbox for \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\""
Aug 13 00:16:35.878902 containerd[1487]: time="2025-08-13T00:16:35.878778360Z" level=info msg="StopPodSandbox for \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\""
Aug 13 00:16:35.879553 containerd[1487]: time="2025-08-13T00:16:35.879385484Z" level=info msg="Ensure that sandbox 59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254 in task-service has been cleanup successfully"
Aug 13 00:16:35.880493 containerd[1487]: time="2025-08-13T00:16:35.880461253Z" level=info msg="Ensure that sandbox 3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0 in task-service has been cleanup successfully"
Aug 13 00:16:35.929764 containerd[1487]: time="2025-08-13T00:16:35.929634836Z" level=error msg="StopPodSandbox for \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\" failed" error="failed to destroy network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.931199 containerd[1487]: time="2025-08-13T00:16:35.931159288Z" level=error msg="StopPodSandbox for \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\" failed" error="failed to destroy network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.931671 kubelet[2615]: E0813 00:16:35.931122    2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28"
Aug 13 00:16:35.931671 kubelet[2615]: E0813 00:16:35.931460    2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28"}
Aug 13 00:16:35.931671 kubelet[2615]: E0813 00:16:35.931528    2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58f3844d-62f0-42cd-b23f-5ade8a1b059c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:16:35.931671 kubelet[2615]: E0813 00:16:35.931550    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58f3844d-62f0-42cd-b23f-5ade8a1b059c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-765654ff88-wtksk" podUID="58f3844d-62f0-42cd-b23f-5ade8a1b059c"
Aug 13 00:16:35.934501 kubelet[2615]: E0813 00:16:35.933203    2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61"
Aug 13 00:16:35.934501 kubelet[2615]: E0813 00:16:35.933245    2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61"}
Aug 13 00:16:35.934501 kubelet[2615]: E0813 00:16:35.933298    2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fad88c9c-63aa-46b4-a961-f6ee4056948d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:16:35.934501 kubelet[2615]: E0813 00:16:35.933320    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fad88c9c-63aa-46b4-a961-f6ee4056948d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xvrh7" podUID="fad88c9c-63aa-46b4-a961-f6ee4056948d"
Aug 13 00:16:35.965175 containerd[1487]: time="2025-08-13T00:16:35.965117753Z" level=error msg="StopPodSandbox for \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\" failed" error="failed to destroy network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.965678 kubelet[2615]: E0813 00:16:35.965641    2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719"
Aug 13 00:16:35.965877 kubelet[2615]: E0813 00:16:35.965855    2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719"}
Aug 13 00:16:35.965970 kubelet[2615]: E0813 00:16:35.965956    2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00aebb23-204a-4ca9-bec2-40b1eea2bd4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:16:35.966147 kubelet[2615]: E0813 00:16:35.966123    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00aebb23-204a-4ca9-bec2-40b1eea2bd4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nlr8k" podUID="00aebb23-204a-4ca9-bec2-40b1eea2bd4a"
Aug 13 00:16:35.976539 containerd[1487]: time="2025-08-13T00:16:35.976493402Z" level=error msg="StopPodSandbox for \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\" failed" error="failed to destroy network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.976888 kubelet[2615]: E0813 00:16:35.976849    2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2"
Aug 13 00:16:35.976985 kubelet[2615]: E0813 00:16:35.976900    2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2"}
Aug 13 00:16:35.976985 kubelet[2615]: E0813 00:16:35.976933    2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:16:35.976985 kubelet[2615]: E0813 00:16:35.976969    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fb6587569-g54r9" podUID="f295d8fb-d7de-4da2-8e3a-8a17fac1f430"
Aug 13 00:16:35.977809 containerd[1487]: time="2025-08-13T00:16:35.977621451Z" level=error msg="StopPodSandbox for \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\" failed" error="failed to destroy network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.978317 kubelet[2615]: E0813 00:16:35.978031    2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40"
Aug 13 00:16:35.978317 kubelet[2615]: E0813 00:16:35.978087    2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40"}
Aug 13 00:16:35.978317 kubelet[2615]: E0813 00:16:35.978124    2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9823515-dfc4-4fbb-a853-8b1f7dc08f4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:16:35.978317 kubelet[2615]: E0813 00:16:35.978144    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9823515-dfc4-4fbb-a853-8b1f7dc08f4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b89966564-sv4v2" podUID="f9823515-dfc4-4fbb-a853-8b1f7dc08f4c"
Aug 13 00:16:35.978681 kubelet[2615]: E0813 00:16:35.978597    2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0"
Aug 13 00:16:35.978681 kubelet[2615]: E0813 00:16:35.978640    2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0"}
Aug 13 00:16:35.978681 kubelet[2615]: E0813 00:16:35.978670    2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2776ab78-3da7-4b02-86ed-3ed0ba1a8679\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:16:35.978797 containerd[1487]: time="2025-08-13T00:16:35.978408417Z" level=error msg="StopPodSandbox for \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\" failed" error="failed to destroy network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.978830 kubelet[2615]: E0813 00:16:35.978690    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2776ab78-3da7-4b02-86ed-3ed0ba1a8679\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hnn7j" podUID="2776ab78-3da7-4b02-86ed-3ed0ba1a8679"
Aug 13 00:16:35.980209 containerd[1487]: time="2025-08-13T00:16:35.980098030Z" level=error msg="StopPodSandbox for \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\" failed" error="failed to destroy network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.980379 kubelet[2615]: E0813 00:16:35.980343    2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa"
Aug 13 00:16:35.980425 kubelet[2615]: E0813 00:16:35.980395    2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa"}
Aug 13 00:16:35.980448 kubelet[2615]: E0813 00:16:35.980424    2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15e02000-307e-46ce-8cf9-fed7fd6d6dd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:16:35.980499 kubelet[2615]: E0813 00:16:35.980443    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15e02000-307e-46ce-8cf9-fed7fd6d6dd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qctbn" podUID="15e02000-307e-46ce-8cf9-fed7fd6d6dd8"
Aug 13 00:16:35.983500 containerd[1487]: time="2025-08-13T00:16:35.983463416Z" level=error msg="StopPodSandbox for \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\" failed" error="failed to destroy network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:16:35.983882 kubelet[2615]: E0813 00:16:35.983697    2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254"
Aug 13 00:16:35.983882 kubelet[2615]: E0813 00:16:35.983739    2615 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254"}
Aug 13 00:16:35.983882 kubelet[2615]: E0813 00:16:35.983771    2615 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:16:35.983882 kubelet[2615]: E0813 00:16:35.983788    2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-765654ff88-hg6lz" podUID="27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66"
Aug 13 00:16:36.136428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0-shm.mount: Deactivated successfully.
Aug 13 00:16:36.136773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61-shm.mount: Deactivated successfully.
Aug 13 00:16:40.145613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292080169.mount: Deactivated successfully.
Aug 13 00:16:40.222519 containerd[1487]: time="2025-08-13T00:16:40.222389228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:40.224182 containerd[1487]: time="2025-08-13T00:16:40.223879639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909"
Aug 13 00:16:40.225945 containerd[1487]: time="2025-08-13T00:16:40.225804573Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:40.229850 containerd[1487]: time="2025-08-13T00:16:40.228990035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:40.229850 containerd[1487]: time="2025-08-13T00:16:40.229715600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.415902187s"
Aug 13 00:16:40.229850 containerd[1487]: time="2025-08-13T00:16:40.229751481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\""
Aug 13 00:16:40.252690 containerd[1487]: time="2025-08-13T00:16:40.252632364Z" level=info msg="CreateContainer within sandbox \"8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Aug 13 00:16:40.284375 containerd[1487]: time="2025-08-13T00:16:40.284313310Z" level=info msg="CreateContainer within sandbox \"8aa6a3e9e4f0da781cdabd06f6ab565af770b7fa5170398d53ca558cc1aef7c2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bff2584ab78b29ac2e66fe7960591e6c5594f9857694e7c78708c6243af67b00\""
Aug 13 00:16:40.287208 containerd[1487]: time="2025-08-13T00:16:40.286423565Z" level=info msg="StartContainer for \"bff2584ab78b29ac2e66fe7960591e6c5594f9857694e7c78708c6243af67b00\""
Aug 13 00:16:40.322893 systemd[1]: Started cri-containerd-bff2584ab78b29ac2e66fe7960591e6c5594f9857694e7c78708c6243af67b00.scope - libcontainer container bff2584ab78b29ac2e66fe7960591e6c5594f9857694e7c78708c6243af67b00.
Aug 13 00:16:40.360537 containerd[1487]: time="2025-08-13T00:16:40.360368252Z" level=info msg="StartContainer for \"bff2584ab78b29ac2e66fe7960591e6c5594f9857694e7c78708c6243af67b00\" returns successfully"
Aug 13 00:16:40.524741 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Aug 13 00:16:40.524899 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Aug 13 00:16:40.715989 containerd[1487]: time="2025-08-13T00:16:40.715666904Z" level=info msg="StopPodSandbox for \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\""
Aug 13 00:16:40.924758 kubelet[2615]: I0813 00:16:40.924656    2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-49qbp" podStartSLOduration=2.020124571 podStartE2EDuration="15.924636073s" podCreationTimestamp="2025-08-13 00:16:25 +0000 UTC" firstStartedPulling="2025-08-13 00:16:26.327298633 +0000 UTC m=+26.801812146" lastFinishedPulling="2025-08-13 00:16:40.231810135 +0000 UTC m=+40.706323648" observedRunningTime="2025-08-13 00:16:40.923876788 +0000 UTC m=+41.398390301" watchObservedRunningTime="2025-08-13 00:16:40.924636073 +0000 UTC m=+41.399149586"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.851 [INFO][3802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.851 [INFO][3802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" iface="eth0" netns="/var/run/netns/cni-b0884b41-bf4c-2be7-c4a3-d01d58e4ae66"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.852 [INFO][3802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" iface="eth0" netns="/var/run/netns/cni-b0884b41-bf4c-2be7-c4a3-d01d58e4ae66"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.852 [INFO][3802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" iface="eth0" netns="/var/run/netns/cni-b0884b41-bf4c-2be7-c4a3-d01d58e4ae66"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.852 [INFO][3802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.852 [INFO][3802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.912 [INFO][3815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.912 [INFO][3815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.912 [INFO][3815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.927 [WARNING][3815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.927 [INFO][3815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0"
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.930 [INFO][3815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:16:40.939958 containerd[1487]: 2025-08-13 00:16:40.934 [INFO][3802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2"
Aug 13 00:16:40.940472 containerd[1487]: time="2025-08-13T00:16:40.940402225Z" level=info msg="TearDown network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\" successfully"
Aug 13 00:16:40.940885 containerd[1487]: time="2025-08-13T00:16:40.940804668Z" level=info msg="StopPodSandbox for \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\" returns successfully"
Aug 13 00:16:41.067919 kubelet[2615]: I0813 00:16:41.067862    2615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-whisker-ca-bundle\") pod \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\" (UID: \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\") "
Aug 13 00:16:41.068164 kubelet[2615]: I0813 00:16:41.067946    2615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs9ff\" (UniqueName: \"kubernetes.io/projected/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-kube-api-access-gs9ff\") pod \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\" (UID: \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\") "
Aug 13 00:16:41.068164 kubelet[2615]: I0813 00:16:41.067988    2615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-whisker-backend-key-pair\") pod \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\" (UID: \"f295d8fb-d7de-4da2-8e3a-8a17fac1f430\") "
Aug 13 00:16:41.070350 kubelet[2615]: I0813 00:16:41.069508    2615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f295d8fb-d7de-4da2-8e3a-8a17fac1f430" (UID: "f295d8fb-d7de-4da2-8e3a-8a17fac1f430"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:16:41.073598 kubelet[2615]: I0813 00:16:41.073547    2615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f295d8fb-d7de-4da2-8e3a-8a17fac1f430" (UID: "f295d8fb-d7de-4da2-8e3a-8a17fac1f430"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:16:41.075139 kubelet[2615]: I0813 00:16:41.075088    2615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-kube-api-access-gs9ff" (OuterVolumeSpecName: "kube-api-access-gs9ff") pod "f295d8fb-d7de-4da2-8e3a-8a17fac1f430" (UID: "f295d8fb-d7de-4da2-8e3a-8a17fac1f430"). InnerVolumeSpecName "kube-api-access-gs9ff".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:16:41.148803 systemd[1]: run-netns-cni\x2db0884b41\x2dbf4c\x2d2be7\x2dc4a3\x2dd01d58e4ae66.mount: Deactivated successfully. Aug 13 00:16:41.149442 systemd[1]: var-lib-kubelet-pods-f295d8fb\x2dd7de\x2d4da2\x2d8e3a\x2d8a17fac1f430-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgs9ff.mount: Deactivated successfully. Aug 13 00:16:41.149636 systemd[1]: var-lib-kubelet-pods-f295d8fb\x2dd7de\x2d4da2\x2d8e3a\x2d8a17fac1f430-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:16:41.168527 kubelet[2615]: I0813 00:16:41.168444 2615 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gs9ff\" (UniqueName: \"kubernetes.io/projected/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-kube-api-access-gs9ff\") on node \"ci-4081-3-5-3-d55e308663\" DevicePath \"\"" Aug 13 00:16:41.168527 kubelet[2615]: I0813 00:16:41.168498 2615 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-whisker-backend-key-pair\") on node \"ci-4081-3-5-3-d55e308663\" DevicePath \"\"" Aug 13 00:16:41.168527 kubelet[2615]: I0813 00:16:41.168517 2615 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f295d8fb-d7de-4da2-8e3a-8a17fac1f430-whisker-ca-bundle\") on node \"ci-4081-3-5-3-d55e308663\" DevicePath \"\"" Aug 13 00:16:41.676035 systemd[1]: Removed slice kubepods-besteffort-podf295d8fb_d7de_4da2_8e3a_8a17fac1f430.slice - libcontainer container kubepods-besteffort-podf295d8fb_d7de_4da2_8e3a_8a17fac1f430.slice. 
Aug 13 00:16:41.901975 kubelet[2615]: I0813 00:16:41.901929 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:16:41.995533 systemd[1]: Created slice kubepods-besteffort-podecae2380_09e2_4a03_849f_f74d64a66c53.slice - libcontainer container kubepods-besteffort-podecae2380_09e2_4a03_849f_f74d64a66c53.slice. Aug 13 00:16:42.074671 kubelet[2615]: I0813 00:16:42.074540 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecae2380-09e2-4a03-849f-f74d64a66c53-whisker-backend-key-pair\") pod \"whisker-f6b686755-wwxf5\" (UID: \"ecae2380-09e2-4a03-849f-f74d64a66c53\") " pod="calico-system/whisker-f6b686755-wwxf5" Aug 13 00:16:42.074671 kubelet[2615]: I0813 00:16:42.074624 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecae2380-09e2-4a03-849f-f74d64a66c53-whisker-ca-bundle\") pod \"whisker-f6b686755-wwxf5\" (UID: \"ecae2380-09e2-4a03-849f-f74d64a66c53\") " pod="calico-system/whisker-f6b686755-wwxf5" Aug 13 00:16:42.074671 kubelet[2615]: I0813 00:16:42.074690 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnqjn\" (UniqueName: \"kubernetes.io/projected/ecae2380-09e2-4a03-849f-f74d64a66c53-kube-api-access-xnqjn\") pod \"whisker-f6b686755-wwxf5\" (UID: \"ecae2380-09e2-4a03-849f-f74d64a66c53\") " pod="calico-system/whisker-f6b686755-wwxf5" Aug 13 00:16:42.301777 containerd[1487]: time="2025-08-13T00:16:42.301655811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f6b686755-wwxf5,Uid:ecae2380-09e2-4a03-849f-f74d64a66c53,Namespace:calico-system,Attempt:0,}" Aug 13 00:16:42.598231 systemd-networkd[1399]: cali84761e28e8c: Link UP Aug 13 00:16:42.599259 systemd-networkd[1399]: cali84761e28e8c: Gained carrier Aug 13 00:16:42.634603 
containerd[1487]: 2025-08-13 00:16:42.387 [INFO][3922] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.422 [INFO][3922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0 whisker-f6b686755- calico-system ecae2380-09e2-4a03-849f-f74d64a66c53 920 0 2025-08-13 00:16:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f6b686755 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-3-d55e308663 whisker-f6b686755-wwxf5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali84761e28e8c [] [] }} ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Namespace="calico-system" Pod="whisker-f6b686755-wwxf5" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.422 [INFO][3922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Namespace="calico-system" Pod="whisker-f6b686755-wwxf5" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.524 [INFO][3940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" HandleID="k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.525 [INFO][3940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" 
HandleID="k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-3-d55e308663", "pod":"whisker-f6b686755-wwxf5", "timestamp":"2025-08-13 00:16:42.524941909 +0000 UTC"}, Hostname:"ci-4081-3-5-3-d55e308663", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.525 [INFO][3940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.525 [INFO][3940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.525 [INFO][3940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-3-d55e308663' Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.538 [INFO][3940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.546 [INFO][3940] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.554 [INFO][3940] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.556 [INFO][3940] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.560 [INFO][3940] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.560 [INFO][3940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.563 [INFO][3940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.573 [INFO][3940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.580 [INFO][3940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.193/26] block=192.168.75.192/26 handle="k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.581 [INFO][3940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.193/26] handle="k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.581 [INFO][3940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:16:42.634603 containerd[1487]: 2025-08-13 00:16:42.581 [INFO][3940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.193/26] IPv6=[] ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" HandleID="k8s-pod-network.84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" Aug 13 00:16:42.637039 containerd[1487]: 2025-08-13 00:16:42.584 [INFO][3922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Namespace="calico-system" Pod="whisker-f6b686755-wwxf5" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0", GenerateName:"whisker-f6b686755-", Namespace:"calico-system", SelfLink:"", UID:"ecae2380-09e2-4a03-849f-f74d64a66c53", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f6b686755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"", Pod:"whisker-f6b686755-wwxf5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali84761e28e8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:42.637039 containerd[1487]: 2025-08-13 00:16:42.584 [INFO][3922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.193/32] ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Namespace="calico-system" Pod="whisker-f6b686755-wwxf5" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" Aug 13 00:16:42.637039 containerd[1487]: 2025-08-13 00:16:42.585 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84761e28e8c ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Namespace="calico-system" Pod="whisker-f6b686755-wwxf5" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" Aug 13 00:16:42.637039 containerd[1487]: 2025-08-13 00:16:42.600 [INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Namespace="calico-system" Pod="whisker-f6b686755-wwxf5" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" Aug 13 00:16:42.637039 containerd[1487]: 2025-08-13 00:16:42.602 [INFO][3922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Namespace="calico-system" Pod="whisker-f6b686755-wwxf5" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0", GenerateName:"whisker-f6b686755-", Namespace:"calico-system", SelfLink:"", 
UID:"ecae2380-09e2-4a03-849f-f74d64a66c53", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f6b686755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f", Pod:"whisker-f6b686755-wwxf5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali84761e28e8c", MAC:"56:75:4e:e6:4e:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:42.637039 containerd[1487]: 2025-08-13 00:16:42.625 [INFO][3922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f" Namespace="calico-system" Pod="whisker-f6b686755-wwxf5" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--f6b686755--wwxf5-eth0" Aug 13 00:16:42.708408 containerd[1487]: time="2025-08-13T00:16:42.707786288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:42.708408 containerd[1487]: time="2025-08-13T00:16:42.707851608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:42.708408 containerd[1487]: time="2025-08-13T00:16:42.707869968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:42.708408 containerd[1487]: time="2025-08-13T00:16:42.707961329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:42.726337 kernel: bpftool[4012]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:16:42.735511 systemd[1]: Started cri-containerd-84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f.scope - libcontainer container 84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f. Aug 13 00:16:42.779616 containerd[1487]: time="2025-08-13T00:16:42.779574182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f6b686755-wwxf5,Uid:ecae2380-09e2-4a03-849f-f74d64a66c53,Namespace:calico-system,Attempt:0,} returns sandbox id \"84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f\"" Aug 13 00:16:42.796555 containerd[1487]: time="2025-08-13T00:16:42.796460538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:16:42.949838 systemd-networkd[1399]: vxlan.calico: Link UP Aug 13 00:16:42.949846 systemd-networkd[1399]: vxlan.calico: Gained carrier Aug 13 00:16:43.670412 kubelet[2615]: I0813 00:16:43.670324 2615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f295d8fb-d7de-4da2-8e3a-8a17fac1f430" path="/var/lib/kubelet/pods/f295d8fb-d7de-4da2-8e3a-8a17fac1f430/volumes" Aug 13 00:16:44.370987 systemd-networkd[1399]: cali84761e28e8c: Gained IPv6LL Aug 13 00:16:44.513879 containerd[1487]: time="2025-08-13T00:16:44.513786971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:44.515740 
containerd[1487]: time="2025-08-13T00:16:44.515463702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:16:44.516950 containerd[1487]: time="2025-08-13T00:16:44.516882671Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:44.520324 containerd[1487]: time="2025-08-13T00:16:44.520077132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:44.521401 containerd[1487]: time="2025-08-13T00:16:44.521172140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.724605681s" Aug 13 00:16:44.521401 containerd[1487]: time="2025-08-13T00:16:44.521219940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:16:44.526606 containerd[1487]: time="2025-08-13T00:16:44.526459575Z" level=info msg="CreateContainer within sandbox \"84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:16:44.546101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689578835.mount: Deactivated successfully. 
Aug 13 00:16:44.547766 containerd[1487]: time="2025-08-13T00:16:44.547507235Z" level=info msg="CreateContainer within sandbox \"84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e187b2e6817cb6c18a91c2bafeb4807a191d70e306090d553cc8920fa5de451a\"" Aug 13 00:16:44.548341 containerd[1487]: time="2025-08-13T00:16:44.548224760Z" level=info msg="StartContainer for \"e187b2e6817cb6c18a91c2bafeb4807a191d70e306090d553cc8920fa5de451a\"" Aug 13 00:16:44.590583 systemd[1]: Started cri-containerd-e187b2e6817cb6c18a91c2bafeb4807a191d70e306090d553cc8920fa5de451a.scope - libcontainer container e187b2e6817cb6c18a91c2bafeb4807a191d70e306090d553cc8920fa5de451a. Aug 13 00:16:44.640197 containerd[1487]: time="2025-08-13T00:16:44.640001731Z" level=info msg="StartContainer for \"e187b2e6817cb6c18a91c2bafeb4807a191d70e306090d553cc8920fa5de451a\" returns successfully" Aug 13 00:16:44.644617 containerd[1487]: time="2025-08-13T00:16:44.643942877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:16:44.754671 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Aug 13 00:16:46.666952 containerd[1487]: time="2025-08-13T00:16:46.666443574Z" level=info msg="StopPodSandbox for \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\"" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.735 [INFO][4146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.737 [INFO][4146] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" iface="eth0" netns="/var/run/netns/cni-9c77a1ba-bfb5-efb8-6b15-1700a6bf4ffc" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.739 [INFO][4146] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" iface="eth0" netns="/var/run/netns/cni-9c77a1ba-bfb5-efb8-6b15-1700a6bf4ffc" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.739 [INFO][4146] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" iface="eth0" netns="/var/run/netns/cni-9c77a1ba-bfb5-efb8-6b15-1700a6bf4ffc" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.739 [INFO][4146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.739 [INFO][4146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.772 [INFO][4153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.772 [INFO][4153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.773 [INFO][4153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.790 [WARNING][4153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.790 [INFO][4153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.801 [INFO][4153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:46.809167 containerd[1487]: 2025-08-13 00:16:46.804 [INFO][4146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:16:46.809715 containerd[1487]: time="2025-08-13T00:16:46.809426695Z" level=info msg="TearDown network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\" successfully" Aug 13 00:16:46.809715 containerd[1487]: time="2025-08-13T00:16:46.809455535Z" level=info msg="StopPodSandbox for \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\" returns successfully" Aug 13 00:16:46.815415 containerd[1487]: time="2025-08-13T00:16:46.815374334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765654ff88-wtksk,Uid:58f3844d-62f0-42cd-b23f-5ade8a1b059c,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:16:46.816152 systemd[1]: run-netns-cni\x2d9c77a1ba\x2dbfb5\x2defb8\x2d6b15\x2d1700a6bf4ffc.mount: Deactivated successfully. 
Aug 13 00:16:47.014804 systemd-networkd[1399]: cali2b418482583: Link UP Aug 13 00:16:47.015522 systemd-networkd[1399]: cali2b418482583: Gained carrier Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.893 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0 calico-apiserver-765654ff88- calico-apiserver 58f3844d-62f0-42cd-b23f-5ade8a1b059c 937 0 2025-08-13 00:16:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:765654ff88 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-3-d55e308663 calico-apiserver-765654ff88-wtksk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b418482583 [] [] }} ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-wtksk" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.893 [INFO][4164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-wtksk" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.931 [INFO][4176] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" HandleID="k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 
00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.931 [INFO][4176] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" HandleID="k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-3-d55e308663", "pod":"calico-apiserver-765654ff88-wtksk", "timestamp":"2025-08-13 00:16:46.931636283 +0000 UTC"}, Hostname:"ci-4081-3-5-3-d55e308663", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.931 [INFO][4176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.932 [INFO][4176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.932 [INFO][4176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-3-d55e308663' Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.946 [INFO][4176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.955 [INFO][4176] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.967 [INFO][4176] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.971 [INFO][4176] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.974 [INFO][4176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.974 [INFO][4176] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.977 [INFO][4176] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9 Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:46.984 [INFO][4176] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:47.002 [INFO][4176] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.75.194/26] block=192.168.75.192/26 handle="k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:47.002 [INFO][4176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.194/26] handle="k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:47.002 [INFO][4176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:47.049374 containerd[1487]: 2025-08-13 00:16:47.002 [INFO][4176] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.194/26] IPv6=[] ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" HandleID="k8s-pod-network.4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:47.050192 containerd[1487]: 2025-08-13 00:16:47.007 [INFO][4164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-wtksk" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0", GenerateName:"calico-apiserver-765654ff88-", Namespace:"calico-apiserver", SelfLink:"", UID:"58f3844d-62f0-42cd-b23f-5ade8a1b059c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765654ff88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"", Pod:"calico-apiserver-765654ff88-wtksk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b418482583", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:47.050192 containerd[1487]: 2025-08-13 00:16:47.007 [INFO][4164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.194/32] ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-wtksk" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:47.050192 containerd[1487]: 2025-08-13 00:16:47.007 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b418482583 ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-wtksk" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:47.050192 containerd[1487]: 2025-08-13 00:16:47.016 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Namespace="calico-apiserver" 
Pod="calico-apiserver-765654ff88-wtksk" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:47.050192 containerd[1487]: 2025-08-13 00:16:47.019 [INFO][4164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-wtksk" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0", GenerateName:"calico-apiserver-765654ff88-", Namespace:"calico-apiserver", SelfLink:"", UID:"58f3844d-62f0-42cd-b23f-5ade8a1b059c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765654ff88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9", Pod:"calico-apiserver-765654ff88-wtksk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali2b418482583", MAC:"42:1a:2d:20:68:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:47.050192 containerd[1487]: 2025-08-13 00:16:47.044 [INFO][4164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-wtksk" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:16:47.096834 containerd[1487]: time="2025-08-13T00:16:47.096026613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:47.096834 containerd[1487]: time="2025-08-13T00:16:47.096562136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:47.096834 containerd[1487]: time="2025-08-13T00:16:47.096579056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:47.097785 containerd[1487]: time="2025-08-13T00:16:47.096831898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:47.149500 systemd[1]: Started cri-containerd-4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9.scope - libcontainer container 4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9. 
Aug 13 00:16:47.207989 containerd[1487]: time="2025-08-13T00:16:47.207903603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765654ff88-wtksk,Uid:58f3844d-62f0-42cd-b23f-5ade8a1b059c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9\"" Aug 13 00:16:47.524830 containerd[1487]: time="2025-08-13T00:16:47.524729812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:47.526193 containerd[1487]: time="2025-08-13T00:16:47.526119981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:16:47.527244 containerd[1487]: time="2025-08-13T00:16:47.527165188Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:47.531106 containerd[1487]: time="2025-08-13T00:16:47.531028052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:47.533405 containerd[1487]: time="2025-08-13T00:16:47.532203860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.888203782s" Aug 13 00:16:47.533405 containerd[1487]: time="2025-08-13T00:16:47.532256180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference 
\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:16:47.535077 containerd[1487]: time="2025-08-13T00:16:47.534793156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:16:47.538048 containerd[1487]: time="2025-08-13T00:16:47.537955296Z" level=info msg="CreateContainer within sandbox \"84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:16:47.564595 containerd[1487]: time="2025-08-13T00:16:47.564514345Z" level=info msg="CreateContainer within sandbox \"84271baa6e072a8151583bab7fdd0a25fc56b6ee13187593d814c167c339f47f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8c7d73cbf280ebe6805c12e72f069ef698ea8f22138d06e5e70600eed3e81d5b\"" Aug 13 00:16:47.566415 containerd[1487]: time="2025-08-13T00:16:47.566312196Z" level=info msg="StartContainer for \"8c7d73cbf280ebe6805c12e72f069ef698ea8f22138d06e5e70600eed3e81d5b\"" Aug 13 00:16:47.602653 systemd[1]: Started cri-containerd-8c7d73cbf280ebe6805c12e72f069ef698ea8f22138d06e5e70600eed3e81d5b.scope - libcontainer container 8c7d73cbf280ebe6805c12e72f069ef698ea8f22138d06e5e70600eed3e81d5b. Aug 13 00:16:47.640556 containerd[1487]: time="2025-08-13T00:16:47.640499747Z" level=info msg="StartContainer for \"8c7d73cbf280ebe6805c12e72f069ef698ea8f22138d06e5e70600eed3e81d5b\" returns successfully" Aug 13 00:16:47.667700 containerd[1487]: time="2025-08-13T00:16:47.666830474Z" level=info msg="StopPodSandbox for \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\"" Aug 13 00:16:47.795308 systemd[1]: run-containerd-runc-k8s.io-4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9-runc.vvmuse.mount: Deactivated successfully. Aug 13 00:16:47.795433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687924977.mount: Deactivated successfully. 
Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.749 [INFO][4280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.753 [INFO][4280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" iface="eth0" netns="/var/run/netns/cni-541df9c8-b030-153b-4ac3-7ceecd950ca4" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.753 [INFO][4280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" iface="eth0" netns="/var/run/netns/cni-541df9c8-b030-153b-4ac3-7ceecd950ca4" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.755 [INFO][4280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" iface="eth0" netns="/var/run/netns/cni-541df9c8-b030-153b-4ac3-7ceecd950ca4" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.755 [INFO][4280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.755 [INFO][4280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.780 [INFO][4290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.780 
[INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.780 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.796 [WARNING][4290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.796 [INFO][4290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.800 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:47.803834 containerd[1487]: 2025-08-13 00:16:47.801 [INFO][4280] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:16:47.805897 containerd[1487]: time="2025-08-13T00:16:47.805392753Z" level=info msg="TearDown network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\" successfully" Aug 13 00:16:47.805897 containerd[1487]: time="2025-08-13T00:16:47.805446433Z" level=info msg="StopPodSandbox for \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\" returns successfully" Aug 13 00:16:47.808694 containerd[1487]: time="2025-08-13T00:16:47.806828002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765654ff88-hg6lz,Uid:27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:16:47.808389 systemd[1]: run-netns-cni\x2d541df9c8\x2db030\x2d153b\x2d4ac3\x2d7ceecd950ca4.mount: Deactivated successfully. Aug 13 00:16:47.987853 systemd-networkd[1399]: calieffb97a9ea9: Link UP Aug 13 00:16:47.988038 systemd-networkd[1399]: calieffb97a9ea9: Gained carrier Aug 13 00:16:48.015773 kubelet[2615]: I0813 00:16:48.014238 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f6b686755-wwxf5" podStartSLOduration=2.274233247 podStartE2EDuration="7.014213676s" podCreationTimestamp="2025-08-13 00:16:41 +0000 UTC" firstStartedPulling="2025-08-13 00:16:42.794343524 +0000 UTC m=+43.268857037" lastFinishedPulling="2025-08-13 00:16:47.534323953 +0000 UTC m=+48.008837466" observedRunningTime="2025-08-13 00:16:47.960814619 +0000 UTC m=+48.435328132" watchObservedRunningTime="2025-08-13 00:16:48.014213676 +0000 UTC m=+48.488727189" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.870 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0 calico-apiserver-765654ff88- calico-apiserver 27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66 949 0 
2025-08-13 00:16:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:765654ff88 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-3-d55e308663 calico-apiserver-765654ff88-hg6lz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieffb97a9ea9 [] [] }} ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-hg6lz" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.870 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-hg6lz" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.905 [INFO][4308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" HandleID="k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.905 [INFO][4308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" HandleID="k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa3f0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-3-d55e308663", "pod":"calico-apiserver-765654ff88-hg6lz", "timestamp":"2025-08-13 00:16:47.905697829 +0000 UTC"}, Hostname:"ci-4081-3-5-3-d55e308663", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.906 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.906 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.906 [INFO][4308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-3-d55e308663' Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.918 [INFO][4308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.929 [INFO][4308] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.938 [INFO][4308] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.941 [INFO][4308] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.947 [INFO][4308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.947 [INFO][4308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 
handle="k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.951 [INFO][4308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980 Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.964 [INFO][4308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.974 [INFO][4308] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.195/26] block=192.168.75.192/26 handle="k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.975 [INFO][4308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.195/26] handle="k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.975 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:16:48.020463 containerd[1487]: 2025-08-13 00:16:47.975 [INFO][4308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.195/26] IPv6=[] ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" HandleID="k8s-pod-network.476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:48.021042 containerd[1487]: 2025-08-13 00:16:47.981 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-hg6lz" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0", GenerateName:"calico-apiserver-765654ff88-", Namespace:"calico-apiserver", SelfLink:"", UID:"27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765654ff88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"", Pod:"calico-apiserver-765654ff88-hg6lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.75.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieffb97a9ea9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:48.021042 containerd[1487]: 2025-08-13 00:16:47.981 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.195/32] ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-hg6lz" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:48.021042 containerd[1487]: 2025-08-13 00:16:47.981 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieffb97a9ea9 ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-hg6lz" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:48.021042 containerd[1487]: 2025-08-13 00:16:47.993 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-hg6lz" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:48.021042 containerd[1487]: 2025-08-13 00:16:47.993 [INFO][4297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-hg6lz" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0", GenerateName:"calico-apiserver-765654ff88-", Namespace:"calico-apiserver", SelfLink:"", UID:"27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765654ff88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980", Pod:"calico-apiserver-765654ff88-hg6lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieffb97a9ea9", MAC:"ae:5b:8d:0a:74:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:48.021042 containerd[1487]: 2025-08-13 00:16:48.011 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980" Namespace="calico-apiserver" Pod="calico-apiserver-765654ff88-hg6lz" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:16:48.045378 containerd[1487]: time="2025-08-13T00:16:48.045211670Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:48.045378 containerd[1487]: time="2025-08-13T00:16:48.045322791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:48.045378 containerd[1487]: time="2025-08-13T00:16:48.045341951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:48.046175 containerd[1487]: time="2025-08-13T00:16:48.045431911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:48.070333 systemd[1]: Started cri-containerd-476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980.scope - libcontainer container 476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980. Aug 13 00:16:48.113628 containerd[1487]: time="2025-08-13T00:16:48.113557937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765654ff88-hg6lz,Uid:27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980\"" Aug 13 00:16:48.274733 systemd-networkd[1399]: cali2b418482583: Gained IPv6LL Aug 13 00:16:48.666529 containerd[1487]: time="2025-08-13T00:16:48.666011587Z" level=info msg="StopPodSandbox for \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\"" Aug 13 00:16:48.667896 containerd[1487]: time="2025-08-13T00:16:48.666679471Z" level=info msg="StopPodSandbox for \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\"" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.727 [INFO][4388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:16:48.800879 containerd[1487]: 
2025-08-13 00:16:48.727 [INFO][4388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" iface="eth0" netns="/var/run/netns/cni-bf541e44-4e59-b3d5-3a82-d8cc4dfb48cd" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.728 [INFO][4388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" iface="eth0" netns="/var/run/netns/cni-bf541e44-4e59-b3d5-3a82-d8cc4dfb48cd" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.728 [INFO][4388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" iface="eth0" netns="/var/run/netns/cni-bf541e44-4e59-b3d5-3a82-d8cc4dfb48cd" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.728 [INFO][4388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.729 [INFO][4388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.772 [INFO][4402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.772 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.772 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.784 [WARNING][4402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.784 [INFO][4402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.791 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:48.800879 containerd[1487]: 2025-08-13 00:16:48.795 [INFO][4388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:16:48.805116 systemd[1]: run-netns-cni\x2dbf541e44\x2d4e59\x2db3d5\x2d3a82\x2dd8cc4dfb48cd.mount: Deactivated successfully. 
Aug 13 00:16:48.806679 containerd[1487]: time="2025-08-13T00:16:48.806390224Z" level=info msg="TearDown network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\" successfully" Aug 13 00:16:48.806679 containerd[1487]: time="2025-08-13T00:16:48.806429744Z" level=info msg="StopPodSandbox for \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\" returns successfully" Aug 13 00:16:48.808010 containerd[1487]: time="2025-08-13T00:16:48.807645232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qctbn,Uid:15e02000-307e-46ce-8cf9-fed7fd6d6dd8,Namespace:calico-system,Attempt:1,}" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.737 [INFO][4389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.737 [INFO][4389] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" iface="eth0" netns="/var/run/netns/cni-357009c7-4e72-a780-b298-f69e695453f0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.737 [INFO][4389] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" iface="eth0" netns="/var/run/netns/cni-357009c7-4e72-a780-b298-f69e695453f0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.738 [INFO][4389] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" iface="eth0" netns="/var/run/netns/cni-357009c7-4e72-a780-b298-f69e695453f0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.738 [INFO][4389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.738 [INFO][4389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.774 [INFO][4404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.775 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.792 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.813 [WARNING][4404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.813 [INFO][4404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.817 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:48.824988 containerd[1487]: 2025-08-13 00:16:48.820 [INFO][4389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:16:48.826222 containerd[1487]: time="2025-08-13T00:16:48.825440863Z" level=info msg="TearDown network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\" successfully" Aug 13 00:16:48.826222 containerd[1487]: time="2025-08-13T00:16:48.825487183Z" level=info msg="StopPodSandbox for \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\" returns successfully" Aug 13 00:16:48.830487 containerd[1487]: time="2025-08-13T00:16:48.828908844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnn7j,Uid:2776ab78-3da7-4b02-86ed-3ed0ba1a8679,Namespace:kube-system,Attempt:1,}" Aug 13 00:16:48.829255 systemd[1]: run-netns-cni\x2d357009c7\x2d4e72\x2da780\x2db298\x2df69e695453f0.mount: Deactivated successfully. 
Aug 13 00:16:49.026823 systemd-networkd[1399]: calid31c84a3ae0: Link UP Aug 13 00:16:49.030526 systemd-networkd[1399]: calid31c84a3ae0: Gained carrier Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.900 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0 csi-node-driver- calico-system 15e02000-307e-46ce-8cf9-fed7fd6d6dd8 966 0 2025-08-13 00:16:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-3-d55e308663 csi-node-driver-qctbn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid31c84a3ae0 [] [] }} ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Namespace="calico-system" Pod="csi-node-driver-qctbn" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.900 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Namespace="calico-system" Pod="csi-node-driver-qctbn" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.953 [INFO][4439] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" HandleID="k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.953 [INFO][4439] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" HandleID="k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-3-d55e308663", "pod":"csi-node-driver-qctbn", "timestamp":"2025-08-13 00:16:48.953741664 +0000 UTC"}, Hostname:"ci-4081-3-5-3-d55e308663", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.953 [INFO][4439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.954 [INFO][4439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.954 [INFO][4439] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-3-d55e308663' Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.969 [INFO][4439] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.976 [INFO][4439] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.983 [INFO][4439] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.988 [INFO][4439] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.991 [INFO][4439] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.991 [INFO][4439] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:48.994 [INFO][4439] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:49.000 [INFO][4439] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:49.012 [INFO][4439] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.75.196/26] block=192.168.75.192/26 handle="k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:49.012 [INFO][4439] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.196/26] handle="k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:49.013 [INFO][4439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:49.055104 containerd[1487]: 2025-08-13 00:16:49.013 [INFO][4439] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.196/26] IPv6=[] ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" HandleID="k8s-pod-network.ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:49.056600 containerd[1487]: 2025-08-13 00:16:49.018 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Namespace="calico-system" Pod="csi-node-driver-qctbn" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15e02000-307e-46ce-8cf9-fed7fd6d6dd8", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"", Pod:"csi-node-driver-qctbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid31c84a3ae0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:49.056600 containerd[1487]: 2025-08-13 00:16:49.018 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.196/32] ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Namespace="calico-system" Pod="csi-node-driver-qctbn" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:49.056600 containerd[1487]: 2025-08-13 00:16:49.018 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid31c84a3ae0 ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Namespace="calico-system" Pod="csi-node-driver-qctbn" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:49.056600 containerd[1487]: 2025-08-13 00:16:49.030 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Namespace="calico-system" Pod="csi-node-driver-qctbn" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:49.056600 
containerd[1487]: 2025-08-13 00:16:49.031 [INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Namespace="calico-system" Pod="csi-node-driver-qctbn" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15e02000-307e-46ce-8cf9-fed7fd6d6dd8", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba", Pod:"csi-node-driver-qctbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid31c84a3ae0", MAC:"a2:63:c6:e9:37:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:49.056600 containerd[1487]: 
2025-08-13 00:16:49.052 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba" Namespace="calico-system" Pod="csi-node-driver-qctbn" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:16:49.093052 containerd[1487]: time="2025-08-13T00:16:49.092898244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:49.093052 containerd[1487]: time="2025-08-13T00:16:49.092977925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:49.093052 containerd[1487]: time="2025-08-13T00:16:49.092990245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:49.093555 containerd[1487]: time="2025-08-13T00:16:49.093179726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:49.122508 systemd[1]: Started cri-containerd-ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba.scope - libcontainer container ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba. 
Aug 13 00:16:49.137231 systemd-networkd[1399]: cali767759facbe: Link UP Aug 13 00:16:49.138250 systemd-networkd[1399]: cali767759facbe: Gained carrier Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:48.926 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0 coredns-668d6bf9bc- kube-system 2776ab78-3da7-4b02-86ed-3ed0ba1a8679 967 0 2025-08-13 00:16:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-3-d55e308663 coredns-668d6bf9bc-hnn7j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali767759facbe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnn7j" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:48.926 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnn7j" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:48.967 [INFO][4444] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" HandleID="k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:48.967 [INFO][4444] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" HandleID="k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-3-d55e308663", "pod":"coredns-668d6bf9bc-hnn7j", "timestamp":"2025-08-13 00:16:48.967342389 +0000 UTC"}, Hostname:"ci-4081-3-5-3-d55e308663", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:48.967 [INFO][4444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.013 [INFO][4444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.013 [INFO][4444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-3-d55e308663' Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.069 [INFO][4444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.081 [INFO][4444] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.090 [INFO][4444] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.094 [INFO][4444] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.098 [INFO][4444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.098 [INFO][4444] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.101 [INFO][4444] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.113 [INFO][4444] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.128 [INFO][4444] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.75.197/26] block=192.168.75.192/26 handle="k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.128 [INFO][4444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.197/26] handle="k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.128 [INFO][4444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:49.170113 containerd[1487]: 2025-08-13 00:16:49.128 [INFO][4444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.197/26] IPv6=[] ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" HandleID="k8s-pod-network.1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:49.170756 containerd[1487]: 2025-08-13 00:16:49.132 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnn7j" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2776ab78-3da7-4b02-86ed-3ed0ba1a8679", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"", Pod:"coredns-668d6bf9bc-hnn7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali767759facbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:49.170756 containerd[1487]: 2025-08-13 00:16:49.132 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.197/32] ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnn7j" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:49.170756 containerd[1487]: 2025-08-13 00:16:49.132 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali767759facbe ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnn7j" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:49.170756 containerd[1487]: 2025-08-13 00:16:49.138 [INFO][4430] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnn7j" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:49.170756 containerd[1487]: 2025-08-13 00:16:49.139 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnn7j" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2776ab78-3da7-4b02-86ed-3ed0ba1a8679", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f", Pod:"coredns-668d6bf9bc-hnn7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali767759facbe", 
MAC:"16:e9:cb:e2:aa:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:49.170756 containerd[1487]: 2025-08-13 00:16:49.162 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnn7j" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:49.188760 containerd[1487]: time="2025-08-13T00:16:49.186129898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qctbn,Uid:15e02000-307e-46ce-8cf9-fed7fd6d6dd8,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba\"" Aug 13 00:16:49.209331 containerd[1487]: time="2025-08-13T00:16:49.209197600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:49.209663 containerd[1487]: time="2025-08-13T00:16:49.209387281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:49.209663 containerd[1487]: time="2025-08-13T00:16:49.209406001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:49.209740 containerd[1487]: time="2025-08-13T00:16:49.209664843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:49.231512 systemd[1]: Started cri-containerd-1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f.scope - libcontainer container 1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f. Aug 13 00:16:49.271506 containerd[1487]: time="2025-08-13T00:16:49.271439662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnn7j,Uid:2776ab78-3da7-4b02-86ed-3ed0ba1a8679,Namespace:kube-system,Attempt:1,} returns sandbox id \"1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f\"" Aug 13 00:16:49.275459 containerd[1487]: time="2025-08-13T00:16:49.275416847Z" level=info msg="CreateContainer within sandbox \"1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:16:49.293298 containerd[1487]: time="2025-08-13T00:16:49.292373631Z" level=info msg="CreateContainer within sandbox \"1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4492c7a89aff1a576fba08968f604662cc950b5121f8c01bef986b6ff5bd2bdd\"" Aug 13 00:16:49.294348 containerd[1487]: time="2025-08-13T00:16:49.293655079Z" level=info msg="StartContainer for \"4492c7a89aff1a576fba08968f604662cc950b5121f8c01bef986b6ff5bd2bdd\"" Aug 13 00:16:49.298810 systemd-networkd[1399]: calieffb97a9ea9: Gained IPv6LL Aug 13 00:16:49.339659 systemd[1]: Started cri-containerd-4492c7a89aff1a576fba08968f604662cc950b5121f8c01bef986b6ff5bd2bdd.scope - libcontainer container 4492c7a89aff1a576fba08968f604662cc950b5121f8c01bef986b6ff5bd2bdd. 
Aug 13 00:16:49.375851 containerd[1487]: time="2025-08-13T00:16:49.375738304Z" level=info msg="StartContainer for \"4492c7a89aff1a576fba08968f604662cc950b5121f8c01bef986b6ff5bd2bdd\" returns successfully" Aug 13 00:16:49.997024 kubelet[2615]: I0813 00:16:49.996017 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hnn7j" podStartSLOduration=43.995984639 podStartE2EDuration="43.995984639s" podCreationTimestamp="2025-08-13 00:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:49.975392992 +0000 UTC m=+50.449906545" watchObservedRunningTime="2025-08-13 00:16:49.995984639 +0000 UTC m=+50.470498192" Aug 13 00:16:50.258436 systemd-networkd[1399]: calid31c84a3ae0: Gained IPv6LL Aug 13 00:16:50.666769 containerd[1487]: time="2025-08-13T00:16:50.666162659Z" level=info msg="StopPodSandbox for \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\"" Aug 13 00:16:50.668860 containerd[1487]: time="2025-08-13T00:16:50.667368946Z" level=info msg="StopPodSandbox for \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\"" Aug 13 00:16:50.771366 systemd-networkd[1399]: cali767759facbe: Gained IPv6LL Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.743 [INFO][4615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.743 [INFO][4615] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" iface="eth0" netns="/var/run/netns/cni-999e07d6-8af2-6f76-3047-70bcd8ae01da" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.744 [INFO][4615] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" iface="eth0" netns="/var/run/netns/cni-999e07d6-8af2-6f76-3047-70bcd8ae01da" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.744 [INFO][4615] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" iface="eth0" netns="/var/run/netns/cni-999e07d6-8af2-6f76-3047-70bcd8ae01da" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.744 [INFO][4615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.744 [INFO][4615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.776 [INFO][4629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.777 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.777 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.787 [WARNING][4629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.787 [INFO][4629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.789 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:50.795796 containerd[1487]: 2025-08-13 00:16:50.792 [INFO][4615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:16:50.797822 containerd[1487]: time="2025-08-13T00:16:50.796209007Z" level=info msg="TearDown network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\" successfully" Aug 13 00:16:50.797822 containerd[1487]: time="2025-08-13T00:16:50.796245567Z" level=info msg="StopPodSandbox for \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\" returns successfully" Aug 13 00:16:50.801609 containerd[1487]: time="2025-08-13T00:16:50.800520793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nlr8k,Uid:00aebb23-204a-4ca9-bec2-40b1eea2bd4a,Namespace:calico-system,Attempt:1,}" Aug 13 00:16:50.803755 systemd[1]: run-netns-cni\x2d999e07d6\x2d8af2\x2d6f76\x2d3047\x2d70bcd8ae01da.mount: Deactivated successfully. 
Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.749 [INFO][4616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.750 [INFO][4616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" iface="eth0" netns="/var/run/netns/cni-a2680a6a-b67d-19ea-7091-500bff57d925" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.751 [INFO][4616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" iface="eth0" netns="/var/run/netns/cni-a2680a6a-b67d-19ea-7091-500bff57d925" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.753 [INFO][4616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" iface="eth0" netns="/var/run/netns/cni-a2680a6a-b67d-19ea-7091-500bff57d925" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.753 [INFO][4616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.753 [INFO][4616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.783 [INFO][4634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.783 
[INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.790 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.808 [WARNING][4634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.808 [INFO][4634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.811 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:50.826629 containerd[1487]: 2025-08-13 00:16:50.815 [INFO][4616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:16:50.826629 containerd[1487]: time="2025-08-13T00:16:50.825359703Z" level=info msg="TearDown network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\" successfully" Aug 13 00:16:50.826629 containerd[1487]: time="2025-08-13T00:16:50.825395304Z" level=info msg="StopPodSandbox for \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\" returns successfully" Aug 13 00:16:50.831241 systemd[1]: run-netns-cni\x2da2680a6a\x2db67d\x2d19ea\x2d7091\x2d500bff57d925.mount: Deactivated successfully. 
Aug 13 00:16:50.836199 containerd[1487]: time="2025-08-13T00:16:50.836162889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xvrh7,Uid:fad88c9c-63aa-46b4-a961-f6ee4056948d,Namespace:kube-system,Attempt:1,}" Aug 13 00:16:50.934412 kubelet[2615]: I0813 00:16:50.934270 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:16:51.043671 systemd-networkd[1399]: calid8706d58301: Link UP Aug 13 00:16:51.046771 systemd-networkd[1399]: calid8706d58301: Gained carrier Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.899 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0 goldmane-768f4c5c69- calico-system 00aebb23-204a-4ca9-bec2-40b1eea2bd4a 992 0 2025-08-13 00:16:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-3-d55e308663 goldmane-768f4c5c69-nlr8k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid8706d58301 [] [] }} ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Namespace="calico-system" Pod="goldmane-768f4c5c69-nlr8k" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.899 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Namespace="calico-system" Pod="goldmane-768f4c5c69-nlr8k" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.961 [INFO][4666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" HandleID="k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.962 [INFO][4666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" HandleID="k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3030), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-3-d55e308663", "pod":"goldmane-768f4c5c69-nlr8k", "timestamp":"2025-08-13 00:16:50.961469648 +0000 UTC"}, Hostname:"ci-4081-3-5-3-d55e308663", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.962 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.962 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.962 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-3-d55e308663' Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.981 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.988 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:50.997 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.001 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.006 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.006 [INFO][4666] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.010 [INFO][4666] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1 Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.018 [INFO][4666] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.033 [INFO][4666] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.75.198/26] block=192.168.75.192/26 handle="k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.034 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.198/26] handle="k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.034 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:51.081398 containerd[1487]: 2025-08-13 00:16:51.034 [INFO][4666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.198/26] IPv6=[] ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" HandleID="k8s-pod-network.fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:51.081998 containerd[1487]: 2025-08-13 00:16:51.039 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Namespace="calico-system" Pod="goldmane-768f4c5c69-nlr8k" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"00aebb23-204a-4ca9-bec2-40b1eea2bd4a", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"", Pod:"goldmane-768f4c5c69-nlr8k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid8706d58301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:51.081998 containerd[1487]: 2025-08-13 00:16:51.039 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.198/32] ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Namespace="calico-system" Pod="goldmane-768f4c5c69-nlr8k" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:51.081998 containerd[1487]: 2025-08-13 00:16:51.039 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8706d58301 ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Namespace="calico-system" Pod="goldmane-768f4c5c69-nlr8k" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:51.081998 containerd[1487]: 2025-08-13 00:16:51.047 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Namespace="calico-system" Pod="goldmane-768f4c5c69-nlr8k" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:51.081998 containerd[1487]: 2025-08-13 00:16:51.048 [INFO][4643] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Namespace="calico-system" Pod="goldmane-768f4c5c69-nlr8k" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"00aebb23-204a-4ca9-bec2-40b1eea2bd4a", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1", Pod:"goldmane-768f4c5c69-nlr8k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid8706d58301", MAC:"2e:b3:19:e1:d8:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:51.081998 containerd[1487]: 2025-08-13 00:16:51.072 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1" Namespace="calico-system" Pod="goldmane-768f4c5c69-nlr8k" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:16:51.152402 containerd[1487]: time="2025-08-13T00:16:51.151947028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:51.152402 containerd[1487]: time="2025-08-13T00:16:51.152015949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:51.152402 containerd[1487]: time="2025-08-13T00:16:51.152036149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:51.152402 containerd[1487]: time="2025-08-13T00:16:51.152159870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:51.170531 systemd-networkd[1399]: cali9b98450fc81: Link UP Aug 13 00:16:51.178516 systemd-networkd[1399]: cali9b98450fc81: Gained carrier Aug 13 00:16:51.212616 systemd[1]: Started cri-containerd-fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1.scope - libcontainer container fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1. 
Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:50.950 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0 coredns-668d6bf9bc- kube-system fad88c9c-63aa-46b4-a961-f6ee4056948d 993 0 2025-08-13 00:16:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-3-d55e308663 coredns-668d6bf9bc-xvrh7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b98450fc81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Namespace="kube-system" Pod="coredns-668d6bf9bc-xvrh7" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:50.951 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Namespace="kube-system" Pod="coredns-668d6bf9bc-xvrh7" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.066 [INFO][4678] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" HandleID="k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.066 [INFO][4678] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" HandleID="k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" 
Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004deb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-3-d55e308663", "pod":"coredns-668d6bf9bc-xvrh7", "timestamp":"2025-08-13 00:16:51.066329117 +0000 UTC"}, Hostname:"ci-4081-3-5-3-d55e308663", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.066 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.066 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.066 [INFO][4678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-3-d55e308663' Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.091 [INFO][4678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.102 [INFO][4678] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.115 [INFO][4678] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.121 [INFO][4678] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.132 [INFO][4678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 
containerd[1487]: 2025-08-13 00:16:51.132 [INFO][4678] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.136 [INFO][4678] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4 Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.143 [INFO][4678] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.157 [INFO][4678] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.199/26] block=192.168.75.192/26 handle="k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.157 [INFO][4678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.199/26] handle="k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.157 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:16:51.220874 containerd[1487]: 2025-08-13 00:16:51.157 [INFO][4678] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.199/26] IPv6=[] ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" HandleID="k8s-pod-network.9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:51.221438 containerd[1487]: 2025-08-13 00:16:51.162 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Namespace="kube-system" Pod="coredns-668d6bf9bc-xvrh7" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fad88c9c-63aa-46b4-a961-f6ee4056948d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"", Pod:"coredns-668d6bf9bc-xvrh7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali9b98450fc81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:51.221438 containerd[1487]: 2025-08-13 00:16:51.163 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.199/32] ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Namespace="kube-system" Pod="coredns-668d6bf9bc-xvrh7" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:51.221438 containerd[1487]: 2025-08-13 00:16:51.163 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b98450fc81 ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Namespace="kube-system" Pod="coredns-668d6bf9bc-xvrh7" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:51.221438 containerd[1487]: 2025-08-13 00:16:51.176 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Namespace="kube-system" Pod="coredns-668d6bf9bc-xvrh7" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:51.221438 containerd[1487]: 2025-08-13 00:16:51.180 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Namespace="kube-system" Pod="coredns-668d6bf9bc-xvrh7" 
WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fad88c9c-63aa-46b4-a961-f6ee4056948d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4", Pod:"coredns-668d6bf9bc-xvrh7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b98450fc81", MAC:"ba:db:28:41:55:1b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:51.221438 
containerd[1487]: 2025-08-13 00:16:51.206 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4" Namespace="kube-system" Pod="coredns-668d6bf9bc-xvrh7" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:16:51.296007 containerd[1487]: time="2025-08-13T00:16:51.295921248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:51.297579 containerd[1487]: time="2025-08-13T00:16:51.295982888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:51.297579 containerd[1487]: time="2025-08-13T00:16:51.295999608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:51.297579 containerd[1487]: time="2025-08-13T00:16:51.296089049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:51.335499 systemd[1]: Started cri-containerd-9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4.scope - libcontainer container 9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4. 
Aug 13 00:16:51.340807 containerd[1487]: time="2025-08-13T00:16:51.340745915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nlr8k,Uid:00aebb23-204a-4ca9-bec2-40b1eea2bd4a,Namespace:calico-system,Attempt:1,} returns sandbox id \"fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1\"" Aug 13 00:16:51.421406 containerd[1487]: time="2025-08-13T00:16:51.421354276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xvrh7,Uid:fad88c9c-63aa-46b4-a961-f6ee4056948d,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4\"" Aug 13 00:16:51.429319 containerd[1487]: time="2025-08-13T00:16:51.429141323Z" level=info msg="CreateContainer within sandbox \"9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:16:51.465568 containerd[1487]: time="2025-08-13T00:16:51.465341339Z" level=info msg="CreateContainer within sandbox \"9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e24e4bd3ead2b1995fb22dbf0703cbe13b17f4f4b67ec844293f6cd545f6ba9\"" Aug 13 00:16:51.471309 containerd[1487]: time="2025-08-13T00:16:51.469949886Z" level=info msg="StartContainer for \"6e24e4bd3ead2b1995fb22dbf0703cbe13b17f4f4b67ec844293f6cd545f6ba9\"" Aug 13 00:16:51.531581 systemd[1]: Started cri-containerd-6e24e4bd3ead2b1995fb22dbf0703cbe13b17f4f4b67ec844293f6cd545f6ba9.scope - libcontainer container 6e24e4bd3ead2b1995fb22dbf0703cbe13b17f4f4b67ec844293f6cd545f6ba9. 
Aug 13 00:16:51.599703 containerd[1487]: time="2025-08-13T00:16:51.599541580Z" level=info msg="StartContainer for \"6e24e4bd3ead2b1995fb22dbf0703cbe13b17f4f4b67ec844293f6cd545f6ba9\" returns successfully" Aug 13 00:16:51.670012 containerd[1487]: time="2025-08-13T00:16:51.669571118Z" level=info msg="StopPodSandbox for \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\"" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.754 [INFO][4884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.755 [INFO][4884] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" iface="eth0" netns="/var/run/netns/cni-992871dd-67d6-749a-d872-bec097d351ef" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.756 [INFO][4884] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" iface="eth0" netns="/var/run/netns/cni-992871dd-67d6-749a-d872-bec097d351ef" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.756 [INFO][4884] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" iface="eth0" netns="/var/run/netns/cni-992871dd-67d6-749a-d872-bec097d351ef" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.756 [INFO][4884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.756 [INFO][4884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.816 [INFO][4891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.817 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.817 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.836 [WARNING][4891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.836 [INFO][4891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.840 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:51.849593 containerd[1487]: 2025-08-13 00:16:51.844 [INFO][4884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:16:51.849593 containerd[1487]: time="2025-08-13T00:16:51.848740467Z" level=info msg="TearDown network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\" successfully" Aug 13 00:16:51.849593 containerd[1487]: time="2025-08-13T00:16:51.848771427Z" level=info msg="StopPodSandbox for \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\" returns successfully" Aug 13 00:16:51.853367 containerd[1487]: time="2025-08-13T00:16:51.852872052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b89966564-sv4v2,Uid:f9823515-dfc4-4fbb-a853-8b1f7dc08f4c,Namespace:calico-system,Attempt:1,}" Aug 13 00:16:51.855044 systemd[1]: run-netns-cni\x2d992871dd\x2d67d6\x2d749a\x2dd872\x2dbec097d351ef.mount: Deactivated successfully. 
Aug 13 00:16:52.039474 kubelet[2615]: I0813 00:16:52.039113 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xvrh7" podStartSLOduration=46.03909404 podStartE2EDuration="46.03909404s" podCreationTimestamp="2025-08-13 00:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:52.001001016 +0000 UTC m=+52.475514569" watchObservedRunningTime="2025-08-13 00:16:52.03909404 +0000 UTC m=+52.513607553" Aug 13 00:16:52.155771 systemd-networkd[1399]: cali0d2a4850113: Link UP Aug 13 00:16:52.156052 systemd-networkd[1399]: cali0d2a4850113: Gained carrier Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:51.948 [INFO][4899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0 calico-kube-controllers-b89966564- calico-system f9823515-dfc4-4fbb-a853-8b1f7dc08f4c 1009 0 2025-08-13 00:16:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b89966564 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-3-d55e308663 calico-kube-controllers-b89966564-sv4v2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0d2a4850113 [] [] }} ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Namespace="calico-system" Pod="calico-kube-controllers-b89966564-sv4v2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:51.950 [INFO][4899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Namespace="calico-system" Pod="calico-kube-controllers-b89966564-sv4v2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.029 [INFO][4911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" HandleID="k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.030 [INFO][4911] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" HandleID="k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-3-d55e308663", "pod":"calico-kube-controllers-b89966564-sv4v2", "timestamp":"2025-08-13 00:16:52.029625304 +0000 UTC"}, Hostname:"ci-4081-3-5-3-d55e308663", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.030 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.030 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.030 [INFO][4911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-3-d55e308663' Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.051 [INFO][4911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.060 [INFO][4911] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.077 [INFO][4911] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.088 [INFO][4911] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.096 [INFO][4911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.096 [INFO][4911] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.100 [INFO][4911] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.117 [INFO][4911] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.128 [INFO][4911] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.75.200/26] block=192.168.75.192/26 handle="k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.128 [INFO][4911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.200/26] handle="k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" host="ci-4081-3-5-3-d55e308663" Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.129 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:52.194375 containerd[1487]: 2025-08-13 00:16:52.129 [INFO][4911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.200/26] IPv6=[] ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" HandleID="k8s-pod-network.cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:52.195038 containerd[1487]: 2025-08-13 00:16:52.141 [INFO][4899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Namespace="calico-system" Pod="calico-kube-controllers-b89966564-sv4v2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0", GenerateName:"calico-kube-controllers-b89966564-", Namespace:"calico-system", SelfLink:"", UID:"f9823515-dfc4-4fbb-a853-8b1f7dc08f4c", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b89966564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"", Pod:"calico-kube-controllers-b89966564-sv4v2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0d2a4850113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:52.195038 containerd[1487]: 2025-08-13 00:16:52.143 [INFO][4899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.200/32] ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Namespace="calico-system" Pod="calico-kube-controllers-b89966564-sv4v2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:52.195038 containerd[1487]: 2025-08-13 00:16:52.143 [INFO][4899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d2a4850113 ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Namespace="calico-system" Pod="calico-kube-controllers-b89966564-sv4v2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:52.195038 containerd[1487]: 2025-08-13 00:16:52.162 [INFO][4899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Namespace="calico-system" Pod="calico-kube-controllers-b89966564-sv4v2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:52.195038 containerd[1487]: 2025-08-13 00:16:52.163 [INFO][4899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Namespace="calico-system" Pod="calico-kube-controllers-b89966564-sv4v2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0", GenerateName:"calico-kube-controllers-b89966564-", Namespace:"calico-system", SelfLink:"", UID:"f9823515-dfc4-4fbb-a853-8b1f7dc08f4c", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b89966564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef", Pod:"calico-kube-controllers-b89966564-sv4v2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.200/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0d2a4850113", MAC:"8e:f7:89:18:48:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:52.195038 containerd[1487]: 2025-08-13 00:16:52.187 [INFO][4899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef" Namespace="calico-system" Pod="calico-kube-controllers-b89966564-sv4v2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:16:52.266156 containerd[1487]: time="2025-08-13T00:16:52.263787722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:52.266156 containerd[1487]: time="2025-08-13T00:16:52.264768407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:52.266156 containerd[1487]: time="2025-08-13T00:16:52.264787927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:52.268883 containerd[1487]: time="2025-08-13T00:16:52.268637830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:52.312488 systemd[1]: Started cri-containerd-cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef.scope - libcontainer container cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef. 
Aug 13 00:16:52.389229 containerd[1487]: time="2025-08-13T00:16:52.389184219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b89966564-sv4v2,Uid:f9823515-dfc4-4fbb-a853-8b1f7dc08f4c,Namespace:calico-system,Attempt:1,} returns sandbox id \"cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef\"" Aug 13 00:16:52.517729 containerd[1487]: time="2025-08-13T00:16:52.517527654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:52.519595 containerd[1487]: time="2025-08-13T00:16:52.519496706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:16:52.520325 containerd[1487]: time="2025-08-13T00:16:52.520229750Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:52.523962 containerd[1487]: time="2025-08-13T00:16:52.523632850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:52.526243 containerd[1487]: time="2025-08-13T00:16:52.525189539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 4.990357623s" Aug 13 00:16:52.526243 containerd[1487]: time="2025-08-13T00:16:52.525253099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:16:52.528294 containerd[1487]: time="2025-08-13T00:16:52.527675474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:16:52.530202 containerd[1487]: time="2025-08-13T00:16:52.529635685Z" level=info msg="CreateContainer within sandbox \"4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:16:52.545827 containerd[1487]: time="2025-08-13T00:16:52.545636859Z" level=info msg="CreateContainer within sandbox \"4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b83e78387c00efb786b01c0742f308a740c1b715da8c6221d69a64f5af1fdffb\"" Aug 13 00:16:52.548232 containerd[1487]: time="2025-08-13T00:16:52.546827106Z" level=info msg="StartContainer for \"b83e78387c00efb786b01c0742f308a740c1b715da8c6221d69a64f5af1fdffb\"" Aug 13 00:16:52.564824 systemd-networkd[1399]: cali9b98450fc81: Gained IPv6LL Aug 13 00:16:52.584626 systemd[1]: Started cri-containerd-b83e78387c00efb786b01c0742f308a740c1b715da8c6221d69a64f5af1fdffb.scope - libcontainer container b83e78387c00efb786b01c0742f308a740c1b715da8c6221d69a64f5af1fdffb. 
Aug 13 00:16:52.634354 containerd[1487]: time="2025-08-13T00:16:52.634182100Z" level=info msg="StartContainer for \"b83e78387c00efb786b01c0742f308a740c1b715da8c6221d69a64f5af1fdffb\" returns successfully" Aug 13 00:16:52.898022 containerd[1487]: time="2025-08-13T00:16:52.897007606Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:52.898462 containerd[1487]: time="2025-08-13T00:16:52.898428294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:16:52.901532 containerd[1487]: time="2025-08-13T00:16:52.901488152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 373.532877ms" Aug 13 00:16:52.901665 containerd[1487]: time="2025-08-13T00:16:52.901650393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:16:52.903795 containerd[1487]: time="2025-08-13T00:16:52.903543365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:16:52.905352 containerd[1487]: time="2025-08-13T00:16:52.905108774Z" level=info msg="CreateContainer within sandbox \"476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:16:52.922934 containerd[1487]: time="2025-08-13T00:16:52.922848838Z" level=info msg="CreateContainer within sandbox \"476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"581a57281b814104d5fc225c662ef456c0ae39d2723b04d615e326156894de1e\"" Aug 13 00:16:52.924119 containerd[1487]: time="2025-08-13T00:16:52.924084205Z" level=info msg="StartContainer for \"581a57281b814104d5fc225c662ef456c0ae39d2723b04d615e326156894de1e\"" Aug 13 00:16:52.976121 systemd[1]: Started cri-containerd-581a57281b814104d5fc225c662ef456c0ae39d2723b04d615e326156894de1e.scope - libcontainer container 581a57281b814104d5fc225c662ef456c0ae39d2723b04d615e326156894de1e. Aug 13 00:16:53.010481 systemd-networkd[1399]: calid8706d58301: Gained IPv6LL Aug 13 00:16:53.048729 containerd[1487]: time="2025-08-13T00:16:53.048685254Z" level=info msg="StartContainer for \"581a57281b814104d5fc225c662ef456c0ae39d2723b04d615e326156894de1e\" returns successfully" Aug 13 00:16:53.202426 systemd-networkd[1399]: cali0d2a4850113: Gained IPv6LL Aug 13 00:16:54.001364 kubelet[2615]: I0813 00:16:54.001164 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:16:54.018253 kubelet[2615]: I0813 00:16:54.017572 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-765654ff88-wtksk" podStartSLOduration=29.701446382 podStartE2EDuration="35.01754955s" podCreationTimestamp="2025-08-13 00:16:19 +0000 UTC" firstStartedPulling="2025-08-13 00:16:47.210326178 +0000 UTC m=+47.684839651" lastFinishedPulling="2025-08-13 00:16:52.526429306 +0000 UTC m=+53.000942819" observedRunningTime="2025-08-13 00:16:53.001801542 +0000 UTC m=+53.476315055" watchObservedRunningTime="2025-08-13 00:16:54.01754955 +0000 UTC m=+54.492063063" Aug 13 00:16:54.018253 kubelet[2615]: I0813 00:16:54.017804 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-765654ff88-hg6lz" podStartSLOduration=30.23201787 podStartE2EDuration="35.017799432s" podCreationTimestamp="2025-08-13 00:16:19 +0000 UTC" firstStartedPulling="2025-08-13 00:16:48.116752757 +0000 UTC 
m=+48.591266310" lastFinishedPulling="2025-08-13 00:16:52.902534399 +0000 UTC m=+53.377047872" observedRunningTime="2025-08-13 00:16:54.016767666 +0000 UTC m=+54.491281139" watchObservedRunningTime="2025-08-13 00:16:54.017799432 +0000 UTC m=+54.492312945" Aug 13 00:16:54.613562 containerd[1487]: time="2025-08-13T00:16:54.613471997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:54.615775 containerd[1487]: time="2025-08-13T00:16:54.615716410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:16:54.625128 containerd[1487]: time="2025-08-13T00:16:54.624912863Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:54.629258 containerd[1487]: time="2025-08-13T00:16:54.629138727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:54.631336 containerd[1487]: time="2025-08-13T00:16:54.630754176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.727135131s" Aug 13 00:16:54.631336 containerd[1487]: time="2025-08-13T00:16:54.630799496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:16:54.635723 containerd[1487]: time="2025-08-13T00:16:54.633939954Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:16:54.637590 containerd[1487]: time="2025-08-13T00:16:54.637425254Z" level=info msg="CreateContainer within sandbox \"ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:16:54.733603 containerd[1487]: time="2025-08-13T00:16:54.733375003Z" level=info msg="CreateContainer within sandbox \"ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"938307862d486c48fc44ad1f9b1205401d50c575fddf33c9a8a1910e6c60f0e3\"" Aug 13 00:16:54.734476 containerd[1487]: time="2025-08-13T00:16:54.734428689Z" level=info msg="StartContainer for \"938307862d486c48fc44ad1f9b1205401d50c575fddf33c9a8a1910e6c60f0e3\"" Aug 13 00:16:54.795506 systemd[1]: Started cri-containerd-938307862d486c48fc44ad1f9b1205401d50c575fddf33c9a8a1910e6c60f0e3.scope - libcontainer container 938307862d486c48fc44ad1f9b1205401d50c575fddf33c9a8a1910e6c60f0e3. Aug 13 00:16:54.867716 containerd[1487]: time="2025-08-13T00:16:54.867569930Z" level=info msg="StartContainer for \"938307862d486c48fc44ad1f9b1205401d50c575fddf33c9a8a1910e6c60f0e3\" returns successfully" Aug 13 00:16:55.007377 kubelet[2615]: I0813 00:16:55.007142 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:16:57.469342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666244325.mount: Deactivated successfully. 
Aug 13 00:16:58.145073 containerd[1487]: time="2025-08-13T00:16:58.144088635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:58.146656 containerd[1487]: time="2025-08-13T00:16:58.146621529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:16:58.148402 containerd[1487]: time="2025-08-13T00:16:58.148350138Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:58.153866 containerd[1487]: time="2025-08-13T00:16:58.153819408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:58.154634 containerd[1487]: time="2025-08-13T00:16:58.154580812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.520596458s" Aug 13 00:16:58.154634 containerd[1487]: time="2025-08-13T00:16:58.154624652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:16:58.159231 containerd[1487]: time="2025-08-13T00:16:58.159143637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:16:58.161121 containerd[1487]: time="2025-08-13T00:16:58.161083647Z" level=info msg="CreateContainer within sandbox 
\"fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:16:58.182444 containerd[1487]: time="2025-08-13T00:16:58.181778719Z" level=info msg="CreateContainer within sandbox \"fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"37779cf891a9330303b0dbfade92b8f8265bfb8f31198d033c64bb097d113db4\"" Aug 13 00:16:58.184332 containerd[1487]: time="2025-08-13T00:16:58.183391408Z" level=info msg="StartContainer for \"37779cf891a9330303b0dbfade92b8f8265bfb8f31198d033c64bb097d113db4\"" Aug 13 00:16:58.278673 systemd[1]: Started cri-containerd-37779cf891a9330303b0dbfade92b8f8265bfb8f31198d033c64bb097d113db4.scope - libcontainer container 37779cf891a9330303b0dbfade92b8f8265bfb8f31198d033c64bb097d113db4. Aug 13 00:16:58.339626 containerd[1487]: time="2025-08-13T00:16:58.339404373Z" level=info msg="StartContainer for \"37779cf891a9330303b0dbfade92b8f8265bfb8f31198d033c64bb097d113db4\" returns successfully" Aug 13 00:16:59.050334 kubelet[2615]: I0813 00:16:59.047837 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-nlr8k" podStartSLOduration=26.237200771 podStartE2EDuration="33.047812087s" podCreationTimestamp="2025-08-13 00:16:26 +0000 UTC" firstStartedPulling="2025-08-13 00:16:51.347213874 +0000 UTC m=+51.821727387" lastFinishedPulling="2025-08-13 00:16:58.15782511 +0000 UTC m=+58.632338703" observedRunningTime="2025-08-13 00:16:59.047312804 +0000 UTC m=+59.521826397" watchObservedRunningTime="2025-08-13 00:16:59.047812087 +0000 UTC m=+59.522325640" Aug 13 00:16:59.067746 systemd[1]: run-containerd-runc-k8s.io-37779cf891a9330303b0dbfade92b8f8265bfb8f31198d033c64bb097d113db4-runc.lQSlXT.mount: Deactivated successfully. 
Aug 13 00:16:59.649295 containerd[1487]: time="2025-08-13T00:16:59.649240102Z" level=info msg="StopPodSandbox for \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\"" Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.771 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2776ab78-3da7-4b02-86ed-3ed0ba1a8679", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f", Pod:"coredns-668d6bf9bc-hnn7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali767759facbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.771 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.771 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" iface="eth0" netns="" Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.771 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.771 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.815 [INFO][5198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.816 [INFO][5198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.816 [INFO][5198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.829 [WARNING][5198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.829 [INFO][5198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.831 [INFO][5198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:16:59.837690 containerd[1487]: 2025-08-13 00:16:59.834 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:16:59.837690 containerd[1487]: time="2025-08-13T00:16:59.837545509Z" level=info msg="TearDown network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\" successfully" Aug 13 00:16:59.837690 containerd[1487]: time="2025-08-13T00:16:59.837571949Z" level=info msg="StopPodSandbox for \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\" returns successfully" Aug 13 00:16:59.838612 containerd[1487]: time="2025-08-13T00:16:59.838466394Z" level=info msg="RemovePodSandbox for \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\"" Aug 13 00:16:59.838612 containerd[1487]: time="2025-08-13T00:16:59.838507034Z" level=info msg="Forcibly stopping sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\"" Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.925 [WARNING][5212] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2776ab78-3da7-4b02-86ed-3ed0ba1a8679", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"1b19d30f69cdfc0a2b1c5c8ced7022e797af6d552d399dfb99365acdaa39d18f", Pod:"coredns-668d6bf9bc-hnn7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali767759facbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 
00:16:59.926 [INFO][5212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.926 [INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" iface="eth0" netns="" Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.926 [INFO][5212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.926 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.981 [INFO][5219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.982 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.982 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.996 [WARNING][5219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:16:59.996 [INFO][5219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" HandleID="k8s-pod-network.3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--hnn7j-eth0" Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:17:00.000 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.007332 containerd[1487]: 2025-08-13 00:17:00.003 [INFO][5212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0" Aug 13 00:17:00.007332 containerd[1487]: time="2025-08-13T00:17:00.006356612Z" level=info msg="TearDown network for sandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\" successfully" Aug 13 00:17:00.012616 containerd[1487]: time="2025-08-13T00:17:00.011976441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:17:00.012616 containerd[1487]: time="2025-08-13T00:17:00.012157962Z" level=info msg="RemovePodSandbox \"3d57bb024102e7ea0b07080efe2b7dc9b796d88b44b4ae63c089837653fcb8d0\" returns successfully" Aug 13 00:17:00.013197 containerd[1487]: time="2025-08-13T00:17:00.013157448Z" level=info msg="StopPodSandbox for \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\"" Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.098 [WARNING][5237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0", GenerateName:"calico-apiserver-765654ff88-", Namespace:"calico-apiserver", SelfLink:"", UID:"27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765654ff88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980", Pod:"calico-apiserver-765654ff88-hg6lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieffb97a9ea9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.098 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.099 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" iface="eth0" netns="" Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.099 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.099 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.127 [INFO][5264] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.128 [INFO][5264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.128 [INFO][5264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.146 [WARNING][5264] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.146 [INFO][5264] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.150 [INFO][5264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.155376 containerd[1487]: 2025-08-13 00:17:00.153 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:17:00.155376 containerd[1487]: time="2025-08-13T00:17:00.155355478Z" level=info msg="TearDown network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\" successfully" Aug 13 00:17:00.155376 containerd[1487]: time="2025-08-13T00:17:00.155382758Z" level=info msg="StopPodSandbox for \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\" returns successfully" Aug 13 00:17:00.157290 containerd[1487]: time="2025-08-13T00:17:00.156927047Z" level=info msg="RemovePodSandbox for \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\"" Aug 13 00:17:00.157290 containerd[1487]: time="2025-08-13T00:17:00.156985887Z" level=info msg="Forcibly stopping sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\"" Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.213 [WARNING][5279] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0", GenerateName:"calico-apiserver-765654ff88-", Namespace:"calico-apiserver", SelfLink:"", UID:"27660f5a-c4be-4a6f-bd2d-93f2bdfb3e66", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765654ff88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"476fc8cdacef13f175c9f6c921486c65e283ccaad2e041f1124e9677fa24a980", Pod:"calico-apiserver-765654ff88-hg6lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieffb97a9ea9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.214 [INFO][5279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.214 [INFO][5279] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" iface="eth0" netns="" Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.215 [INFO][5279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.215 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.246 [INFO][5287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.246 [INFO][5287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.246 [INFO][5287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.257 [WARNING][5287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.257 [INFO][5287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" HandleID="k8s-pod-network.59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--hg6lz-eth0" Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.260 [INFO][5287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.265380 containerd[1487]: 2025-08-13 00:17:00.262 [INFO][5279] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254" Aug 13 00:17:00.265380 containerd[1487]: time="2025-08-13T00:17:00.264490894Z" level=info msg="TearDown network for sandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\" successfully" Aug 13 00:17:00.269244 containerd[1487]: time="2025-08-13T00:17:00.269152319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:17:00.269375 containerd[1487]: time="2025-08-13T00:17:00.269307280Z" level=info msg="RemovePodSandbox \"59d1853111440219c386c83786fa171a27b9b9918d901451d71d8f9e07bc1254\" returns successfully" Aug 13 00:17:00.271043 containerd[1487]: time="2025-08-13T00:17:00.270696367Z" level=info msg="StopPodSandbox for \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\"" Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.323 [WARNING][5301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0", GenerateName:"calico-kube-controllers-b89966564-", Namespace:"calico-system", SelfLink:"", UID:"f9823515-dfc4-4fbb-a853-8b1f7dc08f4c", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b89966564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef", Pod:"calico-kube-controllers-b89966564-sv4v2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.200/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0d2a4850113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.323 [INFO][5301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.323 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" iface="eth0" netns="" Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.323 [INFO][5301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.323 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.354 [INFO][5308] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.354 [INFO][5308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.354 [INFO][5308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.370 [WARNING][5308] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.370 [INFO][5308] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.373 [INFO][5308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.376620 containerd[1487]: 2025-08-13 00:17:00.374 [INFO][5301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:17:00.377561 containerd[1487]: time="2025-08-13T00:17:00.377133169Z" level=info msg="TearDown network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\" successfully" Aug 13 00:17:00.377561 containerd[1487]: time="2025-08-13T00:17:00.377181529Z" level=info msg="StopPodSandbox for \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\" returns successfully" Aug 13 00:17:00.377561 containerd[1487]: time="2025-08-13T00:17:00.377723412Z" level=info msg="RemovePodSandbox for \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\"" Aug 13 00:17:00.377561 containerd[1487]: time="2025-08-13T00:17:00.377756372Z" level=info msg="Forcibly stopping sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\"" Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.430 [WARNING][5322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0", GenerateName:"calico-kube-controllers-b89966564-", Namespace:"calico-system", SelfLink:"", UID:"f9823515-dfc4-4fbb-a853-8b1f7dc08f4c", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b89966564", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef", Pod:"calico-kube-controllers-b89966564-sv4v2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0d2a4850113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.431 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.431 [INFO][5322] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" iface="eth0" netns="" Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.431 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.431 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.476 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.478 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.478 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.489 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.489 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" HandleID="k8s-pod-network.436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--kube--controllers--b89966564--sv4v2-eth0" Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.492 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.497529 containerd[1487]: 2025-08-13 00:17:00.494 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40" Aug 13 00:17:00.498034 containerd[1487]: time="2025-08-13T00:17:00.497549205Z" level=info msg="TearDown network for sandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\" successfully" Aug 13 00:17:00.502167 containerd[1487]: time="2025-08-13T00:17:00.502081109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:17:00.503927 containerd[1487]: time="2025-08-13T00:17:00.502176869Z" level=info msg="RemovePodSandbox \"436ce7aee3cc20efd8bf2c1f401c393719939d699cc8d5ab5690b2a31bf42d40\" returns successfully" Aug 13 00:17:00.503927 containerd[1487]: time="2025-08-13T00:17:00.503418356Z" level=info msg="StopPodSandbox for \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\"" Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.562 [WARNING][5343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15e02000-307e-46ce-8cf9-fed7fd6d6dd8", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba", Pod:"csi-node-driver-qctbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid31c84a3ae0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.563 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.563 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" iface="eth0" netns="" Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.564 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.564 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.598 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.599 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.599 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.612 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.612 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.614 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.619513 containerd[1487]: 2025-08-13 00:17:00.616 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:17:00.622493 containerd[1487]: time="2025-08-13T00:17:00.619630490Z" level=info msg="TearDown network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\" successfully" Aug 13 00:17:00.622493 containerd[1487]: time="2025-08-13T00:17:00.619657730Z" level=info msg="StopPodSandbox for \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\" returns successfully" Aug 13 00:17:00.622493 containerd[1487]: time="2025-08-13T00:17:00.621807861Z" level=info msg="RemovePodSandbox for \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\"" Aug 13 00:17:00.622493 containerd[1487]: time="2025-08-13T00:17:00.621850021Z" level=info msg="Forcibly stopping sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\"" Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.697 [WARNING][5364] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15e02000-307e-46ce-8cf9-fed7fd6d6dd8", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba", Pod:"csi-node-driver-qctbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid31c84a3ae0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.697 [INFO][5364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.697 [INFO][5364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" iface="eth0" netns="" Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.697 [INFO][5364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.697 [INFO][5364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.744 [INFO][5371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.746 [INFO][5371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.746 [INFO][5371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.758 [WARNING][5371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.758 [INFO][5371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" HandleID="k8s-pod-network.723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Workload="ci--4081--3--5--3--d55e308663-k8s-csi--node--driver--qctbn-eth0" Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.760 [INFO][5371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.766233 containerd[1487]: 2025-08-13 00:17:00.762 [INFO][5364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa" Aug 13 00:17:00.766233 containerd[1487]: time="2025-08-13T00:17:00.765453619Z" level=info msg="TearDown network for sandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\" successfully" Aug 13 00:17:00.772468 containerd[1487]: time="2025-08-13T00:17:00.772410496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:17:00.772971 containerd[1487]: time="2025-08-13T00:17:00.772490897Z" level=info msg="RemovePodSandbox \"723cfb97cf821550572dbfcc2de9f9ec348e0959abeda1144eeea8b5d8cfdffa\" returns successfully" Aug 13 00:17:00.773717 containerd[1487]: time="2025-08-13T00:17:00.773634423Z" level=info msg="StopPodSandbox for \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\"" Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.824 [WARNING][5385] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fad88c9c-63aa-46b4-a961-f6ee4056948d", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4", Pod:"coredns-668d6bf9bc-xvrh7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b98450fc81", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.824 [INFO][5385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.824 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" iface="eth0" netns="" Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.824 [INFO][5385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.824 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.857 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.858 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.858 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.874 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.874 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.877 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.884175 containerd[1487]: 2025-08-13 00:17:00.881 [INFO][5385] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:17:00.885120 containerd[1487]: time="2025-08-13T00:17:00.884160366Z" level=info msg="TearDown network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\" successfully" Aug 13 00:17:00.885120 containerd[1487]: time="2025-08-13T00:17:00.884196446Z" level=info msg="StopPodSandbox for \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\" returns successfully" Aug 13 00:17:00.886575 containerd[1487]: time="2025-08-13T00:17:00.886522899Z" level=info msg="RemovePodSandbox for \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\"" Aug 13 00:17:00.886575 containerd[1487]: time="2025-08-13T00:17:00.886574459Z" level=info msg="Forcibly stopping sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\"" Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.939 [WARNING][5406] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fad88c9c-63aa-46b4-a961-f6ee4056948d", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"9d181fe0ea015f56a07be96685c9000e8548a5f8c56ff015e6ce34eeb198fdb4", Pod:"coredns-668d6bf9bc-xvrh7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b98450fc81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:00.995887 containerd[1487]: 
2025-08-13 00:17:00.939 [INFO][5406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.940 [INFO][5406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" iface="eth0" netns="" Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.940 [INFO][5406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.940 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.974 [INFO][5413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.974 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.974 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.987 [WARNING][5413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.987 [INFO][5413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" HandleID="k8s-pod-network.d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Workload="ci--4081--3--5--3--d55e308663-k8s-coredns--668d6bf9bc--xvrh7-eth0" Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.989 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:00.995887 containerd[1487]: 2025-08-13 00:17:00.993 [INFO][5406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61" Aug 13 00:17:00.996501 containerd[1487]: time="2025-08-13T00:17:00.995966516Z" level=info msg="TearDown network for sandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\" successfully" Aug 13 00:17:01.000879 containerd[1487]: time="2025-08-13T00:17:01.000764502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:17:01.001019 containerd[1487]: time="2025-08-13T00:17:01.000910783Z" level=info msg="RemovePodSandbox \"d39f1233e5d1fec42e5249ebbc08ca6ee08cd40ff689c32e159a4fa4e1f28f61\" returns successfully" Aug 13 00:17:01.002097 containerd[1487]: time="2025-08-13T00:17:01.001750387Z" level=info msg="StopPodSandbox for \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\"" Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.059 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"00aebb23-204a-4ca9-bec2-40b1eea2bd4a", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1", Pod:"goldmane-768f4c5c69-nlr8k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calid8706d58301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.060 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.060 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" iface="eth0" netns="" Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.060 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.060 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.093 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.093 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.093 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.111 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.111 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.119 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:01.129247 containerd[1487]: 2025-08-13 00:17:01.125 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:17:01.131016 containerd[1487]: time="2025-08-13T00:17:01.130975341Z" level=info msg="TearDown network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\" successfully" Aug 13 00:17:01.131016 containerd[1487]: time="2025-08-13T00:17:01.131013221Z" level=info msg="StopPodSandbox for \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\" returns successfully" Aug 13 00:17:01.132854 containerd[1487]: time="2025-08-13T00:17:01.132812990Z" level=info msg="RemovePodSandbox for \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\"" Aug 13 00:17:01.132854 containerd[1487]: time="2025-08-13T00:17:01.132855431Z" level=info msg="Forcibly stopping sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\"" Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.220 [WARNING][5448] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"00aebb23-204a-4ca9-bec2-40b1eea2bd4a", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"fbc28fd8e5dc4f43065701903c832505cdfbdd4ef392f51bfccd187d91dbeec1", Pod:"goldmane-768f4c5c69-nlr8k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid8706d58301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.221 [INFO][5448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.221 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" iface="eth0" netns="" Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.221 [INFO][5448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.221 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.293 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.293 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.293 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.307 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.308 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" HandleID="k8s-pod-network.e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Workload="ci--4081--3--5--3--d55e308663-k8s-goldmane--768f4c5c69--nlr8k-eth0" Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.311 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:01.320576 containerd[1487]: 2025-08-13 00:17:01.315 [INFO][5448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719" Aug 13 00:17:01.320576 containerd[1487]: time="2025-08-13T00:17:01.320482889Z" level=info msg="TearDown network for sandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\" successfully" Aug 13 00:17:01.398099 containerd[1487]: time="2025-08-13T00:17:01.398053254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:17:01.400286 containerd[1487]: time="2025-08-13T00:17:01.398439656Z" level=info msg="RemovePodSandbox \"e0cdc0198487554e49bd19757d13f2026f25ab840bc62e6bcabe80f271c4c719\" returns successfully" Aug 13 00:17:01.401053 containerd[1487]: time="2025-08-13T00:17:01.400725067Z" level=info msg="StopPodSandbox for \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\"" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.519 [WARNING][5473] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.519 [INFO][5473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.519 [INFO][5473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" iface="eth0" netns="" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.519 [INFO][5473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.519 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.587 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.588 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.588 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.619 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.619 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0" Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.622 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:01.629491 containerd[1487]: 2025-08-13 00:17:01.625 [INFO][5473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Aug 13 00:17:01.629491 containerd[1487]: time="2025-08-13T00:17:01.629029058Z" level=info msg="TearDown network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\" successfully" Aug 13 00:17:01.629491 containerd[1487]: time="2025-08-13T00:17:01.629056818Z" level=info msg="StopPodSandbox for \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\" returns successfully" Aug 13 00:17:01.632889 containerd[1487]: time="2025-08-13T00:17:01.632375955Z" level=info msg="RemovePodSandbox for \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\"" Aug 13 00:17:01.632889 containerd[1487]: time="2025-08-13T00:17:01.632640917Z" level=info msg="Forcibly stopping sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\"" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.719 [WARNING][5494] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" WorkloadEndpoint="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.720 [INFO][5494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.720 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" iface="eth0" netns="" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.720 [INFO][5494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.720 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.782 [INFO][5501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.784 [INFO][5501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.784 [INFO][5501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.799 [WARNING][5501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.799 [INFO][5501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" HandleID="k8s-pod-network.3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Workload="ci--4081--3--5--3--d55e308663-k8s-whisker--5fb6587569--g54r9-eth0" Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.803 [INFO][5501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:01.813303 containerd[1487]: 2025-08-13 00:17:01.808 [INFO][5494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2" Aug 13 00:17:01.813303 containerd[1487]: time="2025-08-13T00:17:01.812367654Z" level=info msg="TearDown network for sandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\" successfully" Aug 13 00:17:01.821017 containerd[1487]: time="2025-08-13T00:17:01.820975979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:17:01.821569 containerd[1487]: time="2025-08-13T00:17:01.821344901Z" level=info msg="RemovePodSandbox \"3f86c6587922f68f1c54663844230c619cca8870150a5e888f6f452008abfeb2\" returns successfully" Aug 13 00:17:01.823162 containerd[1487]: time="2025-08-13T00:17:01.823133430Z" level=info msg="StopPodSandbox for \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\"" Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:01.938 [WARNING][5515] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0", GenerateName:"calico-apiserver-765654ff88-", Namespace:"calico-apiserver", SelfLink:"", UID:"58f3844d-62f0-42cd-b23f-5ade8a1b059c", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765654ff88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9", Pod:"calico-apiserver-765654ff88-wtksk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b418482583", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:01.940 [INFO][5515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:01.940 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" iface="eth0" netns="" Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:01.940 [INFO][5515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:01.940 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:02.017 [INFO][5522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:02.017 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:02.017 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:02.033 [WARNING][5522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:02.033 [INFO][5522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:02.038 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:02.045921 containerd[1487]: 2025-08-13 00:17:02.042 [INFO][5515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:17:02.045921 containerd[1487]: time="2025-08-13T00:17:02.045580627Z" level=info msg="TearDown network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\" successfully" Aug 13 00:17:02.045921 containerd[1487]: time="2025-08-13T00:17:02.045604987Z" level=info msg="StopPodSandbox for \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\" returns successfully" Aug 13 00:17:02.050419 containerd[1487]: time="2025-08-13T00:17:02.048297401Z" level=info msg="RemovePodSandbox for \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\"" Aug 13 00:17:02.050419 containerd[1487]: time="2025-08-13T00:17:02.048345282Z" level=info msg="Forcibly stopping sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\"" Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.193 [WARNING][5536] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0", GenerateName:"calico-apiserver-765654ff88-", Namespace:"calico-apiserver", SelfLink:"", UID:"58f3844d-62f0-42cd-b23f-5ade8a1b059c", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765654ff88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-3-d55e308663", ContainerID:"4a7b1f63f04ba92954bca05be2c1619684920f886e5c1dc679c851ecae11cbf9", Pod:"calico-apiserver-765654ff88-wtksk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b418482583", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.193 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.193 [INFO][5536] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" iface="eth0" netns="" Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.193 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.193 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.256 [INFO][5543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.257 [INFO][5543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.257 [INFO][5543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.277 [WARNING][5543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.277 [INFO][5543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" HandleID="k8s-pod-network.84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Workload="ci--4081--3--5--3--d55e308663-k8s-calico--apiserver--765654ff88--wtksk-eth0" Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.283 [INFO][5543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:02.290641 containerd[1487]: 2025-08-13 00:17:02.286 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28" Aug 13 00:17:02.291486 containerd[1487]: time="2025-08-13T00:17:02.291450014Z" level=info msg="TearDown network for sandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\" successfully" Aug 13 00:17:02.305974 containerd[1487]: time="2025-08-13T00:17:02.305810048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:17:02.306521 containerd[1487]: time="2025-08-13T00:17:02.306219250Z" level=info msg="RemovePodSandbox \"84786ed3b70e4973214dcce40b9864db2b66c65e0740185d43ccb48d12b49d28\" returns successfully" Aug 13 00:17:02.649034 containerd[1487]: time="2025-08-13T00:17:02.648961256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:02.652743 containerd[1487]: time="2025-08-13T00:17:02.651864671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:17:02.657010 containerd[1487]: time="2025-08-13T00:17:02.655768651Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:02.665673 containerd[1487]: time="2025-08-13T00:17:02.665628062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:02.666147 containerd[1487]: time="2025-08-13T00:17:02.666105144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.506728546s" Aug 13 00:17:02.666147 containerd[1487]: time="2025-08-13T00:17:02.666145544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:17:02.667881 containerd[1487]: time="2025-08-13T00:17:02.667640672Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:17:02.689970 containerd[1487]: time="2025-08-13T00:17:02.688971982Z" level=info msg="CreateContainer within sandbox \"cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:17:02.709010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2137711095.mount: Deactivated successfully. Aug 13 00:17:02.712541 containerd[1487]: time="2025-08-13T00:17:02.712114581Z" level=info msg="CreateContainer within sandbox \"cb0f102fb2d2a8ac9816da76d5261c2af1f83cdc27f97039751162201a5dfdef\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e23b972514357b814c5b07c08d8314d0c2d3f55ae15dab0e48347283c7a67c38\"" Aug 13 00:17:02.714465 containerd[1487]: time="2025-08-13T00:17:02.714423953Z" level=info msg="StartContainer for \"e23b972514357b814c5b07c08d8314d0c2d3f55ae15dab0e48347283c7a67c38\"" Aug 13 00:17:02.766626 systemd[1]: Started cri-containerd-e23b972514357b814c5b07c08d8314d0c2d3f55ae15dab0e48347283c7a67c38.scope - libcontainer container e23b972514357b814c5b07c08d8314d0c2d3f55ae15dab0e48347283c7a67c38. 
Aug 13 00:17:02.839877 containerd[1487]: time="2025-08-13T00:17:02.839790439Z" level=info msg="StartContainer for \"e23b972514357b814c5b07c08d8314d0c2d3f55ae15dab0e48347283c7a67c38\" returns successfully" Aug 13 00:17:03.159223 kubelet[2615]: I0813 00:17:03.159157 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b89966564-sv4v2" podStartSLOduration=26.884860405 podStartE2EDuration="37.159137594s" podCreationTimestamp="2025-08-13 00:16:26 +0000 UTC" firstStartedPulling="2025-08-13 00:16:52.393146002 +0000 UTC m=+52.867659515" lastFinishedPulling="2025-08-13 00:17:02.667422951 +0000 UTC m=+63.141936704" observedRunningTime="2025-08-13 00:17:03.105101239 +0000 UTC m=+63.579614792" watchObservedRunningTime="2025-08-13 00:17:03.159137594 +0000 UTC m=+63.633651107" Aug 13 00:17:04.461353 containerd[1487]: time="2025-08-13T00:17:04.461294036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:04.463894 containerd[1487]: time="2025-08-13T00:17:04.463831529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 13 00:17:04.466049 containerd[1487]: time="2025-08-13T00:17:04.465657698Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:04.469227 containerd[1487]: time="2025-08-13T00:17:04.469171595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:04.471546 containerd[1487]: time="2025-08-13T00:17:04.470388362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" 
with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.802707089s" Aug 13 00:17:04.471546 containerd[1487]: time="2025-08-13T00:17:04.470431722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 13 00:17:04.476975 containerd[1487]: time="2025-08-13T00:17:04.476692913Z" level=info msg="CreateContainer within sandbox \"ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:17:04.504149 containerd[1487]: time="2025-08-13T00:17:04.504085971Z" level=info msg="CreateContainer within sandbox \"ab2cb5ca9f50a9e056ca54fa9e7a9df12f95ce00edd4e276ff3a01d423f1adba\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ab75eca7cf4477c45a48b512290b18af698118fed1bb17473927b868c9036bbd\"" Aug 13 00:17:04.508684 containerd[1487]: time="2025-08-13T00:17:04.508630234Z" level=info msg="StartContainer for \"ab75eca7cf4477c45a48b512290b18af698118fed1bb17473927b868c9036bbd\"" Aug 13 00:17:04.573491 systemd[1]: Started cri-containerd-ab75eca7cf4477c45a48b512290b18af698118fed1bb17473927b868c9036bbd.scope - libcontainer container ab75eca7cf4477c45a48b512290b18af698118fed1bb17473927b868c9036bbd. 
Aug 13 00:17:04.642228 containerd[1487]: time="2025-08-13T00:17:04.641383782Z" level=info msg="StartContainer for \"ab75eca7cf4477c45a48b512290b18af698118fed1bb17473927b868c9036bbd\" returns successfully" Aug 13 00:17:04.783450 kubelet[2615]: I0813 00:17:04.783176 2615 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:17:04.790933 kubelet[2615]: I0813 00:17:04.790902 2615 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:17:05.020381 kubelet[2615]: I0813 00:17:05.020331 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:17:05.026472 kubelet[2615]: I0813 00:17:05.025820 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:17:21.400147 systemd[1]: run-containerd-runc-k8s.io-bff2584ab78b29ac2e66fe7960591e6c5594f9857694e7c78708c6243af67b00-runc.aFY7RI.mount: Deactivated successfully. Aug 13 00:17:30.171514 kubelet[2615]: I0813 00:17:30.171217 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qctbn" podStartSLOduration=49.889834867 podStartE2EDuration="1m5.171193392s" podCreationTimestamp="2025-08-13 00:16:25 +0000 UTC" firstStartedPulling="2025-08-13 00:16:49.189831641 +0000 UTC m=+49.664345154" lastFinishedPulling="2025-08-13 00:17:04.471190206 +0000 UTC m=+64.945703679" observedRunningTime="2025-08-13 00:17:05.157789571 +0000 UTC m=+65.632303084" watchObservedRunningTime="2025-08-13 00:17:30.171193392 +0000 UTC m=+90.645706945" Aug 13 00:17:46.090446 systemd[1]: run-containerd-runc-k8s.io-37779cf891a9330303b0dbfade92b8f8265bfb8f31198d033c64bb097d113db4-runc.zAGfIF.mount: Deactivated successfully. 
Aug 13 00:17:58.566995 systemd[1]: Started sshd@7-159.69.112.232:22-139.178.89.65:54816.service - OpenSSH per-connection server daemon (139.178.89.65:54816). Aug 13 00:17:59.570702 sshd[5799]: Accepted publickey for core from 139.178.89.65 port 54816 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:17:59.573558 sshd[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:17:59.582108 systemd-logind[1467]: New session 8 of user core. Aug 13 00:17:59.584535 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:18:00.061579 systemd[1]: run-containerd-runc-k8s.io-37779cf891a9330303b0dbfade92b8f8265bfb8f31198d033c64bb097d113db4-runc.DYy24s.mount: Deactivated successfully. Aug 13 00:18:00.434884 sshd[5799]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:00.441338 systemd[1]: sshd@7-159.69.112.232:22-139.178.89.65:54816.service: Deactivated successfully. Aug 13 00:18:00.444406 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:18:00.446420 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:18:00.449439 systemd-logind[1467]: Removed session 8. Aug 13 00:18:03.113759 systemd[1]: run-containerd-runc-k8s.io-e23b972514357b814c5b07c08d8314d0c2d3f55ae15dab0e48347283c7a67c38-runc.2Oiqt2.mount: Deactivated successfully. Aug 13 00:18:05.615846 systemd[1]: Started sshd@8-159.69.112.232:22-139.178.89.65:47292.service - OpenSSH per-connection server daemon (139.178.89.65:47292). Aug 13 00:18:06.618439 sshd[5863]: Accepted publickey for core from 139.178.89.65 port 47292 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:18:06.622127 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:06.629048 systemd-logind[1467]: New session 9 of user core. Aug 13 00:18:06.635506 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 13 00:18:07.390694 sshd[5863]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:07.396943 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:18:07.397436 systemd[1]: sshd@8-159.69.112.232:22-139.178.89.65:47292.service: Deactivated successfully. Aug 13 00:18:07.401861 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:18:07.405753 systemd-logind[1467]: Removed session 9. Aug 13 00:18:12.572807 systemd[1]: Started sshd@9-159.69.112.232:22-139.178.89.65:36872.service - OpenSSH per-connection server daemon (139.178.89.65:36872). Aug 13 00:18:13.571062 sshd[5905]: Accepted publickey for core from 139.178.89.65 port 36872 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:18:13.575354 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:13.590358 systemd-logind[1467]: New session 10 of user core. Aug 13 00:18:13.595680 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:18:14.339716 sshd[5905]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:14.344967 systemd[1]: sshd@9-159.69.112.232:22-139.178.89.65:36872.service: Deactivated successfully. Aug 13 00:18:14.349321 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:18:14.350553 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:18:14.351920 systemd-logind[1467]: Removed session 10. Aug 13 00:18:14.516790 systemd[1]: Started sshd@10-159.69.112.232:22-139.178.89.65:36874.service - OpenSSH per-connection server daemon (139.178.89.65:36874). Aug 13 00:18:15.509832 sshd[5919]: Accepted publickey for core from 139.178.89.65 port 36874 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:18:15.511988 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:15.518149 systemd-logind[1467]: New session 11 of user core. 
Aug 13 00:18:15.523700 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:18:16.338802 sshd[5919]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:16.344196 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:18:16.345599 systemd[1]: sshd@10-159.69.112.232:22-139.178.89.65:36874.service: Deactivated successfully.
Aug 13 00:18:16.349372 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:18:16.353824 systemd-logind[1467]: Removed session 11.
Aug 13 00:18:16.517976 systemd[1]: Started sshd@11-159.69.112.232:22-139.178.89.65:36890.service - OpenSSH per-connection server daemon (139.178.89.65:36890).
Aug 13 00:18:17.511842 sshd[5930]: Accepted publickey for core from 139.178.89.65 port 36890 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:18:17.514316 sshd[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:17.521923 systemd-logind[1467]: New session 12 of user core.
Aug 13 00:18:17.526541 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:18:18.285004 sshd[5930]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:18.290102 systemd[1]: sshd@11-159.69.112.232:22-139.178.89.65:36890.service: Deactivated successfully.
Aug 13 00:18:18.293782 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:18:18.294775 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:18:18.296134 systemd-logind[1467]: Removed session 12.
Aug 13 00:18:23.473603 systemd[1]: Started sshd@12-159.69.112.232:22-139.178.89.65:39590.service - OpenSSH per-connection server daemon (139.178.89.65:39590).
Aug 13 00:18:24.470896 sshd[5985]: Accepted publickey for core from 139.178.89.65 port 39590 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:18:24.473246 sshd[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:24.482329 systemd-logind[1467]: New session 13 of user core.
Aug 13 00:18:24.485767 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:18:25.258894 sshd[5985]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:25.265587 systemd[1]: sshd@12-159.69.112.232:22-139.178.89.65:39590.service: Deactivated successfully.
Aug 13 00:18:25.269746 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:18:25.272773 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:18:25.274312 systemd-logind[1467]: Removed session 13.
Aug 13 00:18:25.437742 systemd[1]: Started sshd@13-159.69.112.232:22-139.178.89.65:39606.service - OpenSSH per-connection server daemon (139.178.89.65:39606).
Aug 13 00:18:26.429718 sshd[5998]: Accepted publickey for core from 139.178.89.65 port 39606 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:18:26.433215 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:26.442307 systemd-logind[1467]: New session 14 of user core.
Aug 13 00:18:26.449358 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:18:27.411698 sshd[5998]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:27.424995 systemd[1]: sshd@13-159.69.112.232:22-139.178.89.65:39606.service: Deactivated successfully.
Aug 13 00:18:27.430187 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:18:27.434620 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:18:27.437044 systemd-logind[1467]: Removed session 14.
Aug 13 00:18:27.591396 systemd[1]: Started sshd@14-159.69.112.232:22-139.178.89.65:39622.service - OpenSSH per-connection server daemon (139.178.89.65:39622).
Aug 13 00:18:28.592121 sshd[6010]: Accepted publickey for core from 139.178.89.65 port 39622 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:18:28.595009 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:28.600495 systemd-logind[1467]: New session 15 of user core.
Aug 13 00:18:28.605510 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:18:30.078718 sshd[6010]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:30.084862 systemd[1]: sshd@14-159.69.112.232:22-139.178.89.65:39622.service: Deactivated successfully.
Aug 13 00:18:30.089015 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:18:30.093062 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:18:30.098203 systemd-logind[1467]: Removed session 15.
Aug 13 00:18:30.254846 systemd[1]: Started sshd@15-159.69.112.232:22-139.178.89.65:53460.service - OpenSSH per-connection server daemon (139.178.89.65:53460).
Aug 13 00:18:31.249988 sshd[6049]: Accepted publickey for core from 139.178.89.65 port 53460 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:18:31.252880 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:31.261207 systemd-logind[1467]: New session 16 of user core.
Aug 13 00:18:31.267073 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:18:32.228534 sshd[6049]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:32.233955 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:18:32.236163 systemd[1]: sshd@15-159.69.112.232:22-139.178.89.65:53460.service: Deactivated successfully.
Aug 13 00:18:32.238764 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:18:32.242736 systemd-logind[1467]: Removed session 16.
Aug 13 00:18:32.402688 systemd[1]: Started sshd@16-159.69.112.232:22-139.178.89.65:53472.service - OpenSSH per-connection server daemon (139.178.89.65:53472).
Aug 13 00:18:33.393661 sshd[6060]: Accepted publickey for core from 139.178.89.65 port 53472 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:18:33.396363 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:33.402433 systemd-logind[1467]: New session 17 of user core.
Aug 13 00:18:33.409648 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:18:34.163611 sshd[6060]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:34.169447 systemd[1]: sshd@16-159.69.112.232:22-139.178.89.65:53472.service: Deactivated successfully.
Aug 13 00:18:34.172818 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:18:34.175610 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:18:34.177159 systemd-logind[1467]: Removed session 17.
Aug 13 00:18:39.350811 systemd[1]: Started sshd@17-159.69.112.232:22-139.178.89.65:33740.service - OpenSSH per-connection server daemon (139.178.89.65:33740).
Aug 13 00:18:40.345542 sshd[6097]: Accepted publickey for core from 139.178.89.65 port 33740 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:18:40.348387 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:40.354466 systemd-logind[1467]: New session 18 of user core.
Aug 13 00:18:40.360618 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:18:41.112250 sshd[6097]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:41.118227 systemd[1]: sshd@17-159.69.112.232:22-139.178.89.65:33740.service: Deactivated successfully.
Aug 13 00:18:41.120653 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:18:41.121856 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:18:41.123727 systemd-logind[1467]: Removed session 18.
Aug 13 00:18:46.290970 systemd[1]: Started sshd@18-159.69.112.232:22-139.178.89.65:33742.service - OpenSSH per-connection server daemon (139.178.89.65:33742).
Aug 13 00:18:47.286950 sshd[6128]: Accepted publickey for core from 139.178.89.65 port 33742 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:18:47.288212 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:47.293929 systemd-logind[1467]: New session 19 of user core.
Aug 13 00:18:47.300638 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:18:48.059187 sshd[6128]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:48.063699 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:18:48.064582 systemd[1]: sshd@18-159.69.112.232:22-139.178.89.65:33742.service: Deactivated successfully.
Aug 13 00:18:48.067732 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:18:48.069787 systemd-logind[1467]: Removed session 19.