Sep 12 17:40:44.339597 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:40:44.339618 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:40:44.339626 kernel: KASLR enabled
Sep 12 17:40:44.339632 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 12 17:40:44.339639 kernel: printk: bootconsole [pl11] enabled
Sep 12 17:40:44.339645 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:40:44.339652 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Sep 12 17:40:44.339658 kernel: random: crng init done
Sep 12 17:40:44.339664 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:40:44.339670 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 12 17:40:44.339676 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339682 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339690 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 12 17:40:44.339696 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339703 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339709 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339716 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339723 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339730 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339736 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 12 17:40:44.339743 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:40:44.339749 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 12 17:40:44.339755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 12 17:40:44.339761 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 12 17:40:44.339768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 12 17:40:44.339774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 12 17:40:44.339780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 12 17:40:44.339786 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 12 17:40:44.339794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 12 17:40:44.339801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 12 17:40:44.339807 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 12 17:40:44.339813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 12 17:40:44.339820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 12 17:40:44.339826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 12 17:40:44.339832 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Sep 12 17:40:44.339838 kernel: Zone ranges:
Sep 12 17:40:44.339845 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 12 17:40:44.339851 kernel: DMA32 empty
Sep 12 17:40:44.339857 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 12 17:40:44.339863 kernel: Movable zone start for each node
Sep 12 17:40:44.339873 kernel: Early memory node ranges
Sep 12 17:40:44.339880 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 12 17:40:44.339887 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 12 17:40:44.341054 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 12 17:40:44.341070 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 12 17:40:44.341084 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 12 17:40:44.341091 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 12 17:40:44.341098 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 12 17:40:44.341105 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 12 17:40:44.341112 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 12 17:40:44.341119 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:40:44.341126 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:40:44.341133 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:40:44.341139 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 12 17:40:44.341146 kernel: psci: SMC Calling Convention v1.4
Sep 12 17:40:44.341153 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 12 17:40:44.341159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 12 17:40:44.341168 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:40:44.341183 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:40:44.341193 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 17:40:44.341200 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:40:44.341207 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:40:44.341213 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 17:40:44.341220 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:40:44.341227 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:40:44.341234 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:40:44.341241 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:40:44.341247 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 12 17:40:44.341256 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:40:44.341263 kernel: alternatives: applying boot alternatives
Sep 12 17:40:44.341271 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:40:44.341278 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:40:44.341285 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:40:44.341292 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:40:44.341299 kernel: Fallback order for Node 0: 0
Sep 12 17:40:44.341306 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 12 17:40:44.341312 kernel: Policy zone: Normal
Sep 12 17:40:44.341319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:40:44.341326 kernel: software IO TLB: area num 2.
Sep 12 17:40:44.341334 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Sep 12 17:40:44.341341 kernel: Memory: 3982564K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211596K reserved, 0K cma-reserved)
Sep 12 17:40:44.341348 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:40:44.341355 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:40:44.341362 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:40:44.341369 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:40:44.341376 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:40:44.341383 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:40:44.341390 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:40:44.341397 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:40:44.341403 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:40:44.341412 kernel: GICv3: 960 SPIs implemented
Sep 12 17:40:44.341419 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:40:44.341425 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:40:44.341432 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:40:44.341439 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 12 17:40:44.341446 kernel: ITS: No ITS available, not enabling LPIs
Sep 12 17:40:44.341453 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:40:44.341460 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:40:44.341466 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:40:44.341473 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:40:44.341480 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:40:44.341489 kernel: Console: colour dummy device 80x25
Sep 12 17:40:44.341496 kernel: printk: console [tty1] enabled
Sep 12 17:40:44.341503 kernel: ACPI: Core revision 20230628
Sep 12 17:40:44.341510 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:40:44.341517 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:40:44.341524 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:40:44.341531 kernel: landlock: Up and running.
Sep 12 17:40:44.341538 kernel: SELinux: Initializing.
Sep 12 17:40:44.341545 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:40:44.341552 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:40:44.341561 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:40:44.341568 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:40:44.341575 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 12 17:40:44.341582 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 12 17:40:44.341589 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 12 17:40:44.341596 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:40:44.341603 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:40:44.341617 kernel: Remapping and enabling EFI services.
Sep 12 17:40:44.341625 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:40:44.341632 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:40:44.341639 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 12 17:40:44.341648 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:40:44.341655 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:40:44.341663 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:40:44.341670 kernel: SMP: Total of 2 processors activated.
Sep 12 17:40:44.341677 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:40:44.341686 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 12 17:40:44.341694 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:40:44.341701 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:40:44.341709 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:40:44.341716 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:40:44.341723 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:40:44.341731 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:40:44.341738 kernel: alternatives: applying system-wide alternatives
Sep 12 17:40:44.341745 kernel: devtmpfs: initialized
Sep 12 17:40:44.341754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:40:44.341762 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:40:44.341769 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:40:44.341776 kernel: SMBIOS 3.1.0 present.
Sep 12 17:40:44.341784 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 12 17:40:44.341791 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:40:44.341799 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:40:44.341806 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:40:44.341813 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:40:44.341822 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:40:44.341830 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 12 17:40:44.341837 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:40:44.341844 kernel: cpuidle: using governor menu
Sep 12 17:40:44.341852 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:40:44.341859 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:40:44.341866 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:40:44.341874 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:40:44.341881 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:40:44.341890 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 17:40:44.341916 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:40:44.341924 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:40:44.341931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:40:44.341939 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:40:44.341946 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:40:44.341953 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:40:44.341961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:40:44.341968 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:40:44.341978 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:40:44.341985 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:40:44.341992 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:40:44.342000 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:40:44.342007 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:40:44.342015 kernel: ACPI: Interpreter enabled
Sep 12 17:40:44.342022 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:40:44.342029 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:40:44.342036 kernel: printk: console [ttyAMA0] enabled
Sep 12 17:40:44.342045 kernel: printk: bootconsole [pl11] disabled
Sep 12 17:40:44.342053 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 12 17:40:44.342060 kernel: iommu: Default domain type: Translated
Sep 12 17:40:44.342067 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:40:44.342075 kernel: efivars: Registered efivars operations
Sep 12 17:40:44.342082 kernel: vgaarb: loaded
Sep 12 17:40:44.342089 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:40:44.342097 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:40:44.342104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:40:44.342113 kernel: pnp: PnP ACPI init
Sep 12 17:40:44.342120 kernel: pnp: PnP ACPI: found 0 devices
Sep 12 17:40:44.342127 kernel: NET: Registered PF_INET protocol family
Sep 12 17:40:44.342135 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:40:44.342142 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:40:44.342150 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:40:44.342157 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:40:44.342164 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:40:44.342172 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:40:44.342181 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:40:44.342188 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:40:44.342196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:40:44.342203 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:40:44.342210 kernel: kvm [1]: HYP mode not available
Sep 12 17:40:44.342217 kernel: Initialise system trusted keyrings
Sep 12 17:40:44.342225 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:40:44.342232 kernel: Key type asymmetric registered
Sep 12 17:40:44.342239 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:40:44.342248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:40:44.342256 kernel: io scheduler mq-deadline registered
Sep 12 17:40:44.342263 kernel: io scheduler kyber registered
Sep 12 17:40:44.342270 kernel: io scheduler bfq registered
Sep 12 17:40:44.342278 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:40:44.342285 kernel: thunder_xcv, ver 1.0
Sep 12 17:40:44.342293 kernel: thunder_bgx, ver 1.0
Sep 12 17:40:44.342300 kernel: nicpf, ver 1.0
Sep 12 17:40:44.342307 kernel: nicvf, ver 1.0
Sep 12 17:40:44.342493 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:40:44.342576 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:40:43 UTC (1757698843)
Sep 12 17:40:44.342587 kernel: efifb: probing for efifb
Sep 12 17:40:44.342595 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 12 17:40:44.342602 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 12 17:40:44.342610 kernel: efifb: scrolling: redraw
Sep 12 17:40:44.342617 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:40:44.342625 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:40:44.342634 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:40:44.342642 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 12 17:40:44.342649 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:40:44.342656 kernel: No ACPI PMU IRQ for CPU0
Sep 12 17:40:44.342663 kernel: No ACPI PMU IRQ for CPU1
Sep 12 17:40:44.342671 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 12 17:40:44.342684 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:40:44.342691 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:40:44.342699 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:40:44.342708 kernel: Segment Routing with IPv6
Sep 12 17:40:44.342715 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:40:44.342722 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:40:44.342730 kernel: Key type dns_resolver registered
Sep 12 17:40:44.342737 kernel: registered taskstats version 1
Sep 12 17:40:44.342744 kernel: Loading compiled-in X.509 certificates
Sep 12 17:40:44.342752 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:40:44.342759 kernel: Key type .fscrypt registered
Sep 12 17:40:44.342766 kernel: Key type fscrypt-provisioning registered
Sep 12 17:40:44.342775 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:40:44.342782 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:40:44.342790 kernel: ima: No architecture policies found
Sep 12 17:40:44.342797 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:40:44.342804 kernel: clk: Disabling unused clocks
Sep 12 17:40:44.342811 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:40:44.342819 kernel: Run /init as init process
Sep 12 17:40:44.342826 kernel: with arguments:
Sep 12 17:40:44.342833 kernel: /init
Sep 12 17:40:44.342841 kernel: with environment:
Sep 12 17:40:44.342849 kernel: HOME=/
Sep 12 17:40:44.342856 kernel: TERM=linux
Sep 12 17:40:44.342863 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:40:44.342872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:40:44.342882 systemd[1]: Detected virtualization microsoft.
Sep 12 17:40:44.342890 systemd[1]: Detected architecture arm64.
Sep 12 17:40:44.344940 systemd[1]: Running in initrd.
Sep 12 17:40:44.344956 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:40:44.344964 systemd[1]: Hostname set to .
Sep 12 17:40:44.344972 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:40:44.344980 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:40:44.344988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:40:44.344996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:40:44.345005 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:40:44.345013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:40:44.345030 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:40:44.345053 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:40:44.345063 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:40:44.345071 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:40:44.345079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:40:44.345087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:40:44.345097 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:40:44.345105 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:40:44.345113 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:40:44.345121 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:40:44.345129 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:40:44.345137 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:40:44.345145 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:40:44.345153 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:40:44.345161 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:40:44.345171 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:40:44.345179 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:40:44.345187 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:40:44.345195 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:40:44.345203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:40:44.345211 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:40:44.345218 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:40:44.345226 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:40:44.345234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:40:44.345267 systemd-journald[217]: Collecting audit messages is disabled.
Sep 12 17:40:44.345287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:44.345296 systemd-journald[217]: Journal started
Sep 12 17:40:44.345317 systemd-journald[217]: Runtime Journal (/run/log/journal/5dd1b16d40074591b22bc16cf438a7c0) is 8.0M, max 78.5M, 70.5M free.
Sep 12 17:40:44.343142 systemd-modules-load[218]: Inserted module 'overlay'
Sep 12 17:40:44.376779 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:40:44.376827 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:40:44.376841 kernel: Bridge firewalling registered
Sep 12 17:40:44.380211 systemd-modules-load[218]: Inserted module 'br_netfilter'
Sep 12 17:40:44.393226 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:40:44.400559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:40:44.414558 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:40:44.426743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:40:44.437336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:44.459170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:44.469040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:40:44.502467 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:40:44.520284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:40:44.529744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:44.559135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:40:44.565868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:40:44.590286 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:40:44.597079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:40:44.611590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:40:44.640680 dracut-cmdline[249]: dracut-dracut-053
Sep 12 17:40:44.640088 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:40:44.659492 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:40:44.693646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:40:44.730092 systemd-resolved[261]: Positive Trust Anchors:
Sep 12 17:40:44.730105 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:40:44.730137 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:40:44.736974 systemd-resolved[261]: Defaulting to hostname 'linux'.
Sep 12 17:40:44.737983 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:40:44.744496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:40:44.832912 kernel: SCSI subsystem initialized
Sep 12 17:40:44.840909 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:40:44.850973 kernel: iscsi: registered transport (tcp)
Sep 12 17:40:44.868835 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:40:44.868866 kernel: QLogic iSCSI HBA Driver
Sep 12 17:40:44.903634 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:40:44.918167 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:40:44.953415 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:40:44.953464 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:40:44.960562 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:40:45.041921 kernel: raid6: neonx8 gen() 15771 MB/s
Sep 12 17:40:45.061906 kernel: raid6: neonx4 gen() 15670 MB/s
Sep 12 17:40:45.082908 kernel: raid6: neonx2 gen() 13227 MB/s
Sep 12 17:40:45.103905 kernel: raid6: neonx1 gen() 10513 MB/s
Sep 12 17:40:45.123903 kernel: raid6: int64x8 gen() 6977 MB/s
Sep 12 17:40:45.143904 kernel: raid6: int64x4 gen() 7349 MB/s
Sep 12 17:40:45.164904 kernel: raid6: int64x2 gen() 6131 MB/s
Sep 12 17:40:45.188307 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 12 17:40:45.188318 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s
Sep 12 17:40:45.213440 kernel: raid6: .... xor() 12061 MB/s, rmw enabled
Sep 12 17:40:45.213460 kernel: raid6: using neon recovery algorithm
Sep 12 17:40:45.225054 kernel: xor: measuring software checksum speed
Sep 12 17:40:45.225071 kernel: 8regs : 19816 MB/sec
Sep 12 17:40:45.228759 kernel: 32regs : 19646 MB/sec
Sep 12 17:40:45.232663 kernel: arm64_neon : 27025 MB/sec
Sep 12 17:40:45.237253 kernel: xor: using function: arm64_neon (27025 MB/sec)
Sep 12 17:40:45.288915 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:40:45.301951 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:40:45.318039 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:40:45.338148 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Sep 12 17:40:45.343657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:40:45.370224 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:40:45.386811 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Sep 12 17:40:45.414520 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:40:45.430172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:40:45.472644 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:40:45.494149 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:40:45.517434 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:40:45.533577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:40:45.548764 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:40:45.568119 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:40:45.592007 kernel: hv_vmbus: Vmbus version:5.3
Sep 12 17:40:45.597087 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:40:45.632461 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 12 17:40:45.632486 kernel: hv_vmbus: registering driver hid_hyperv
Sep 12 17:40:45.632505 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 12 17:40:45.632515 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 12 17:40:45.612798 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:40:45.716030 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 17:40:45.716062 kernel: hv_vmbus: registering driver hv_netvsc
Sep 12 17:40:45.716072 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 12 17:40:45.716092 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 12 17:40:45.716280 kernel: hv_vmbus: registering driver hv_storvsc
Sep 12 17:40:45.716291 kernel: PTP clock support registered
Sep 12 17:40:45.716300 kernel: scsi host0: storvsc_host_t
Sep 12 17:40:45.716410 kernel: scsi host1: storvsc_host_t
Sep 12 17:40:45.658535 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:40:45.658724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:45.744564 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 12 17:40:45.689686 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:45.724270 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:40:45.724598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:45.736807 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:45.783979 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 12 17:40:45.784315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:45.802126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:40:45.831672 kernel: hv_utils: Registering HyperV Utility Driver
Sep 12 17:40:45.831695 kernel: hv_vmbus: registering driver hv_utils
Sep 12 17:40:45.802278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:46.246562 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 12 17:40:46.246737 kernel: hv_utils: Heartbeat IC version 3.0
Sep 12 17:40:46.246749 kernel: hv_utils: Shutdown IC version 3.2
Sep 12 17:40:46.246758 kernel: hv_utils: TimeSync IC version 4.0
Sep 12 17:40:46.246768 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:40:46.246786 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: VF slot 1 added
Sep 12 17:40:45.824104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:46.240498 systemd-resolved[261]: Clock change detected. Flushing caches.
Sep 12 17:40:46.255599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:46.286863 kernel: hv_vmbus: registering driver hv_pci
Sep 12 17:40:46.286891 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 12 17:40:46.287043 kernel: hv_pci 3cf1144d-19ca-4c4b-ae6f-6f04d9382b9a: PCI VMBus probing: Using version 0x10004
Sep 12 17:40:46.287082 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:46.337166 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 12 17:40:46.337350 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 12 17:40:46.337442 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 12 17:40:46.337535 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 12 17:40:46.337626 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 12 17:40:46.491978 kernel: hv_pci 3cf1144d-19ca-4c4b-ae6f-6f04d9382b9a: PCI host bridge to bus 19ca:00
Sep 12 17:40:46.492215 kernel: pci_bus 19ca:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 12 17:40:46.492326 kernel: pci_bus 19ca:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 17:40:46.508165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:40:46.508217 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 12 17:40:46.509597 kernel: pci 19ca:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 12 17:40:46.522141 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:46.542776 kernel: pci 19ca:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 17:40:46.542940 kernel: pci 19ca:00:02.0: enabling Extended Tags
Sep 12 17:40:46.542958 kernel: pci 19ca:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 19ca:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 12 17:40:46.570093 kernel: pci_bus 19ca:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 17:40:46.570295 kernel: pci 19ca:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 17:40:46.613698 kernel: mlx5_core 19ca:00:02.0: enabling device (0000 -> 0002)
Sep 12 17:40:46.620841 kernel: mlx5_core 19ca:00:02.0: firmware version: 16.31.2424
Sep 12 17:40:46.896108 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: VF registering: eth1
Sep 12 17:40:46.896327 kernel: mlx5_core 19ca:00:02.0 eth1: joined to eth0
Sep 12 17:40:46.907082 kernel: mlx5_core 19ca:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 12 17:40:46.920823 kernel: mlx5_core 19ca:00:02.0 enP6602s1: renamed from eth1
Sep 12 17:40:47.031896 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 12 17:40:47.181652 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 12 17:40:47.205818 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (482)
Sep 12 17:40:47.213834 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (492)
Sep 12 17:40:47.226990 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 12 17:40:47.234086 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 12 17:40:47.255340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 17:40:47.271043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:40:47.299857 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:40:47.308814 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:40:47.319818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:40:48.322830 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:40:48.323495 disk-uuid[603]: The operation has completed successfully.
Sep 12 17:40:48.389387 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:40:48.389483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:40:48.418000 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:40:48.431137 sh[716]: Success
Sep 12 17:40:48.471828 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:40:48.839244 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:40:48.848911 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:40:48.859565 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:40:48.898931 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:40:48.898983 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:40:48.906647 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:40:48.912448 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:40:48.917533 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:40:49.380110 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:40:49.386015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:40:49.398038 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:40:49.420997 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:40:49.450891 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:40:49.450938 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:40:49.455592 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:40:49.525460 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:40:49.547911 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:40:49.551103 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:40:49.567086 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:40:49.580205 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:40:49.584382 systemd-networkd[896]: lo: Link UP
Sep 12 17:40:49.584393 systemd-networkd[896]: lo: Gained carrier
Sep 12 17:40:49.585983 systemd-networkd[896]: Enumeration completed
Sep 12 17:40:49.586074 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:40:49.596083 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:40:49.596087 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:40:49.596889 systemd[1]: Reached target network.target - Network.
Sep 12 17:40:49.605852 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:40:49.649051 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:40:49.776777 kernel: mlx5_core 19ca:00:02.0 enP6602s1: Link up
Sep 12 17:40:49.777043 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 17:40:49.857070 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: Data path switched to VF: enP6602s1
Sep 12 17:40:49.856733 systemd-networkd[896]: enP6602s1: Link UP
Sep 12 17:40:49.856842 systemd-networkd[896]: eth0: Link UP
Sep 12 17:40:49.856943 systemd-networkd[896]: eth0: Gained carrier
Sep 12 17:40:49.856952 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:40:49.880025 systemd-networkd[896]: enP6602s1: Gained carrier
Sep 12 17:40:49.896858 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 17:40:50.597784 ignition[905]: Ignition 2.19.0
Sep 12 17:40:50.597819 ignition[905]: Stage: fetch-offline
Sep 12 17:40:50.601621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:40:50.597858 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:50.597867 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:40:50.597960 ignition[905]: parsed url from cmdline: ""
Sep 12 17:40:50.624953 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:40:50.597964 ignition[905]: no config URL provided
Sep 12 17:40:50.597968 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:40:50.597975 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:40:50.597980 ignition[905]: failed to fetch config: resource requires networking
Sep 12 17:40:50.598153 ignition[905]: Ignition finished successfully
Sep 12 17:40:50.651194 ignition[913]: Ignition 2.19.0
Sep 12 17:40:50.651200 ignition[913]: Stage: fetch
Sep 12 17:40:50.651392 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:50.651401 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:40:50.651495 ignition[913]: parsed url from cmdline: ""
Sep 12 17:40:50.651498 ignition[913]: no config URL provided
Sep 12 17:40:50.651502 ignition[913]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:40:50.651510 ignition[913]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:40:50.651536 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 12 17:40:50.749009 ignition[913]: GET result: OK
Sep 12 17:40:50.749092 ignition[913]: config has been read from IMDS userdata
Sep 12 17:40:50.753152 unknown[913]: fetched base config from "system"
Sep 12 17:40:50.749147 ignition[913]: parsing config with SHA512: 8c3b578600754a38ba181643c0b7801ed5e60d9d87e66dda026c2efc27cf959c3f117ad740c24d5515d5a5b43d77cc10f03eef8279f1f0a3a30bfba8d928e33d
Sep 12 17:40:50.753165 unknown[913]: fetched base config from "system"
Sep 12 17:40:50.753577 ignition[913]: fetch: fetch complete
Sep 12 17:40:50.753171 unknown[913]: fetched user config from "azure"
Sep 12 17:40:50.753581 ignition[913]: fetch: fetch passed
Sep 12 17:40:50.759782 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:40:50.753635 ignition[913]: Ignition finished successfully
Sep 12 17:40:50.784093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:40:50.806181 ignition[919]: Ignition 2.19.0
Sep 12 17:40:50.806191 ignition[919]: Stage: kargs
Sep 12 17:40:50.810896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:40:50.806380 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:50.806392 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:40:50.807312 ignition[919]: kargs: kargs passed
Sep 12 17:40:50.807354 ignition[919]: Ignition finished successfully
Sep 12 17:40:50.834128 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:40:50.859497 ignition[925]: Ignition 2.19.0
Sep 12 17:40:50.859514 ignition[925]: Stage: disks
Sep 12 17:40:50.859725 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:50.866125 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:40:50.859735 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:40:50.872325 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:40:50.863839 ignition[925]: disks: disks passed
Sep 12 17:40:50.883039 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:40:50.863892 ignition[925]: Ignition finished successfully
Sep 12 17:40:50.894504 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:40:50.905438 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:40:50.916543 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:40:50.941087 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:40:51.032505 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 12 17:40:51.038600 systemd-networkd[896]: eth0: Gained IPv6LL
Sep 12 17:40:51.044140 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:40:51.061980 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:40:51.120828 kernel: EXT4-fs (sda9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:40:51.121239 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:40:51.129987 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:40:51.181916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:40:51.205010 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Sep 12 17:40:51.218128 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:40:51.218167 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:40:51.222092 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:40:51.230813 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:40:51.245914 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:40:51.253072 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:40:51.263697 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:40:51.263737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:40:51.295239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:40:51.303005 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:40:51.319119 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:40:51.995088 coreos-metadata[961]: Sep 12 17:40:51.995 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:40:52.006198 coreos-metadata[961]: Sep 12 17:40:52.006 INFO Fetch successful
Sep 12 17:40:52.006198 coreos-metadata[961]: Sep 12 17:40:52.006 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:40:52.023523 coreos-metadata[961]: Sep 12 17:40:52.023 INFO Fetch successful
Sep 12 17:40:52.040023 coreos-metadata[961]: Sep 12 17:40:52.039 INFO wrote hostname ci-4081.3.6-a-d7d9773d19 to /sysroot/etc/hostname
Sep 12 17:40:52.041444 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:40:52.513568 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:40:52.573663 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:40:52.593699 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:40:52.606332 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:40:53.998725 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:40:54.017011 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:40:54.025968 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:40:54.049812 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:40:54.050602 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:40:54.073249 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:40:54.093657 ignition[1063]: INFO : Ignition 2.19.0
Sep 12 17:40:54.093657 ignition[1063]: INFO : Stage: mount
Sep 12 17:40:54.093657 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:54.093657 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:40:54.093657 ignition[1063]: INFO : mount: mount passed
Sep 12 17:40:54.093657 ignition[1063]: INFO : Ignition finished successfully
Sep 12 17:40:54.097364 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:40:54.114025 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:40:54.137943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:40:54.181674 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074)
Sep 12 17:40:54.181742 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:40:54.192030 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:40:54.192067 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:40:54.199813 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:40:54.201699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:40:54.224706 ignition[1091]: INFO : Ignition 2.19.0
Sep 12 17:40:54.230180 ignition[1091]: INFO : Stage: files
Sep 12 17:40:54.230180 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:54.230180 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:40:54.230180 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:40:54.279388 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:40:54.279388 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:40:54.407532 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:40:54.415196 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:40:54.415196 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:40:54.410237 unknown[1091]: wrote ssh authorized keys file for user: core
Sep 12 17:40:54.460172 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:40:54.471481 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 17:40:54.751485 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 17:40:55.578147 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:40:55.801460 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:40:55.801460 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:40:55.849859 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:40:55.861910 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:40:55.861910 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:40:55.861910 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:40:55.861910 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:40:55.861910 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:40:55.861910 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:40:55.861910 ignition[1091]: INFO : files: files passed
Sep 12 17:40:55.861910 ignition[1091]: INFO : Ignition finished successfully
Sep 12 17:40:55.863899 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:40:55.900115 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:40:55.917980 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:40:55.985468 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:40:55.985468 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:40:55.937675 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:40:56.016259 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:40:55.942932 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:40:55.957686 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:40:55.965575 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:40:56.003660 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:40:56.034760 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:40:56.036846 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:40:56.047652 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:40:56.058374 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:40:56.071346 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:40:56.094118 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:40:56.124196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:40:56.140084 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:40:56.159746 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:40:56.159891 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:40:56.171791 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:40:56.184516 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:40:56.196869 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:40:56.207884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:40:56.207963 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:40:56.223914 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:40:56.236129 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:40:56.246437 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:40:56.257367 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:40:56.269437 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:40:56.281552 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:40:56.299844 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:40:56.306607 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:40:56.318982 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:40:56.329390 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:40:56.339859 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:40:56.339949 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:40:56.354597 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:40:56.361018 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:40:56.374169 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:40:56.374222 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:40:56.386867 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:40:56.386948 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:40:56.399269 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:40:56.399331 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:40:56.406593 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:40:56.475913 ignition[1144]: INFO : Ignition 2.19.0
Sep 12 17:40:56.475913 ignition[1144]: INFO : Stage: umount
Sep 12 17:40:56.475913 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:56.475913 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:40:56.475913 ignition[1144]: INFO : umount: umount passed
Sep 12 17:40:56.475913 ignition[1144]: INFO : Ignition finished successfully
Sep 12 17:40:56.406652 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:40:56.417606 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:40:56.417654 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:40:56.449123 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:40:56.461906 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:40:56.461994 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:40:56.471970 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:40:56.481632 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:40:56.481703 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:40:56.491946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:40:56.492003 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:40:56.519772 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:40:56.519900 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:40:56.536133 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:40:56.536242 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:40:56.560451 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:40:56.560518 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:40:56.571609 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:40:56.571665 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:40:56.583987 systemd[1]: Stopped target network.target - Network.
Sep 12 17:40:56.594572 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:40:56.594653 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:40:56.606844 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:40:56.618744 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:40:56.621825 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:40:56.633751 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:40:56.648510 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:40:56.654765 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:40:56.654840 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:40:56.667255 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:40:56.667309 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:40:56.679011 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:40:56.679075 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:40:56.689282 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:40:56.689339 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:40:56.700370 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:40:56.948595 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: Data path switched from VF: enP6602s1 Sep 12 17:40:56.710661 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:40:56.715473 systemd-networkd[896]: eth0: DHCPv6 lease lost Sep 12 17:40:56.723346 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:40:56.723960 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:40:56.724063 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:40:56.733917 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:40:56.734081 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:40:56.740329 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:40:56.740401 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:40:56.752639 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:40:56.752707 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:40:56.775043 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:40:56.788132 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:40:56.788221 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:40:56.801360 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:40:56.819942 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:40:56.820037 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Sep 12 17:40:56.835698 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:40:56.835825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:40:56.847034 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:40:56.847099 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:40:56.858255 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:40:56.858312 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:40:56.871144 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:40:56.871302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:40:56.883286 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:40:56.883371 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:40:56.894636 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:40:56.894744 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:40:56.905830 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:40:56.905891 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:40:56.922342 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:40:56.922406 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:40:56.948667 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:40:56.948730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:40:56.971005 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:40:57.226197 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Sep 12 17:40:56.986209 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 12 17:40:56.986291 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:40:56.998915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:40:56.998969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:40:57.011186 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:40:57.011321 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:40:57.032543 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:40:57.032678 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:40:57.043541 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:40:57.073062 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:40:57.092225 systemd[1]: Switching root. Sep 12 17:40:57.286235 systemd-journald[217]: Journal stopped 
Sep 12 17:40:44.341054 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 12 17:40:44.341070 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 12 17:40:44.341084 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 12 17:40:44.341091 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 12 17:40:44.341098 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:40:44.341105 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 12 17:40:44.341112 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 12 17:40:44.341119 kernel: psci: probing for conduit method from ACPI. Sep 12 17:40:44.341126 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 17:40:44.341133 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:40:44.341139 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 12 17:40:44.341146 kernel: psci: SMC Calling Convention v1.4 Sep 12 17:40:44.341153 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 12 17:40:44.341159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 12 17:40:44.341168 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 17:40:44.341183 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 17:40:44.341193 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 17:40:44.341200 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:40:44.341207 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:40:44.341213 kernel: CPU features: detected: Hardware dirty bit management Sep 12 17:40:44.341220 kernel: CPU features: detected: Spectre-BHB Sep 12 17:40:44.341227 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:40:44.341234 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:40:44.341241 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 17:40:44.341247 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 12 17:40:44.341256 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:40:44.341263 kernel: alternatives: applying boot alternatives Sep 12 17:40:44.341271 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56 Sep 12 17:40:44.341278 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 12 17:40:44.341285 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:40:44.341292 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:40:44.341299 kernel: Fallback order for Node 0: 0 Sep 12 17:40:44.341306 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 12 17:40:44.341312 kernel: Policy zone: Normal Sep 12 17:40:44.341319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:40:44.341326 kernel: software IO TLB: area num 2. Sep 12 17:40:44.341334 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Sep 12 17:40:44.341341 kernel: Memory: 3982564K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211596K reserved, 0K cma-reserved) Sep 12 17:40:44.341348 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:40:44.341355 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:40:44.341362 kernel: rcu: RCU event tracing is enabled. Sep 12 17:40:44.341369 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:40:44.341376 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:40:44.341383 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:40:44.341390 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 17:40:44.341397 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:40:44.341403 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:40:44.341412 kernel: GICv3: 960 SPIs implemented Sep 12 17:40:44.341419 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:40:44.341425 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:40:44.341432 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 17:40:44.341439 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 12 17:40:44.341446 kernel: ITS: No ITS available, not enabling LPIs Sep 12 17:40:44.341453 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:40:44.341460 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:40:44.341466 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 17:40:44.341473 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 17:40:44.341480 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 17:40:44.341489 kernel: Console: colour dummy device 80x25 Sep 12 17:40:44.341496 kernel: printk: console [tty1] enabled Sep 12 17:40:44.341503 kernel: ACPI: Core revision 20230628 Sep 12 17:40:44.341510 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 17:40:44.341517 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:40:44.341524 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:40:44.341531 kernel: landlock: Up and running. Sep 12 17:40:44.341538 kernel: SELinux: Initializing. 
Sep 12 17:40:44.341545 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:40:44.341552 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:40:44.341561 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:40:44.341568 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:40:44.341575 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 12 17:40:44.341582 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 12 17:40:44.341589 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 17:40:44.341596 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:40:44.341603 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:40:44.341617 kernel: Remapping and enabling EFI services. Sep 12 17:40:44.341625 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:40:44.341632 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:40:44.341639 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 12 17:40:44.341648 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:40:44.341655 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 17:40:44.341663 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:40:44.341670 kernel: SMP: Total of 2 processors activated. 
Sep 12 17:40:44.341677 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:40:44.341686 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 12 17:40:44.341694 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:40:44.341701 kernel: CPU features: detected: CRC32 instructions Sep 12 17:40:44.341709 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:40:44.341716 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:40:44.341723 kernel: CPU features: detected: Privileged Access Never Sep 12 17:40:44.341731 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:40:44.341738 kernel: alternatives: applying system-wide alternatives Sep 12 17:40:44.341745 kernel: devtmpfs: initialized Sep 12 17:40:44.341754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:40:44.341762 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:40:44.341769 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:40:44.341776 kernel: SMBIOS 3.1.0 present. 
Sep 12 17:40:44.341784 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 12 17:40:44.341791 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:40:44.341799 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:40:44.341806 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:40:44.341813 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:40:44.341822 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:40:44.341830 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 12 17:40:44.341837 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:40:44.341844 kernel: cpuidle: using governor menu Sep 12 17:40:44.341852 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 17:40:44.341859 kernel: ASID allocator initialised with 32768 entries Sep 12 17:40:44.341866 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:40:44.341874 kernel: Serial: AMBA PL011 UART driver Sep 12 17:40:44.341881 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:40:44.341890 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 17:40:44.341916 kernel: Modules: 508992 pages in range for PLT usage Sep 12 17:40:44.341924 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:40:44.341931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:40:44.341939 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:40:44.341946 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:40:44.341953 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:40:44.341961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:40:44.341968 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:40:44.341978 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:40:44.341985 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:40:44.341992 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:40:44.342000 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:40:44.342007 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:40:44.342015 kernel: ACPI: Interpreter enabled Sep 12 17:40:44.342022 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:40:44.342029 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:40:44.342036 kernel: printk: console [ttyAMA0] enabled Sep 12 17:40:44.342045 kernel: printk: bootconsole [pl11] disabled Sep 12 17:40:44.342053 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 12 17:40:44.342060 kernel: iommu: Default domain type: Translated Sep 12 17:40:44.342067 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:40:44.342075 kernel: efivars: Registered efivars operations Sep 12 17:40:44.342082 kernel: vgaarb: loaded Sep 12 17:40:44.342089 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:40:44.342097 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:40:44.342104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:40:44.342113 kernel: pnp: PnP ACPI init Sep 12 17:40:44.342120 kernel: pnp: PnP ACPI: found 0 devices Sep 12 17:40:44.342127 kernel: NET: Registered PF_INET protocol family Sep 12 17:40:44.342135 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:40:44.342142 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:40:44.342150 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:40:44.342157 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Sep 12 17:40:44.342164 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:40:44.342172 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:40:44.342181 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:40:44.342188 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:40:44.342196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:40:44.342203 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:40:44.342210 kernel: kvm [1]: HYP mode not available Sep 12 17:40:44.342217 kernel: Initialise system trusted keyrings Sep 12 17:40:44.342225 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:40:44.342232 kernel: Key type asymmetric registered Sep 12 17:40:44.342239 kernel: Asymmetric key parser 'x509' registered Sep 12 17:40:44.342248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 17:40:44.342256 kernel: io scheduler mq-deadline registered Sep 12 17:40:44.342263 kernel: io scheduler kyber registered Sep 12 17:40:44.342270 kernel: io scheduler bfq registered Sep 12 17:40:44.342278 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:40:44.342285 kernel: thunder_xcv, ver 1.0 Sep 12 17:40:44.342293 kernel: thunder_bgx, ver 1.0 Sep 12 17:40:44.342300 kernel: nicpf, ver 1.0 Sep 12 17:40:44.342307 kernel: nicvf, ver 1.0 Sep 12 17:40:44.342493 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:40:44.342576 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:40:43 UTC (1757698843) Sep 12 17:40:44.342587 kernel: efifb: probing for efifb Sep 12 17:40:44.342595 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 17:40:44.342602 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 17:40:44.342610 kernel: efifb: scrolling: redraw Sep 12 17:40:44.342617 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Sep 12 17:40:44.342625 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:40:44.342634 kernel: fb0: EFI VGA frame buffer device Sep 12 17:40:44.342642 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 12 17:40:44.342649 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:40:44.342656 kernel: No ACPI PMU IRQ for CPU0 Sep 12 17:40:44.342663 kernel: No ACPI PMU IRQ for CPU1 Sep 12 17:40:44.342671 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 12 17:40:44.342684 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 17:40:44.342691 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:40:44.342699 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:40:44.342708 kernel: Segment Routing with IPv6 Sep 12 17:40:44.342715 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:40:44.342722 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:40:44.342730 kernel: Key type dns_resolver registered Sep 12 17:40:44.342737 kernel: registered taskstats version 1 Sep 12 17:40:44.342744 kernel: Loading compiled-in X.509 certificates Sep 12 17:40:44.342752 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02' Sep 12 17:40:44.342759 kernel: Key type .fscrypt registered Sep 12 17:40:44.342766 kernel: Key type fscrypt-provisioning registered Sep 12 17:40:44.342775 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:40:44.342782 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:40:44.342790 kernel: ima: No architecture policies found Sep 12 17:40:44.342797 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:40:44.342804 kernel: clk: Disabling unused clocks Sep 12 17:40:44.342811 kernel: Freeing unused kernel memory: 39488K Sep 12 17:40:44.342819 kernel: Run /init as init process Sep 12 17:40:44.342826 kernel: with arguments: Sep 12 17:40:44.342833 kernel: /init Sep 12 17:40:44.342841 kernel: with environment: Sep 12 17:40:44.342849 kernel: HOME=/ Sep 12 17:40:44.342856 kernel: TERM=linux Sep 12 17:40:44.342863 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:40:44.342872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:40:44.342882 systemd[1]: Detected virtualization microsoft. Sep 12 17:40:44.342890 systemd[1]: Detected architecture arm64. Sep 12 17:40:44.344940 systemd[1]: Running in initrd. Sep 12 17:40:44.344956 systemd[1]: No hostname configured, using default hostname. Sep 12 17:40:44.344964 systemd[1]: Hostname set to . Sep 12 17:40:44.344972 systemd[1]: Initializing machine ID from random generator. Sep 12 17:40:44.344980 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:40:44.344988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:40:44.344996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:40:44.345005 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 12 17:40:44.345013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:40:44.345030 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:40:44.345053 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:40:44.345063 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:40:44.345071 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:40:44.345079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:40:44.345087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:40:44.345097 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:40:44.345105 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:40:44.345113 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:40:44.345121 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:40:44.345129 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:40:44.345137 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:40:44.345145 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:40:44.345153 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:40:44.345161 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:40:44.345171 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:40:44.345179 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:40:44.345187 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 12 17:40:44.345195 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:40:44.345203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:40:44.345211 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:40:44.345218 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:40:44.345226 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:40:44.345234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:40:44.345267 systemd-journald[217]: Collecting audit messages is disabled. Sep 12 17:40:44.345287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:40:44.345296 systemd-journald[217]: Journal started Sep 12 17:40:44.345317 systemd-journald[217]: Runtime Journal (/run/log/journal/5dd1b16d40074591b22bc16cf438a7c0) is 8.0M, max 78.5M, 70.5M free. Sep 12 17:40:44.343142 systemd-modules-load[218]: Inserted module 'overlay' Sep 12 17:40:44.376779 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:40:44.376827 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:40:44.376841 kernel: Bridge firewalling registered Sep 12 17:40:44.380211 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 12 17:40:44.393226 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:40:44.400559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:40:44.414558 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:40:44.426743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:40:44.437336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 17:40:44.459170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:40:44.469040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:40:44.502467 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:40:44.520284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:40:44.529744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:40:44.559135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:40:44.565868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:40:44.590286 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:40:44.597079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:40:44.611590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:40:44.640680 dracut-cmdline[249]: dracut-dracut-053 Sep 12 17:40:44.640088 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:40:44.659492 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56 Sep 12 17:40:44.693646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:40:44.730092 systemd-resolved[261]: Positive Trust Anchors: Sep 12 17:40:44.730105 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:40:44.730137 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:40:44.736974 systemd-resolved[261]: Defaulting to hostname 'linux'. Sep 12 17:40:44.737983 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:40:44.744496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:40:44.832912 kernel: SCSI subsystem initialized Sep 12 17:40:44.840909 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:40:44.850973 kernel: iscsi: registered transport (tcp) Sep 12 17:40:44.868835 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:40:44.868866 kernel: QLogic iSCSI HBA Driver Sep 12 17:40:44.903634 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:40:44.918167 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:40:44.953415 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 17:40:44.953464 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:40:44.960562 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:40:45.041921 kernel: raid6: neonx8 gen() 15771 MB/s Sep 12 17:40:45.061906 kernel: raid6: neonx4 gen() 15670 MB/s Sep 12 17:40:45.082908 kernel: raid6: neonx2 gen() 13227 MB/s Sep 12 17:40:45.103905 kernel: raid6: neonx1 gen() 10513 MB/s Sep 12 17:40:45.123903 kernel: raid6: int64x8 gen() 6977 MB/s Sep 12 17:40:45.143904 kernel: raid6: int64x4 gen() 7349 MB/s Sep 12 17:40:45.164904 kernel: raid6: int64x2 gen() 6131 MB/s Sep 12 17:40:45.188307 kernel: raid6: int64x1 gen() 5058 MB/s Sep 12 17:40:45.188318 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s Sep 12 17:40:45.213440 kernel: raid6: .... xor() 12061 MB/s, rmw enabled Sep 12 17:40:45.213460 kernel: raid6: using neon recovery algorithm Sep 12 17:40:45.225054 kernel: xor: measuring software checksum speed Sep 12 17:40:45.225071 kernel: 8regs : 19816 MB/sec Sep 12 17:40:45.228759 kernel: 32regs : 19646 MB/sec Sep 12 17:40:45.232663 kernel: arm64_neon : 27025 MB/sec Sep 12 17:40:45.237253 kernel: xor: using function: arm64_neon (27025 MB/sec) Sep 12 17:40:45.288915 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:40:45.301951 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:40:45.318039 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:40:45.338148 systemd-udevd[436]: Using default interface naming scheme 'v255'. Sep 12 17:40:45.343657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:40:45.370224 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:40:45.386811 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Sep 12 17:40:45.414520 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 17:40:45.430172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:40:45.472644 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:40:45.494149 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:40:45.517434 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:40:45.533577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:40:45.548764 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:40:45.568119 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:40:45.592007 kernel: hv_vmbus: Vmbus version:5.3 Sep 12 17:40:45.597087 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:40:45.632461 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 17:40:45.632486 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 17:40:45.632505 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 17:40:45.632515 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 12 17:40:45.612798 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:40:45.716030 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 17:40:45.716062 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 17:40:45.716072 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 12 17:40:45.716092 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 17:40:45.716280 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 17:40:45.716291 kernel: PTP clock support registered Sep 12 17:40:45.716300 kernel: scsi host0: storvsc_host_t Sep 12 17:40:45.716410 kernel: scsi host1: storvsc_host_t Sep 12 17:40:45.658535 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:40:45.658724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:40:45.744564 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 12 17:40:45.689686 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:40:45.724270 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:40:45.724598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:40:45.736807 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:40:45.783979 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 12 17:40:45.784315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:40:45.802126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:40:45.831672 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 17:40:45.831695 kernel: hv_vmbus: registering driver hv_utils Sep 12 17:40:45.802278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:46.246562 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 17:40:46.246737 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 17:40:46.246749 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 17:40:46.246758 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 17:40:46.246768 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:40:46.246786 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: VF slot 1 added Sep 12 17:40:45.824104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:40:46.240498 systemd-resolved[261]: Clock change detected. Flushing caches. Sep 12 17:40:46.255599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:40:46.286863 kernel: hv_vmbus: registering driver hv_pci Sep 12 17:40:46.286891 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 17:40:46.287043 kernel: hv_pci 3cf1144d-19ca-4c4b-ae6f-6f04d9382b9a: PCI VMBus probing: Using version 0x10004 Sep 12 17:40:46.287082 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 12 17:40:46.337166 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 12 17:40:46.337350 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 17:40:46.337442 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:40:46.337535 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 12 17:40:46.337626 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 12 17:40:46.491978 kernel: hv_pci 3cf1144d-19ca-4c4b-ae6f-6f04d9382b9a: PCI host bridge to bus 19ca:00 Sep 12 17:40:46.492215 kernel: pci_bus 19ca:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 12 17:40:46.492326 kernel: pci_bus 19ca:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 17:40:46.508165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:40:46.508217 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:40:46.509597 kernel: pci 19ca:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 12 17:40:46.522141 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:40:46.542776 kernel: pci 19ca:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 17:40:46.542940 kernel: pci 19ca:00:02.0: enabling Extended Tags Sep 12 17:40:46.542958 kernel: pci 19ca:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 19ca:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 12 17:40:46.570093 kernel: pci_bus 19ca:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 17:40:46.570295 kernel: pci 19ca:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 17:40:46.613698 kernel: mlx5_core 19ca:00:02.0: enabling device (0000 -> 0002) Sep 12 17:40:46.620841 kernel: mlx5_core 19ca:00:02.0: firmware version: 16.31.2424 Sep 12 17:40:46.896108 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: VF registering: eth1 Sep 12 17:40:46.896327 kernel: mlx5_core 19ca:00:02.0 eth1: joined to eth0 Sep 12 17:40:46.907082 kernel: mlx5_core 19ca:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 12 17:40:46.920823 kernel: mlx5_core 19ca:00:02.0 enP6602s1: renamed from eth1 Sep 12 17:40:47.031896 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 12 17:40:47.181652 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 12 17:40:47.205818 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (482) Sep 12 17:40:47.213834 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (492) Sep 12 17:40:47.226990 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 12 17:40:47.234086 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 12 17:40:47.255340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Sep 12 17:40:47.271043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:40:47.299857 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:40:47.308814 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:40:47.319818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:40:48.322830 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:40:48.323495 disk-uuid[603]: The operation has completed successfully. Sep 12 17:40:48.389387 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:40:48.389483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:40:48.418000 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:40:48.431137 sh[716]: Success Sep 12 17:40:48.471828 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 17:40:48.839244 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:40:48.848911 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:40:48.859565 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:40:48.898931 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129 Sep 12 17:40:48.898983 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:40:48.906647 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:40:48.912448 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:40:48.917533 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:40:49.380110 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:40:49.386015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Sep 12 17:40:49.398038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:40:49.420997 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:40:49.450891 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:40:49.450938 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:40:49.455592 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:40:49.525460 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:40:49.547911 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:40:49.551103 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:40:49.567086 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:40:49.580205 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:40:49.584382 systemd-networkd[896]: lo: Link UP Sep 12 17:40:49.584393 systemd-networkd[896]: lo: Gained carrier Sep 12 17:40:49.585983 systemd-networkd[896]: Enumeration completed Sep 12 17:40:49.586074 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:40:49.596083 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:40:49.596087 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:40:49.596889 systemd[1]: Reached target network.target - Network. Sep 12 17:40:49.605852 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:40:49.649051 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 17:40:49.776777 kernel: mlx5_core 19ca:00:02.0 enP6602s1: Link up Sep 12 17:40:49.777043 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 17:40:49.857070 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: Data path switched to VF: enP6602s1 Sep 12 17:40:49.856733 systemd-networkd[896]: enP6602s1: Link UP Sep 12 17:40:49.856842 systemd-networkd[896]: eth0: Link UP Sep 12 17:40:49.856943 systemd-networkd[896]: eth0: Gained carrier Sep 12 17:40:49.856952 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:40:49.880025 systemd-networkd[896]: enP6602s1: Gained carrier Sep 12 17:40:49.896858 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:40:50.597784 ignition[905]: Ignition 2.19.0 Sep 12 17:40:50.597819 ignition[905]: Stage: fetch-offline Sep 12 17:40:50.601621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:40:50.597858 ignition[905]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:40:50.597867 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:40:50.597960 ignition[905]: parsed url from cmdline: "" Sep 12 17:40:50.624953 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 17:40:50.597964 ignition[905]: no config URL provided Sep 12 17:40:50.597968 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:40:50.597975 ignition[905]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:40:50.597980 ignition[905]: failed to fetch config: resource requires networking Sep 12 17:40:50.598153 ignition[905]: Ignition finished successfully Sep 12 17:40:50.651194 ignition[913]: Ignition 2.19.0 Sep 12 17:40:50.651200 ignition[913]: Stage: fetch Sep 12 17:40:50.651392 ignition[913]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:40:50.651401 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:40:50.651495 ignition[913]: parsed url from cmdline: "" Sep 12 17:40:50.651498 ignition[913]: no config URL provided Sep 12 17:40:50.651502 ignition[913]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:40:50.651510 ignition[913]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:40:50.651536 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 17:40:50.749009 ignition[913]: GET result: OK Sep 12 17:40:50.749092 ignition[913]: config has been read from IMDS userdata Sep 12 17:40:50.753152 unknown[913]: fetched base config from "system" Sep 12 17:40:50.749147 ignition[913]: parsing config with SHA512: 8c3b578600754a38ba181643c0b7801ed5e60d9d87e66dda026c2efc27cf959c3f117ad740c24d5515d5a5b43d77cc10f03eef8279f1f0a3a30bfba8d928e33d Sep 12 17:40:50.753165 unknown[913]: fetched base config from "system" Sep 12 17:40:50.753577 ignition[913]: fetch: fetch complete Sep 12 17:40:50.753171 unknown[913]: fetched user config from "azure" Sep 12 17:40:50.753581 ignition[913]: fetch: fetch passed Sep 12 17:40:50.759782 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Sep 12 17:40:50.753635 ignition[913]: Ignition finished successfully Sep 12 17:40:50.784093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:40:50.806181 ignition[919]: Ignition 2.19.0 Sep 12 17:40:50.806191 ignition[919]: Stage: kargs Sep 12 17:40:50.810896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:40:50.806380 ignition[919]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:40:50.806392 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:40:50.807312 ignition[919]: kargs: kargs passed Sep 12 17:40:50.807354 ignition[919]: Ignition finished successfully Sep 12 17:40:50.834128 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:40:50.859497 ignition[925]: Ignition 2.19.0 Sep 12 17:40:50.859514 ignition[925]: Stage: disks Sep 12 17:40:50.859725 ignition[925]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:40:50.866125 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:40:50.859735 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:40:50.872325 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:40:50.863839 ignition[925]: disks: disks passed Sep 12 17:40:50.883039 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:40:50.863892 ignition[925]: Ignition finished successfully Sep 12 17:40:50.894504 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:40:50.905438 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:40:50.916543 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:40:50.941087 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 17:40:51.032505 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 12 17:40:51.038600 systemd-networkd[896]: eth0: Gained IPv6LL Sep 12 17:40:51.044140 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:40:51.061980 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:40:51.120828 kernel: EXT4-fs (sda9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none. Sep 12 17:40:51.121239 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:40:51.129987 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:40:51.181916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:40:51.205010 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944) Sep 12 17:40:51.218128 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:40:51.218167 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:40:51.222092 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:40:51.230813 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:40:51.245914 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:40:51.253072 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:40:51.263697 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:40:51.263737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:40:51.295239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:40:51.303005 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:40:51.319119 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 17:40:51.995088 coreos-metadata[961]: Sep 12 17:40:51.995 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:40:52.006198 coreos-metadata[961]: Sep 12 17:40:52.006 INFO Fetch successful Sep 12 17:40:52.006198 coreos-metadata[961]: Sep 12 17:40:52.006 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:40:52.023523 coreos-metadata[961]: Sep 12 17:40:52.023 INFO Fetch successful Sep 12 17:40:52.040023 coreos-metadata[961]: Sep 12 17:40:52.039 INFO wrote hostname ci-4081.3.6-a-d7d9773d19 to /sysroot/etc/hostname Sep 12 17:40:52.041444 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:40:52.513568 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:40:52.573663 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:40:52.593699 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:40:52.606332 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:40:53.998725 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:40:54.017011 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:40:54.025968 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:40:54.049812 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:40:54.050602 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:40:54.073249 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 17:40:54.093657 ignition[1063]: INFO : Ignition 2.19.0 Sep 12 17:40:54.093657 ignition[1063]: INFO : Stage: mount Sep 12 17:40:54.093657 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:40:54.093657 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:40:54.093657 ignition[1063]: INFO : mount: mount passed Sep 12 17:40:54.093657 ignition[1063]: INFO : Ignition finished successfully Sep 12 17:40:54.097364 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:40:54.114025 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:40:54.137943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:40:54.181674 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074) Sep 12 17:40:54.181742 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:40:54.192030 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:40:54.192067 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:40:54.199813 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:40:54.201699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:40:54.224706 ignition[1091]: INFO : Ignition 2.19.0 Sep 12 17:40:54.230180 ignition[1091]: INFO : Stage: files Sep 12 17:40:54.230180 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:40:54.230180 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:40:54.230180 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:40:54.279388 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:40:54.279388 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:40:54.407532 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:40:54.415196 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:40:54.415196 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:40:54.410237 unknown[1091]: wrote ssh authorized keys file for user: core Sep 12 17:40:54.460172 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 12 17:40:54.471481 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 12 17:40:54.751485 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:40:55.041681 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:40:55.139962 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 17:40:55.578147 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:40:55.801460 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:40:55.801460 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:40:55.849859 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:40:55.861910 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:40:55.861910 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:40:55.861910 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:40:55.861910 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:40:55.861910 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:40:55.861910 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:40:55.861910 ignition[1091]: INFO : files: files passed Sep 12 17:40:55.861910 ignition[1091]: INFO : Ignition finished successfully Sep 12 17:40:55.863899 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:40:55.900115 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:40:55.917980 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:40:55.985468 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:40:55.985468 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:40:55.937675 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:40:56.016259 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:40:55.942932 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:40:55.957686 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:40:55.965575 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:40:56.003660 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:40:56.034760 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:40:56.036846 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:40:56.047652 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:40:56.058374 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:40:56.071346 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:40:56.094118 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:40:56.124196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:40:56.140084 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:40:56.159746 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:40:56.159891 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 12 17:40:56.171791 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:40:56.184516 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:40:56.196869 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:40:56.207884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:40:56.207963 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:40:56.223914 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:40:56.236129 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:40:56.246437 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:40:56.257367 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:40:56.269437 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:40:56.281552 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:40:56.299844 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:40:56.306607 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:40:56.318982 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:40:56.329390 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:40:56.339859 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:40:56.339949 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:40:56.354597 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:40:56.361018 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:40:56.374169 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:40:56.374222 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:40:56.386867 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:40:56.386948 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:40:56.399269 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:40:56.399331 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:40:56.406593 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:40:56.475913 ignition[1144]: INFO : Ignition 2.19.0
Sep 12 17:40:56.475913 ignition[1144]: INFO : Stage: umount
Sep 12 17:40:56.475913 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:56.475913 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:40:56.475913 ignition[1144]: INFO : umount: umount passed
Sep 12 17:40:56.475913 ignition[1144]: INFO : Ignition finished successfully
Sep 12 17:40:56.406652 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:40:56.417606 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:40:56.417654 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:40:56.449123 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:40:56.461906 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:40:56.461994 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:40:56.471970 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:40:56.481632 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:40:56.481703 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:40:56.491946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:40:56.492003 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:40:56.519772 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:40:56.519900 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:40:56.536133 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:40:56.536242 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:40:56.560451 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:40:56.560518 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:40:56.571609 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:40:56.571665 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:40:56.583987 systemd[1]: Stopped target network.target - Network.
Sep 12 17:40:56.594572 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:40:56.594653 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:40:56.606844 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:40:56.618744 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:40:56.621825 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:40:56.633751 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:40:56.648510 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:40:56.654765 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:40:56.654840 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:40:56.667255 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:40:56.667309 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:40:56.679011 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:40:56.679075 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:40:56.689282 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:40:56.689339 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:40:56.700370 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:40:56.948595 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: Data path switched from VF: enP6602s1
Sep 12 17:40:56.710661 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:40:56.715473 systemd-networkd[896]: eth0: DHCPv6 lease lost
Sep 12 17:40:56.723346 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:40:56.723960 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:40:56.724063 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:40:56.733917 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:40:56.734081 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:40:56.740329 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:40:56.740401 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:40:56.752639 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:40:56.752707 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:40:56.775043 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:40:56.788132 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:40:56.788221 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:40:56.801360 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:40:56.819942 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:40:56.820037 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:40:56.835698 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:40:56.835825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:40:56.847034 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:40:56.847099 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:40:56.858255 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:40:56.858312 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:40:56.871144 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:40:56.871302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:40:56.883286 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:40:56.883371 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:40:56.894636 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:40:56.894744 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:40:56.905830 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:40:56.905891 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:40:56.922342 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:40:56.922406 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:40:56.948667 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:40:56.948730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:56.971005 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:40:57.226197 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:40:56.986209 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:40:56.986291 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:40:56.998915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:40:56.998969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:57.011186 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:40:57.011321 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:40:57.032543 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:40:57.032678 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:40:57.043541 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:40:57.073062 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:40:57.092225 systemd[1]: Switching root.
Sep 12 17:40:57.286235 systemd-journald[217]: Journal stopped
Sep 12 17:41:06.466216 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:41:06.466240 kernel: SELinux: policy capability open_perms=1
Sep 12 17:41:06.466251 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:41:06.466259 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:41:06.466269 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:41:06.466277 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:41:06.466286 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:41:06.466294 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:41:06.466302 kernel: audit: type=1403 audit(1757698859.204:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:41:06.466312 systemd[1]: Successfully loaded SELinux policy in 166.893ms.
Sep 12 17:41:06.466323 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.427ms.
Sep 12 17:41:06.466334 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:41:06.466343 systemd[1]: Detected virtualization microsoft.
Sep 12 17:41:06.466351 systemd[1]: Detected architecture arm64.
Sep 12 17:41:06.466361 systemd[1]: Detected first boot.
Sep 12 17:41:06.466373 systemd[1]: Hostname set to .
Sep 12 17:41:06.466382 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:41:06.466391 zram_generator::config[1184]: No configuration found.
Sep 12 17:41:06.466401 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:41:06.466410 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:41:06.466419 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:41:06.466428 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:41:06.466440 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:41:06.466450 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:41:06.466460 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:41:06.466469 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:41:06.466479 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:41:06.466488 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:41:06.466498 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:41:06.466509 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:41:06.466518 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:41:06.466527 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:41:06.466537 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:41:06.466546 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:41:06.466556 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:41:06.466565 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:41:06.466574 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 17:41:06.466585 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:41:06.466595 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:41:06.466604 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:41:06.466615 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:41:06.466625 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:41:06.466635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:41:06.466644 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:41:06.466655 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:41:06.466665 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:41:06.466675 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:41:06.466684 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:41:06.466694 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:41:06.466703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:41:06.466714 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:41:06.466725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:41:06.466735 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:41:06.466745 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:41:06.466755 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:41:06.466765 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:41:06.466775 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:41:06.466784 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:41:06.466805 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:41:06.466816 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:41:06.466825 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:41:06.466835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:41:06.466845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:41:06.466854 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:41:06.466864 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:41:06.466875 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:41:06.466886 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:41:06.466896 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:41:06.466906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:41:06.466916 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:41:06.466925 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:41:06.466935 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:41:06.466945 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:41:06.466955 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:41:06.466966 kernel: fuse: init (API version 7.39)
Sep 12 17:41:06.466975 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:41:06.466989 kernel: loop: module loaded
Sep 12 17:41:06.466998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:41:06.467008 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:41:06.467033 systemd-journald[1287]: Collecting audit messages is disabled.
Sep 12 17:41:06.467056 systemd-journald[1287]: Journal started
Sep 12 17:41:06.467076 systemd-journald[1287]: Runtime Journal (/run/log/journal/484e9cc9e9a9496bb61da622c6ea495d) is 8.0M, max 78.5M, 70.5M free.
Sep 12 17:41:05.246722 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:41:05.435435 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:41:05.435809 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:41:05.436108 systemd[1]: systemd-journald.service: Consumed 3.213s CPU time.
Sep 12 17:41:06.477811 kernel: ACPI: bus type drm_connector registered
Sep 12 17:41:06.477870 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:41:06.517266 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:41:06.526101 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:41:06.526146 systemd[1]: Stopped verity-setup.service.
Sep 12 17:41:06.542507 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:41:06.543353 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:41:06.549139 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:41:06.556595 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:41:06.563135 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:41:06.569655 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:41:06.575830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:41:06.581420 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:41:06.588725 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:41:06.595717 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:41:06.595861 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:41:06.602189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:41:06.602315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:41:06.612884 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:41:06.613011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:41:06.619697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:41:06.619832 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:41:06.626707 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:41:06.626850 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:41:06.633765 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:41:06.633906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:41:06.640982 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:41:06.647519 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:41:06.654896 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:41:06.661988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:41:06.678564 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:41:06.689882 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:41:06.697280 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:41:06.703553 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:41:06.703592 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:41:06.710114 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:41:06.720509 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:41:06.727963 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:41:06.733773 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:41:06.774003 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:41:06.781180 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:41:06.787379 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:41:06.788446 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:41:06.794417 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:41:06.795994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:41:06.802944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:41:06.817207 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:41:06.826021 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:41:06.834393 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:41:06.841978 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:41:06.849602 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:41:06.868209 udevadm[1321]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:41:06.880076 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:41:06.887139 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:41:06.897501 systemd-journald[1287]: Time spent on flushing to /var/log/journal/484e9cc9e9a9496bb61da622c6ea495d is 13.610ms for 902 entries.
Sep 12 17:41:06.897501 systemd-journald[1287]: System Journal (/var/log/journal/484e9cc9e9a9496bb61da622c6ea495d) is 8.0M, max 2.6G, 2.6G free.
Sep 12 17:41:06.974785 systemd-journald[1287]: Received client request to flush runtime journal.
Sep 12 17:41:06.974867 kernel: loop0: detected capacity change from 0 to 114432
Sep 12 17:41:06.907344 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:41:06.976572 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:41:06.995075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:41:07.018249 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:41:07.019579 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:41:07.456454 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:41:07.470990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:41:07.505956 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:41:07.561830 kernel: loop1: detected capacity change from 0 to 203944
Sep 12 17:41:07.628876 kernel: loop2: detected capacity change from 0 to 31320
Sep 12 17:41:07.667681 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Sep 12 17:41:07.668065 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Sep 12 17:41:07.672253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:41:08.295822 kernel: loop3: detected capacity change from 0 to 114328
Sep 12 17:41:08.897891 kernel: loop4: detected capacity change from 0 to 114432
Sep 12 17:41:08.929407 kernel: loop5: detected capacity change from 0 to 203944
Sep 12 17:41:08.944823 kernel: loop6: detected capacity change from 0 to 31320
Sep 12 17:41:08.956821 kernel: loop7: detected capacity change from 0 to 114328
Sep 12 17:41:08.959709 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:41:08.972171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:41:08.982929 (sd-merge)[1342]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 12 17:41:08.983348 (sd-merge)[1342]: Merged extensions into '/usr'.
Sep 12 17:41:08.987351 systemd[1]: Reloading requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:41:08.987366 systemd[1]: Reloading...
Sep 12 17:41:09.000631 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
Sep 12 17:41:09.052828 zram_generator::config[1366]: No configuration found.
Sep 12 17:41:09.182930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:41:09.240615 systemd[1]: Reloading finished in 252 ms.
Sep 12 17:41:09.271103 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:41:09.296042 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:41:09.301560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:41:09.362224 systemd[1]: Reloading requested from client PID 1425 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:41:09.362239 systemd[1]: Reloading...
Sep 12 17:41:09.396547 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:41:09.397601 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:41:09.400454 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:41:09.400809 systemd-tmpfiles[1426]: ACLs are not supported, ignoring.
Sep 12 17:41:09.400934 systemd-tmpfiles[1426]: ACLs are not supported, ignoring.
Sep 12 17:41:09.432839 zram_generator::config[1457]: No configuration found.
Sep 12 17:41:09.437563 systemd-tmpfiles[1426]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:41:09.437573 systemd-tmpfiles[1426]: Skipping /boot
Sep 12 17:41:09.448946 systemd-tmpfiles[1426]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:41:09.448957 systemd-tmpfiles[1426]: Skipping /boot
Sep 12 17:41:09.541678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:41:09.598752 systemd[1]: Reloading finished in 236 ms.
Sep 12 17:41:09.611514 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:41:09.632104 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:41:09.674992 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:41:09.682618 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:41:09.698287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:41:09.706454 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:41:09.718963 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:41:09.720402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:41:09.729516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:41:09.744824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:41:09.750883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:41:09.751635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:41:09.752346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:41:09.759261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:41:09.759419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:41:09.768413 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:41:09.768567 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:41:09.784894 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:41:09.792917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:41:09.798192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:41:09.806087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:41:09.814162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:41:09.821644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:41:09.825776 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:41:09.836771 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:41:09.850408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:41:09.852004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:41:09.863283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:41:09.863870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:41:09.875575 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:41:09.876784 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:41:09.901263 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:41:09.917534 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 12 17:41:09.928419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:41:09.946225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:41:09.962099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:41:09.986065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:41:10.001050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:41:10.010268 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:41:10.017993 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:41:10.030002 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:41:10.038424 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:41:10.039238 augenrules[1576]: No rules
Sep 12 17:41:10.047016 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:41:10.051878 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:41:10.054864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:41:10.055011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:41:10.062034 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:41:10.062168 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:41:10.071426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:41:10.071569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:41:10.081454 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:41:10.081597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:41:10.097960 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 12 17:41:10.100722 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 12 17:41:10.101340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:41:10.101413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:41:10.120905 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:41:10.135452 kernel: hv_vmbus: registering driver hv_balloon
Sep 12 17:41:10.135526 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 12 17:41:10.142253 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 12 17:41:10.157267 kernel: hv_vmbus: registering driver hyperv_fb
Sep 12 17:41:10.166071 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 12 17:41:10.166101 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 12 17:41:10.175571 kernel: Console: switching to colour dummy device 80x25
Sep 12 17:41:10.191018 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:41:10.177353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:10.204083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:41:10.204297 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:10.226145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:10.236764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:41:10.237197 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:10.241042 systemd-resolved[1516]: Positive Trust Anchors:
Sep 12 17:41:10.241318 systemd-resolved[1516]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:41:10.241405 systemd-resolved[1516]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:41:10.256434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:10.278406 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1541)
Sep 12 17:41:10.330314 systemd-resolved[1516]: Using system hostname 'ci-4081.3.6-a-d7d9773d19'.
Sep 12 17:41:10.331881 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:41:10.340293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 17:41:10.346534 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:41:10.359403 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:41:10.377355 systemd-networkd[1587]: lo: Link UP
Sep 12 17:41:10.377363 systemd-networkd[1587]: lo: Gained carrier
Sep 12 17:41:10.380332 systemd-networkd[1587]: Enumeration completed
Sep 12 17:41:10.381153 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:41:10.381767 systemd-networkd[1587]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:41:10.381985 systemd-networkd[1587]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:41:10.387410 systemd[1]: Reached target network.target - Network.
Sep 12 17:41:10.400510 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:41:10.446830 kernel: mlx5_core 19ca:00:02.0 enP6602s1: Link up
Sep 12 17:41:10.451871 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 17:41:10.450404 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:41:10.494047 kernel: hv_netvsc 000d3afb-d674-000d-3afb-d674000d3afb eth0: Data path switched to VF: enP6602s1
Sep 12 17:41:10.495984 systemd-networkd[1587]: enP6602s1: Link UP
Sep 12 17:41:10.496101 systemd-networkd[1587]: eth0: Link UP
Sep 12 17:41:10.496104 systemd-networkd[1587]: eth0: Gained carrier
Sep 12 17:41:10.496119 systemd-networkd[1587]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:41:10.506136 systemd-networkd[1587]: enP6602s1: Gained carrier
Sep 12 17:41:10.513887 systemd-networkd[1587]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 17:41:10.577867 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:41:10.590974 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:41:10.684824 lvm[1656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:41:10.727283 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:41:10.734757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:41:10.745974 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:41:10.757402 lvm[1658]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:41:10.787855 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:41:12.026768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:12.475906 systemd-networkd[1587]: eth0: Gained IPv6LL
Sep 12 17:41:12.482844 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:41:12.490223 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:41:12.881944 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:41:12.890355 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:41:17.347877 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:41:17.363640 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:41:17.376985 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:41:17.402898 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:41:17.410333 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:41:17.417169 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:41:17.424699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:41:17.433260 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:41:17.440141 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:41:17.448602 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:41:17.457502 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:41:17.457545 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:41:17.464170 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:41:17.489209 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:41:17.497208 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:41:17.508845 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:41:17.515971 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:41:17.522200 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:41:17.527577 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:41:17.532584 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:41:17.532616 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:41:17.571902 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 12 17:41:17.580971 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:41:17.591965 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:41:17.600377 (chronyd)[1670]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 12 17:41:17.600991 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:41:17.609956 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:41:17.623188 chronyd[1679]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 12 17:41:17.626317 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:41:17.632325 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:41:17.632378 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Sep 12 17:41:17.634578 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 12 17:41:17.642217 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 12 17:41:17.642762 KVP[1680]: KVP starting; pid is:1680
Sep 12 17:41:17.643438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:17.650427 jq[1674]: false
Sep 12 17:41:17.656079 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:41:17.666009 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:41:17.673720 kernel: hv_utils: KVP IC version 4.0
Sep 12 17:41:17.667371 KVP[1680]: KVP LIC Version: 3.1
Sep 12 17:41:17.675524 chronyd[1679]: Timezone right/UTC failed leap second check, ignoring
Sep 12 17:41:17.676009 chronyd[1679]: Loaded seccomp filter (level 2)
Sep 12 17:41:17.678989 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:41:17.687202 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:41:17.696368 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:41:17.711048 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:41:17.720229 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:41:17.721037 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:41:17.722170 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:41:17.734857 extend-filesystems[1677]: Found loop4
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found loop5
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found loop6
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found loop7
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found sda
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found sda1
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found sda2
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found sda3
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found usr
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found sda4
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found sda6
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found sda7
Sep 12 17:41:17.741082 extend-filesystems[1677]: Found sda9
Sep 12 17:41:17.741082 extend-filesystems[1677]: Checking size of /dev/sda9
Sep 12 17:41:17.748989 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:41:17.912352 extend-filesystems[1677]: Old size kept for /dev/sda9
Sep 12 17:41:17.912352 extend-filesystems[1677]: Found sr0
Sep 12 17:41:17.937456 jq[1694]: true
Sep 12 17:41:17.760311 systemd[1]: Started chronyd.service - NTP client/server.
Sep 12 17:41:17.939197 update_engine[1692]: I20250912 17:41:17.854241 1692 main.cc:92] Flatcar Update Engine starting
Sep 12 17:41:17.787479 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:41:17.787656 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:41:17.789155 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:41:17.941098 tar[1705]: linux-arm64/helm
Sep 12 17:41:17.789306 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:41:17.805186 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:41:17.941475 jq[1708]: true
Sep 12 17:41:17.805345 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:41:17.838470 (ntainerd)[1710]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:41:17.839904 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:41:17.866117 systemd-logind[1691]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:41:17.866432 systemd-logind[1691]: New seat seat0.
Sep 12 17:41:17.869391 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:41:17.883550 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:41:17.883773 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:41:18.001488 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1728)
Sep 12 17:41:18.003608 dbus-daemon[1673]: [system] SELinux support is enabled
Sep 12 17:41:18.003848 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:41:18.011157 update_engine[1692]: I20250912 17:41:18.010921 1692 update_check_scheduler.cc:74] Next update check in 2m46s
Sep 12 17:41:18.018050 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:41:18.019284 dbus-daemon[1673]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 17:41:18.018086 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:41:18.029443 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:41:18.029588 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:41:18.040287 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:41:18.062266 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:41:18.077021 bash[1758]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:41:18.077593 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:41:18.086353 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:41:18.178902 coreos-metadata[1672]: Sep 12 17:41:18.177 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:41:18.181607 coreos-metadata[1672]: Sep 12 17:41:18.181 INFO Fetch successful
Sep 12 17:41:18.181768 coreos-metadata[1672]: Sep 12 17:41:18.181 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 12 17:41:18.187892 coreos-metadata[1672]: Sep 12 17:41:18.187 INFO Fetch successful
Sep 12 17:41:18.188279 coreos-metadata[1672]: Sep 12 17:41:18.188 INFO Fetching http://168.63.129.16/machine/c04dc6de-9206-4c96-828d-c9dd14865ce0/b3e0a4ad%2D972f%2D4218%2Db750%2D38b717faa13d.%5Fci%2D4081.3.6%2Da%2Dd7d9773d19?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 12 17:41:18.191984 coreos-metadata[1672]: Sep 12 17:41:18.191 INFO Fetch successful
Sep 12 17:41:18.191984 coreos-metadata[1672]: Sep 12 17:41:18.191 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:41:18.213821 coreos-metadata[1672]: Sep 12 17:41:18.210 INFO Fetch successful
Sep 12 17:41:18.252584 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:41:18.271247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:41:18.416929 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:41:18.653989 tar[1705]: linux-arm64/LICENSE
Sep 12 17:41:18.654223 tar[1705]: linux-arm64/README.md
Sep 12 17:41:18.672503 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:41:18.742313 containerd[1710]: time="2025-09-12T17:41:18.742197860Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:41:18.798829 containerd[1710]: time="2025-09-12T17:41:18.797887700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:18.799956 containerd[1710]: time="2025-09-12T17:41:18.799916820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:18.800053 containerd[1710]: time="2025-09-12T17:41:18.800038580Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:41:18.800882 containerd[1710]: time="2025-09-12T17:41:18.800831980Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.801140580Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.802942340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.803087140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.803112460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.803423780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.803459420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.803480260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.803492620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:18.803689 containerd[1710]: time="2025-09-12T17:41:18.803611700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:18.804292 containerd[1710]: time="2025-09-12T17:41:18.804270860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:18.806425 containerd[1710]: time="2025-09-12T17:41:18.806018500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:18.806425 containerd[1710]: time="2025-09-12T17:41:18.806066860Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:41:18.806425 containerd[1710]: time="2025-09-12T17:41:18.806187300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:41:18.806425 containerd[1710]: time="2025-09-12T17:41:18.806257260Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:41:18.831435 containerd[1710]: time="2025-09-12T17:41:18.831371180Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:41:18.831563 containerd[1710]: time="2025-09-12T17:41:18.831478100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:41:18.831563 containerd[1710]: time="2025-09-12T17:41:18.831497140Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:41:18.831563 containerd[1710]: time="2025-09-12T17:41:18.831515220Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:41:18.831563 containerd[1710]: time="2025-09-12T17:41:18.831531340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:41:18.831730 containerd[1710]: time="2025-09-12T17:41:18.831697980Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:41:18.832007 containerd[1710]: time="2025-09-12T17:41:18.831985260Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:41:18.832122 containerd[1710]: time="2025-09-12T17:41:18.832101060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:41:18.832155 containerd[1710]: time="2025-09-12T17:41:18.832125780Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:41:18.832155 containerd[1710]: time="2025-09-12T17:41:18.832140900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:41:18.832194 containerd[1710]: time="2025-09-12T17:41:18.832160940Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:41:18.832194 containerd[1710]: time="2025-09-12T17:41:18.832177020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:41:18.832194 containerd[1710]: time="2025-09-12T17:41:18.832190300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:41:18.832249 containerd[1710]: time="2025-09-12T17:41:18.832204940Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:41:18.832249 containerd[1710]: time="2025-09-12T17:41:18.832221740Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:41:18.832249 containerd[1710]: time="2025-09-12T17:41:18.832234460Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:41:18.832346 containerd[1710]: time="2025-09-12T17:41:18.832246220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:41:18.832346 containerd[1710]: time="2025-09-12T17:41:18.832260740Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:41:18.832346 containerd[1710]: time="2025-09-12T17:41:18.832281980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832346 containerd[1710]: time="2025-09-12T17:41:18.832297580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832346 containerd[1710]: time="2025-09-12T17:41:18.832310260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832346 containerd[1710]: time="2025-09-12T17:41:18.832324060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832346 containerd[1710]: time="2025-09-12T17:41:18.832335820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832350620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832362180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832374500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832390860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832405220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832416500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832427540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832442580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832471 containerd[1710]: time="2025-09-12T17:41:18.832457540Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832477980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832490300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832501580Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832550220Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832567820Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832577940Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832589500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832599820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832612540Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:41:18.832623 containerd[1710]: time="2025-09-12T17:41:18.832621860Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:41:18.832784 containerd[1710]: time="2025-09-12T17:41:18.832633100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:41:18.835814 containerd[1710]: time="2025-09-12T17:41:18.833802500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 12 17:41:18.835814 containerd[1710]: time="2025-09-12T17:41:18.834640580Z" level=info msg="Connect containerd service"
Sep 12 17:41:18.835814 containerd[1710]: time="2025-09-12T17:41:18.834681460Z" level=info msg="using legacy CRI server"
Sep 12 17:41:18.835814 containerd[1710]: time="2025-09-12T17:41:18.834688660Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:41:18.835814 containerd[1710]: time="2025-09-12T17:41:18.835676740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 12 17:41:18.837395 containerd[1710]: time="2025-09-12T17:41:18.837197860Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:41:18.838050 containerd[1710]: time="2025-09-12T17:41:18.838014020Z" level=info msg="Start subscribing containerd event"
Sep 12 17:41:18.838076 containerd[1710]: time="2025-09-12T17:41:18.838068300Z" level=info msg="Start recovering state"
Sep 12 17:41:18.838163 containerd[1710]: time="2025-09-12T17:41:18.838146020Z" level=info msg="Start event monitor"
Sep 12 17:41:18.838187 containerd[1710]: time="2025-09-12T17:41:18.838163820Z" level=info msg="Start snapshots syncer"
Sep 12 17:41:18.838187 containerd[1710]: time="2025-09-12T17:41:18.838173340Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:41:18.838187 containerd[1710]: time="2025-09-12T17:41:18.838180500Z" level=info msg="Start streaming server"
Sep 12 17:41:18.840800 containerd[1710]: time="2025-09-12T17:41:18.839355820Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:41:18.840800 containerd[1710]: time="2025-09-12T17:41:18.839413020Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:41:18.842578 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:41:18.849024 containerd[1710]: time="2025-09-12T17:41:18.848977060Z" level=info msg="containerd successfully booted in 0.111187s"
Sep 12 17:41:18.858289 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:41:18.880555 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:41:18.894186 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:41:18.901405 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 12 17:41:18.909981 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:41:18.911948 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:41:18.933189 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:41:18.942620 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 12 17:41:18.972936 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:41:18.992188 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:41:19.004243 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:41:19.010937 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:41:19.111540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:19.119495 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:41:19.119885 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:19.129911 systemd[1]: Startup finished in 681ms (kernel) + 14.904s (initrd) + 20.091s (userspace) = 35.677s. Sep 12 17:41:19.634608 kubelet[1835]: E0912 17:41:19.634564 1835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:19.637155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:19.637424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:19.846103 login[1828]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 12 17:41:19.866097 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:19.875995 systemd-logind[1691]: New session 2 of user core. Sep 12 17:41:19.878276 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:41:19.891074 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:41:19.934315 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Sep 12 17:41:19.942063 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:41:19.960832 (systemd)[1848]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:41:20.298336 systemd[1848]: Queued start job for default target default.target. Sep 12 17:41:20.304214 systemd[1848]: Created slice app.slice - User Application Slice. Sep 12 17:41:20.304244 systemd[1848]: Reached target paths.target - Paths. Sep 12 17:41:20.304256 systemd[1848]: Reached target timers.target - Timers. Sep 12 17:41:20.305699 systemd[1848]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:41:20.317264 systemd[1848]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:41:20.317394 systemd[1848]: Reached target sockets.target - Sockets. Sep 12 17:41:20.317409 systemd[1848]: Reached target basic.target - Basic System. Sep 12 17:41:20.317519 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:41:20.318598 systemd[1848]: Reached target default.target - Main User Target. Sep 12 17:41:20.318663 systemd[1848]: Startup finished in 350ms. Sep 12 17:41:20.322052 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:41:20.846572 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:20.851957 systemd-logind[1691]: New session 1 of user core. Sep 12 17:41:20.858993 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 12 17:41:21.099866 waagent[1825]: 2025-09-12T17:41:21.099589Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 12 17:41:21.105524 waagent[1825]: 2025-09-12T17:41:21.105456Z INFO Daemon Daemon OS: flatcar 4081.3.6 Sep 12 17:41:21.110134 waagent[1825]: 2025-09-12T17:41:21.110084Z INFO Daemon Daemon Python: 3.11.9 Sep 12 17:41:21.117498 waagent[1825]: 2025-09-12T17:41:21.117250Z INFO Daemon Daemon Run daemon Sep 12 17:41:21.122346 waagent[1825]: 2025-09-12T17:41:21.122283Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Sep 12 17:41:21.132218 waagent[1825]: 2025-09-12T17:41:21.132147Z INFO Daemon Daemon Using waagent for provisioning Sep 12 17:41:21.138016 waagent[1825]: 2025-09-12T17:41:21.137967Z INFO Daemon Daemon Activate resource disk Sep 12 17:41:21.143403 waagent[1825]: 2025-09-12T17:41:21.143351Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 17:41:21.154496 waagent[1825]: 2025-09-12T17:41:21.154434Z INFO Daemon Daemon Found device: None Sep 12 17:41:21.160110 waagent[1825]: 2025-09-12T17:41:21.160053Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 17:41:21.171588 waagent[1825]: 2025-09-12T17:41:21.171526Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 17:41:21.184594 waagent[1825]: 2025-09-12T17:41:21.184535Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:41:21.190331 waagent[1825]: 2025-09-12T17:41:21.190280Z INFO Daemon Daemon Running default provisioning handler Sep 12 17:41:21.202635 waagent[1825]: 2025-09-12T17:41:21.202559Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 12 17:41:21.219391 waagent[1825]: 2025-09-12T17:41:21.219321Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 17:41:21.229523 waagent[1825]: 2025-09-12T17:41:21.229460Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 17:41:21.234503 waagent[1825]: 2025-09-12T17:41:21.234453Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 17:41:21.355873 waagent[1825]: 2025-09-12T17:41:21.355711Z INFO Daemon Daemon Successfully mounted dvd Sep 12 17:41:21.386872 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 12 17:41:21.388427 waagent[1825]: 2025-09-12T17:41:21.388276Z INFO Daemon Daemon Detect protocol endpoint Sep 12 17:41:21.393288 waagent[1825]: 2025-09-12T17:41:21.393224Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:41:21.399230 waagent[1825]: 2025-09-12T17:41:21.399169Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 12 17:41:21.405662 waagent[1825]: 2025-09-12T17:41:21.405610Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 17:41:21.411956 waagent[1825]: 2025-09-12T17:41:21.411899Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 17:41:21.418439 waagent[1825]: 2025-09-12T17:41:21.418380Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 17:41:21.475489 waagent[1825]: 2025-09-12T17:41:21.475443Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 17:41:21.482199 waagent[1825]: 2025-09-12T17:41:21.482169Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 17:41:21.487379 waagent[1825]: 2025-09-12T17:41:21.487334Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 17:41:21.868428 waagent[1825]: 2025-09-12T17:41:21.868313Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 17:41:21.876122 waagent[1825]: 2025-09-12T17:41:21.876046Z INFO Daemon Daemon Forcing an update of the goal state. 
Sep 12 17:41:21.886151 waagent[1825]: 2025-09-12T17:41:21.886098Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:41:21.933100 waagent[1825]: 2025-09-12T17:41:21.933051Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 17:41:21.938944 waagent[1825]: 2025-09-12T17:41:21.938891Z INFO Daemon Sep 12 17:41:21.941882 waagent[1825]: 2025-09-12T17:41:21.941821Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0c13a9e4-39fa-4797-bbba-511b42cc51d8 eTag: 2801402818578542738 source: Fabric] Sep 12 17:41:21.953188 waagent[1825]: 2025-09-12T17:41:21.953136Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 12 17:41:21.960028 waagent[1825]: 2025-09-12T17:41:21.959979Z INFO Daemon Sep 12 17:41:21.965216 waagent[1825]: 2025-09-12T17:41:21.965157Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:41:21.977599 waagent[1825]: 2025-09-12T17:41:21.977556Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 17:41:22.129023 waagent[1825]: 2025-09-12T17:41:22.128872Z INFO Daemon Downloaded certificate {'thumbprint': '76A348DCFF3B8133CDEE9642E89AA2CE80F6BC6E', 'hasPrivateKey': True} Sep 12 17:41:22.139449 waagent[1825]: 2025-09-12T17:41:22.139389Z INFO Daemon Fetch goal state completed Sep 12 17:41:22.186487 waagent[1825]: 2025-09-12T17:41:22.186432Z INFO Daemon Daemon Starting provisioning Sep 12 17:41:22.191567 waagent[1825]: 2025-09-12T17:41:22.191501Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 12 17:41:22.196189 waagent[1825]: 2025-09-12T17:41:22.196135Z INFO Daemon Daemon Set hostname [ci-4081.3.6-a-d7d9773d19] Sep 12 17:41:22.233159 waagent[1825]: 2025-09-12T17:41:22.233074Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-a-d7d9773d19] Sep 12 17:41:22.239433 waagent[1825]: 2025-09-12T17:41:22.239364Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 17:41:22.245958 waagent[1825]: 2025-09-12T17:41:22.245896Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 17:41:22.317418 systemd-networkd[1587]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:41:22.317426 systemd-networkd[1587]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:41:22.317457 systemd-networkd[1587]: eth0: DHCP lease lost Sep 12 17:41:22.318936 waagent[1825]: 2025-09-12T17:41:22.318534Z INFO Daemon Daemon Create user account if not exists Sep 12 17:41:22.325199 waagent[1825]: 2025-09-12T17:41:22.325130Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 17:41:22.331018 waagent[1825]: 2025-09-12T17:41:22.330952Z INFO Daemon Daemon Configure sudoer Sep 12 17:41:22.331091 systemd-networkd[1587]: eth0: DHCPv6 lease lost Sep 12 17:41:22.336354 waagent[1825]: 2025-09-12T17:41:22.336263Z INFO Daemon Daemon Configure sshd Sep 12 17:41:22.341442 waagent[1825]: 2025-09-12T17:41:22.341383Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 17:41:22.360202 waagent[1825]: 2025-09-12T17:41:22.354001Z INFO Daemon Daemon Deploy ssh public key. 
Sep 12 17:41:22.373875 systemd-networkd[1587]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:41:23.576492 waagent[1825]: 2025-09-12T17:41:23.576436Z INFO Daemon Daemon Provisioning complete Sep 12 17:41:23.595076 waagent[1825]: 2025-09-12T17:41:23.595025Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 17:41:23.601540 waagent[1825]: 2025-09-12T17:41:23.601483Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 12 17:41:23.610941 waagent[1825]: 2025-09-12T17:41:23.610892Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 12 17:41:23.747658 waagent[1898]: 2025-09-12T17:41:23.747026Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 12 17:41:23.747658 waagent[1898]: 2025-09-12T17:41:23.747178Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Sep 12 17:41:23.747658 waagent[1898]: 2025-09-12T17:41:23.747230Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 12 17:41:23.818937 waagent[1898]: 2025-09-12T17:41:23.818852Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 12 17:41:23.819263 waagent[1898]: 2025-09-12T17:41:23.819226Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:41:23.819430 waagent[1898]: 2025-09-12T17:41:23.819394Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:41:23.828220 waagent[1898]: 2025-09-12T17:41:23.828077Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:41:23.837775 waagent[1898]: 2025-09-12T17:41:23.837726Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 17:41:23.838624 waagent[1898]: 2025-09-12T17:41:23.838522Z INFO ExtHandler Sep 12 17:41:23.839823 waagent[1898]: 
2025-09-12T17:41:23.838754Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 542e4bd8-05b2-4070-9c61-fd94b89b1668 eTag: 2801402818578542738 source: Fabric] Sep 12 17:41:23.839823 waagent[1898]: 2025-09-12T17:41:23.839096Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 12 17:41:23.839823 waagent[1898]: 2025-09-12T17:41:23.839661Z INFO ExtHandler Sep 12 17:41:23.839823 waagent[1898]: 2025-09-12T17:41:23.839730Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:41:23.844372 waagent[1898]: 2025-09-12T17:41:23.844328Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 17:41:23.916900 waagent[1898]: 2025-09-12T17:41:23.916791Z INFO ExtHandler Downloaded certificate {'thumbprint': '76A348DCFF3B8133CDEE9642E89AA2CE80F6BC6E', 'hasPrivateKey': True} Sep 12 17:41:23.917641 waagent[1898]: 2025-09-12T17:41:23.917591Z INFO ExtHandler Fetch goal state completed Sep 12 17:41:23.934784 waagent[1898]: 2025-09-12T17:41:23.934718Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1898 Sep 12 17:41:23.935124 waagent[1898]: 2025-09-12T17:41:23.935085Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 17:41:23.936906 waagent[1898]: 2025-09-12T17:41:23.936857Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 17:41:23.937373 waagent[1898]: 2025-09-12T17:41:23.937335Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 17:41:23.994276 waagent[1898]: 2025-09-12T17:41:23.994238Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 17:41:23.994629 waagent[1898]: 2025-09-12T17:41:23.994587Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 
17:41:24.000788 waagent[1898]: 2025-09-12T17:41:24.000753Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 17:41:24.007483 systemd[1]: Reloading requested from client PID 1911 ('systemctl') (unit waagent.service)... Sep 12 17:41:24.007498 systemd[1]: Reloading... Sep 12 17:41:24.092881 zram_generator::config[1948]: No configuration found. Sep 12 17:41:24.200084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:41:24.279400 systemd[1]: Reloading finished in 271 ms. Sep 12 17:41:24.302709 waagent[1898]: 2025-09-12T17:41:24.302330Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 12 17:41:24.308257 systemd[1]: Reloading requested from client PID 2000 ('systemctl') (unit waagent.service)... Sep 12 17:41:24.308272 systemd[1]: Reloading... Sep 12 17:41:24.400832 zram_generator::config[2037]: No configuration found. Sep 12 17:41:24.502264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:41:24.578184 systemd[1]: Reloading finished in 269 ms. Sep 12 17:41:24.602782 waagent[1898]: 2025-09-12T17:41:24.601998Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 17:41:24.602782 waagent[1898]: 2025-09-12T17:41:24.602164Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 17:41:25.169840 waagent[1898]: 2025-09-12T17:41:25.169565Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Sep 12 17:41:25.170275 waagent[1898]: 2025-09-12T17:41:25.170215Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 12 17:41:25.171096 waagent[1898]: 2025-09-12T17:41:25.171005Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 12 17:41:25.171638 waagent[1898]: 2025-09-12T17:41:25.171415Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 12 17:41:25.171638 waagent[1898]: 2025-09-12T17:41:25.171585Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:41:25.171922 waagent[1898]: 2025-09-12T17:41:25.171873Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:41:25.172813 waagent[1898]: 2025-09-12T17:41:25.172043Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:41:25.172813 waagent[1898]: 2025-09-12T17:41:25.172202Z INFO EnvHandler ExtHandler Configure routes Sep 12 17:41:25.172813 waagent[1898]: 2025-09-12T17:41:25.172265Z INFO EnvHandler ExtHandler Gateway:None Sep 12 17:41:25.172813 waagent[1898]: 2025-09-12T17:41:25.172307Z INFO EnvHandler ExtHandler Routes:None Sep 12 17:41:25.173088 waagent[1898]: 2025-09-12T17:41:25.173035Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:41:25.173421 waagent[1898]: 2025-09-12T17:41:25.173373Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 12 17:41:25.173587 waagent[1898]: 2025-09-12T17:41:25.173546Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 17:41:25.173875 waagent[1898]: 2025-09-12T17:41:25.173829Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 17:41:25.173875 waagent[1898]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 17:41:25.173875 waagent[1898]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 17:41:25.173875 waagent[1898]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 17:41:25.173875 waagent[1898]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:41:25.173875 waagent[1898]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:41:25.173875 waagent[1898]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:41:25.174556 waagent[1898]: 2025-09-12T17:41:25.174485Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 12 17:41:25.174976 waagent[1898]: 2025-09-12T17:41:25.174939Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 12 17:41:25.175422 waagent[1898]: 2025-09-12T17:41:25.174881Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 17:41:25.175540 waagent[1898]: 2025-09-12T17:41:25.175497Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 17:41:25.184524 waagent[1898]: 2025-09-12T17:41:25.182739Z INFO ExtHandler ExtHandler Sep 12 17:41:25.184524 waagent[1898]: 2025-09-12T17:41:25.182871Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b281849b-f24f-4bbe-8bf0-5a19ede6c01e correlation 37243eb7-ed17-4634-ad64-ab3cf3b781dc created: 2025-09-12T17:39:55.106407Z] Sep 12 17:41:25.184524 waagent[1898]: 2025-09-12T17:41:25.183238Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 12 17:41:25.184524 waagent[1898]: 2025-09-12T17:41:25.183783Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 12 17:41:25.221622 waagent[1898]: 2025-09-12T17:41:25.221562Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 060E5B42-9138-4C6D-82CE-A480B1306E42;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 12 17:41:25.334287 waagent[1898]: 2025-09-12T17:41:25.334195Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 12 17:41:25.334287 waagent[1898]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:41:25.334287 waagent[1898]: pkts bytes target prot opt in out source destination Sep 12 17:41:25.334287 waagent[1898]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:41:25.334287 waagent[1898]: pkts bytes target prot opt in out source destination Sep 12 17:41:25.334287 waagent[1898]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:41:25.334287 waagent[1898]: pkts bytes target prot opt in out source destination Sep 12 17:41:25.334287 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:41:25.334287 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:41:25.334287 waagent[1898]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:41:25.337397 waagent[1898]: 2025-09-12T17:41:25.337328Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 17:41:25.337397 waagent[1898]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:41:25.337397 waagent[1898]: pkts bytes target prot opt in out source destination Sep 12 17:41:25.337397 waagent[1898]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:41:25.337397 waagent[1898]: pkts bytes target prot opt in out source destination Sep 12 17:41:25.337397 waagent[1898]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:41:25.337397 waagent[1898]: pkts bytes target prot opt in out source destination Sep 12 17:41:25.337397 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:41:25.337397 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:41:25.337397 waagent[1898]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:41:25.337659 waagent[1898]: 2025-09-12T17:41:25.337621Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 12 17:41:25.377833 waagent[1898]: 
2025-09-12T17:41:25.377432Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 17:41:25.377833 waagent[1898]: Executing ['ip', '-a', '-o', 'link']: Sep 12 17:41:25.377833 waagent[1898]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 17:41:25.377833 waagent[1898]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:d6:74 brd ff:ff:ff:ff:ff:ff Sep 12 17:41:25.377833 waagent[1898]: 3: enP6602s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:d6:74 brd ff:ff:ff:ff:ff:ff\ altname enP6602p0s2 Sep 12 17:41:25.377833 waagent[1898]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 17:41:25.377833 waagent[1898]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 17:41:25.377833 waagent[1898]: 2: eth0 inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 17:41:25.377833 waagent[1898]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 17:41:25.377833 waagent[1898]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 17:41:25.377833 waagent[1898]: 2: eth0 inet6 fe80::20d:3aff:fefb:d674/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 17:41:29.713696 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:41:29.721986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:29.835512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:41:29.846100 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:29.935720 kubelet[2127]: E0912 17:41:29.935618 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:29.938505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:29.938653 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:39.963858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:41:39.971981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:40.066513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:40.071109 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:40.182395 kubelet[2142]: E0912 17:41:40.182273 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:40.184639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:40.184769 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:41.464544 chronyd[1679]: Selected source PHC0 Sep 12 17:41:42.137725 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 12 17:41:42.146134 systemd[1]: Started sshd@0-10.200.20.41:22-10.200.16.10:60302.service - OpenSSH per-connection server daemon (10.200.16.10:60302). Sep 12 17:41:42.655614 sshd[2150]: Accepted publickey for core from 10.200.16.10 port 60302 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:41:42.656947 sshd[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:42.660812 systemd-logind[1691]: New session 3 of user core. Sep 12 17:41:42.668948 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:41:43.094615 systemd[1]: Started sshd@1-10.200.20.41:22-10.200.16.10:60314.service - OpenSSH per-connection server daemon (10.200.16.10:60314). Sep 12 17:41:43.560681 sshd[2155]: Accepted publickey for core from 10.200.16.10 port 60314 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:41:43.562054 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:43.566905 systemd-logind[1691]: New session 4 of user core. Sep 12 17:41:43.572954 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:41:43.896214 sshd[2155]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:43.899558 systemd[1]: sshd@1-10.200.20.41:22-10.200.16.10:60314.service: Deactivated successfully. Sep 12 17:41:43.901087 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:41:43.901766 systemd-logind[1691]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:41:43.902604 systemd-logind[1691]: Removed session 4. Sep 12 17:41:43.981193 systemd[1]: Started sshd@2-10.200.20.41:22-10.200.16.10:60326.service - OpenSSH per-connection server daemon (10.200.16.10:60326). 
Sep 12 17:41:44.426645 sshd[2162]: Accepted publickey for core from 10.200.16.10 port 60326 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:41:44.428079 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:44.433010 systemd-logind[1691]: New session 5 of user core. Sep 12 17:41:44.439935 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:41:44.754989 sshd[2162]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:44.757595 systemd-logind[1691]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:41:44.759234 systemd[1]: sshd@2-10.200.20.41:22-10.200.16.10:60326.service: Deactivated successfully. Sep 12 17:41:44.761683 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:41:44.763101 systemd-logind[1691]: Removed session 5. Sep 12 17:41:44.827175 systemd[1]: Started sshd@3-10.200.20.41:22-10.200.16.10:60330.service - OpenSSH per-connection server daemon (10.200.16.10:60330). Sep 12 17:41:45.246314 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 60330 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:41:45.247641 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:45.251422 systemd-logind[1691]: New session 6 of user core. Sep 12 17:41:45.259935 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:41:45.560359 sshd[2169]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:45.562885 systemd[1]: sshd@3-10.200.20.41:22-10.200.16.10:60330.service: Deactivated successfully. Sep 12 17:41:45.564332 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:41:45.565583 systemd-logind[1691]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:41:45.566447 systemd-logind[1691]: Removed session 6. 
Sep 12 17:41:45.643548 systemd[1]: Started sshd@4-10.200.20.41:22-10.200.16.10:60332.service - OpenSSH per-connection server daemon (10.200.16.10:60332).
Sep 12 17:41:46.101774 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 60332 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:41:46.103133 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:46.106916 systemd-logind[1691]: New session 7 of user core.
Sep 12 17:41:46.114926 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:41:46.547255 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:41:46.547528 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:41:46.581558 sudo[2179]: pam_unix(sudo:session): session closed for user root
Sep 12 17:41:46.666434 sshd[2176]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:46.669565 systemd-logind[1691]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:41:46.670551 systemd[1]: sshd@4-10.200.20.41:22-10.200.16.10:60332.service: Deactivated successfully.
Sep 12 17:41:46.672175 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:41:46.674169 systemd-logind[1691]: Removed session 7.
Sep 12 17:41:46.751697 systemd[1]: Started sshd@5-10.200.20.41:22-10.200.16.10:60344.service - OpenSSH per-connection server daemon (10.200.16.10:60344).
Sep 12 17:41:47.200428 sshd[2184]: Accepted publickey for core from 10.200.16.10 port 60344 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:41:47.201821 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:47.205424 systemd-logind[1691]: New session 8 of user core.
Sep 12 17:41:47.215958 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:41:47.455969 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:41:47.456252 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:41:47.459514 sudo[2188]: pam_unix(sudo:session): session closed for user root
Sep 12 17:41:47.464357 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 17:41:47.464623 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:41:47.482277 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 17:41:47.483805 auditctl[2191]: No rules
Sep 12 17:41:47.484111 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:41:47.484279 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 17:41:47.487312 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:41:47.509250 augenrules[2209]: No rules
Sep 12 17:41:47.510676 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:41:47.512651 sudo[2187]: pam_unix(sudo:session): session closed for user root
Sep 12 17:41:47.590295 sshd[2184]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:47.592972 systemd[1]: sshd@5-10.200.20.41:22-10.200.16.10:60344.service: Deactivated successfully.
Sep 12 17:41:47.594449 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:41:47.595583 systemd-logind[1691]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:41:47.596499 systemd-logind[1691]: Removed session 8.
Sep 12 17:41:47.663115 systemd[1]: Started sshd@6-10.200.20.41:22-10.200.16.10:60356.service - OpenSSH per-connection server daemon (10.200.16.10:60356).
Sep 12 17:41:48.092456 sshd[2217]: Accepted publickey for core from 10.200.16.10 port 60356 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:41:48.093756 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:48.097365 systemd-logind[1691]: New session 9 of user core.
Sep 12 17:41:48.103946 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:41:48.334787 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:41:48.335088 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:41:49.492036 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:41:49.493166 (dockerd)[2235]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:41:50.213617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 17:41:50.221155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:50.246879 dockerd[2235]: time="2025-09-12T17:41:50.246595279Z" level=info msg="Starting up"
Sep 12 17:41:50.388695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:50.398107 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:41:50.433070 kubelet[2252]: E0912 17:41:50.432992 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:41:50.436179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:41:50.436325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:41:51.352319 dockerd[2235]: time="2025-09-12T17:41:51.352271766Z" level=info msg="Loading containers: start."
Sep 12 17:41:51.555822 kernel: Initializing XFRM netlink socket
Sep 12 17:41:51.796639 systemd-networkd[1587]: docker0: Link UP
Sep 12 17:41:51.827939 dockerd[2235]: time="2025-09-12T17:41:51.827898997Z" level=info msg="Loading containers: done."
Sep 12 17:41:51.841251 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3693158133-merged.mount: Deactivated successfully.
Sep 12 17:41:51.850524 dockerd[2235]: time="2025-09-12T17:41:51.850053504Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:41:51.850524 dockerd[2235]: time="2025-09-12T17:41:51.850204064Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 17:41:51.850524 dockerd[2235]: time="2025-09-12T17:41:51.850340025Z" level=info msg="Daemon has completed initialization"
Sep 12 17:41:51.925070 dockerd[2235]: time="2025-09-12T17:41:51.924508928Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:41:51.925380 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:41:53.027916 containerd[1710]: time="2025-09-12T17:41:53.027876528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 12 17:41:53.884094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756075306.mount: Deactivated successfully.
Sep 12 17:41:55.105785 containerd[1710]: time="2025-09-12T17:41:55.105727901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:55.108763 containerd[1710]: time="2025-09-12T17:41:55.108543590Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687325"
Sep 12 17:41:55.113426 containerd[1710]: time="2025-09-12T17:41:55.113364404Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:55.120599 containerd[1710]: time="2025-09-12T17:41:55.120536426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:55.121766 containerd[1710]: time="2025-09-12T17:41:55.121563789Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.09364694s"
Sep 12 17:41:55.121766 containerd[1710]: time="2025-09-12T17:41:55.121623589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 12 17:41:55.123464 containerd[1710]: time="2025-09-12T17:41:55.123431195Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 12 17:41:56.329524 containerd[1710]: time="2025-09-12T17:41:56.329475704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:56.336999 containerd[1710]: time="2025-09-12T17:41:56.336963047Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459767"
Sep 12 17:41:56.344723 containerd[1710]: time="2025-09-12T17:41:56.344689550Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:56.351077 containerd[1710]: time="2025-09-12T17:41:56.351018129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:56.352470 containerd[1710]: time="2025-09-12T17:41:56.352013452Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.228547617s"
Sep 12 17:41:56.352470 containerd[1710]: time="2025-09-12T17:41:56.352049292Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 12 17:41:56.353000 containerd[1710]: time="2025-09-12T17:41:56.352829414Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 17:41:57.348378 containerd[1710]: time="2025-09-12T17:41:57.348322010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:57.351891 containerd[1710]: time="2025-09-12T17:41:57.351859421Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127506"
Sep 12 17:41:57.355337 containerd[1710]: time="2025-09-12T17:41:57.355308311Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:57.360981 containerd[1710]: time="2025-09-12T17:41:57.360932688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:57.362022 containerd[1710]: time="2025-09-12T17:41:57.361994171Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.009136357s"
Sep 12 17:41:57.362220 containerd[1710]: time="2025-09-12T17:41:57.362117372Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 12 17:41:57.363069 containerd[1710]: time="2025-09-12T17:41:57.362781174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 17:41:58.288823 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 12 17:41:58.443376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897358918.mount: Deactivated successfully.
Sep 12 17:41:58.784380 containerd[1710]: time="2025-09-12T17:41:58.784316648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:58.790940 containerd[1710]: time="2025-09-12T17:41:58.790914984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907"
Sep 12 17:41:58.795666 containerd[1710]: time="2025-09-12T17:41:58.795624355Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:58.800751 containerd[1710]: time="2025-09-12T17:41:58.800703087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:58.801638 containerd[1710]: time="2025-09-12T17:41:58.801484969Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.438658915s"
Sep 12 17:41:58.801638 containerd[1710]: time="2025-09-12T17:41:58.801517489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 12 17:41:58.802342 containerd[1710]: time="2025-09-12T17:41:58.802060610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:41:59.598775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431465540.mount: Deactivated successfully.
Sep 12 17:42:00.463735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 12 17:42:00.472046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:00.597246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:00.601226 (kubelet)[2495]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:42:00.635329 kubelet[2495]: E0912 17:42:00.635231 2495 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:42:00.637443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:42:00.637580 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:42:01.824590 containerd[1710]: time="2025-09-12T17:42:01.824532110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:01.827675 containerd[1710]: time="2025-09-12T17:42:01.827611159Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 12 17:42:01.831400 containerd[1710]: time="2025-09-12T17:42:01.831351490Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:01.836619 containerd[1710]: time="2025-09-12T17:42:01.836562745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:01.837764 containerd[1710]: time="2025-09-12T17:42:01.837735189Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 3.035643218s"
Sep 12 17:42:01.837966 containerd[1710]: time="2025-09-12T17:42:01.837868229Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 12 17:42:01.838908 containerd[1710]: time="2025-09-12T17:42:01.838875352Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:42:02.535393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232141625.mount: Deactivated successfully.
Sep 12 17:42:02.560848 containerd[1710]: time="2025-09-12T17:42:02.560442173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:02.563033 containerd[1710]: time="2025-09-12T17:42:02.562987581Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 12 17:42:02.566460 containerd[1710]: time="2025-09-12T17:42:02.566422031Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:02.574491 containerd[1710]: time="2025-09-12T17:42:02.574428295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:02.575185 containerd[1710]: time="2025-09-12T17:42:02.575151177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 736.140664ms"
Sep 12 17:42:02.575295 containerd[1710]: time="2025-09-12T17:42:02.575278697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 17:42:02.576167 containerd[1710]: time="2025-09-12T17:42:02.575973819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 17:42:03.231074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2015163220.mount: Deactivated successfully.
Sep 12 17:42:03.461350 update_engine[1692]: I20250912 17:42:03.460757 1692 update_attempter.cc:509] Updating boot flags...
Sep 12 17:42:03.547859 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2553)
Sep 12 17:42:03.706895 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2552)
Sep 12 17:42:06.672860 containerd[1710]: time="2025-09-12T17:42:06.671976510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:06.675462 containerd[1710]: time="2025-09-12T17:42:06.675170279Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161"
Sep 12 17:42:06.678453 containerd[1710]: time="2025-09-12T17:42:06.678395169Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:06.687181 containerd[1710]: time="2025-09-12T17:42:06.687102355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:06.688597 containerd[1710]: time="2025-09-12T17:42:06.688234158Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.112223219s"
Sep 12 17:42:06.688597 containerd[1710]: time="2025-09-12T17:42:06.688281679Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 12 17:42:10.713960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 12 17:42:10.724189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:10.826962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:10.833494 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:42:10.932834 kubelet[2676]: E0912 17:42:10.932201 2676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:42:10.935989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:42:10.936120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:42:12.848337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:12.856246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:12.880670 systemd[1]: Reloading requested from client PID 2691 ('systemctl') (unit session-9.scope)...
Sep 12 17:42:12.880847 systemd[1]: Reloading...
Sep 12 17:42:12.988826 zram_generator::config[2731]: No configuration found.
Sep 12 17:42:13.096280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:42:13.174440 systemd[1]: Reloading finished in 293 ms.
Sep 12 17:42:13.222103 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:42:13.222207 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:42:13.223853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:13.229122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:13.445617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:13.460159 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:42:13.496658 kubelet[2798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:42:13.496658 kubelet[2798]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:42:13.496658 kubelet[2798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:42:13.497041 kubelet[2798]: I0912 17:42:13.496715 2798 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:42:14.368642 kubelet[2798]: I0912 17:42:14.368601 2798 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:42:14.368642 kubelet[2798]: I0912 17:42:14.368632 2798 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:42:14.368920 kubelet[2798]: I0912 17:42:14.368899 2798 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:42:14.392680 kubelet[2798]: I0912 17:42:14.392643 2798 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:42:14.393071 kubelet[2798]: E0912 17:42:14.393036 2798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:14.402324 kubelet[2798]: E0912 17:42:14.402283 2798 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:42:14.402537 kubelet[2798]: I0912 17:42:14.402470 2798 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:42:14.406461 kubelet[2798]: I0912 17:42:14.406374 2798 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:42:14.407189 kubelet[2798]: I0912 17:42:14.407168 2798 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:42:14.407835 kubelet[2798]: I0912 17:42:14.407413 2798 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:42:14.407835 kubelet[2798]: I0912 17:42:14.407442 2798 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-a-d7d9773d19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:42:14.407835 kubelet[2798]: I0912 17:42:14.407618 2798 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:42:14.407835 kubelet[2798]: I0912 17:42:14.407626 2798 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:42:14.408067 kubelet[2798]: I0912 17:42:14.407741 2798 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:42:14.410220 kubelet[2798]: I0912 17:42:14.409992 2798 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:42:14.410220 kubelet[2798]: I0912 17:42:14.410020 2798 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:42:14.410220 kubelet[2798]: I0912 17:42:14.410043 2798 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:42:14.410220 kubelet[2798]: I0912 17:42:14.410059 2798 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:42:14.413145 kubelet[2798]: W0912 17:42:14.412886 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-d7d9773d19&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Sep 12 17:42:14.413145 kubelet[2798]: E0912 17:42:14.412952 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-d7d9773d19&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:14.414730 kubelet[2798]: W0912 17:42:14.414693 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Sep 12 17:42:14.415834 kubelet[2798]: E0912 17:42:14.414843 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:14.415834 kubelet[2798]: I0912 17:42:14.414930 2798 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:42:14.415834 kubelet[2798]: I0912 17:42:14.415364 2798 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:42:14.415834 kubelet[2798]: W0912 17:42:14.415402 2798 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:42:14.416095 kubelet[2798]: I0912 17:42:14.416080 2798 server.go:1274] "Started kubelet"
Sep 12 17:42:14.420920 kubelet[2798]: I0912 17:42:14.420902 2798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:42:14.423288 kubelet[2798]: E0912 17:42:14.422106 2798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-a-d7d9773d19.186499de3ff3a590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-a-d7d9773d19,UID:ci-4081.3.6-a-d7d9773d19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-a-d7d9773d19,},FirstTimestamp:2025-09-12 17:42:14.416057744 +0000 UTC m=+0.953060782,LastTimestamp:2025-09-12 17:42:14.416057744 +0000 UTC m=+0.953060782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-a-d7d9773d19,}"
Sep 12 17:42:14.423604 kubelet[2798]: I0912 17:42:14.423585 2798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:42:14.423690 kubelet[2798]: I0912 17:42:14.423664 2798 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:42:14.424617 kubelet[2798]: I0912 17:42:14.424582 2798 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:42:14.425459 kubelet[2798]: I0912 17:42:14.425404 2798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:42:14.425630 kubelet[2798]: I0912 17:42:14.425609 2798 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:42:14.426560 kubelet[2798]: I0912 17:42:14.426272 2798 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:42:14.427409 kubelet[2798]: I0912 17:42:14.427376 2798 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:42:14.427477 kubelet[2798]: I0912 17:42:14.427447 2798 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:42:14.427753 kubelet[2798]: W0912 17:42:14.427710 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Sep 12 17:42:14.427869 kubelet[2798]: E0912 17:42:14.427758 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:42:14.429372 kubelet[2798]: E0912 17:42:14.428848 2798 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-d7d9773d19\" not found"
Sep 12 17:42:14.429372 kubelet[2798]: E0912 17:42:14.429090 2798 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:42:14.429372 kubelet[2798]: E0912 17:42:14.429316 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-d7d9773d19?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="200ms"
Sep 12 17:42:14.430179 kubelet[2798]: I0912 17:42:14.430162 2798 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:42:14.430434 kubelet[2798]: I0912 17:42:14.430415 2798 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:42:14.432065 kubelet[2798]: I0912 17:42:14.432047 2798 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:42:14.486964 kubelet[2798]: I0912 17:42:14.486919 2798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:42:14.488126 kubelet[2798]: I0912 17:42:14.487941 2798 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 12 17:42:14.488126 kubelet[2798]: I0912 17:42:14.488014 2798 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:42:14.488126 kubelet[2798]: I0912 17:42:14.488034 2798 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:42:14.488126 kubelet[2798]: E0912 17:42:14.488075 2798 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:42:14.490843 kubelet[2798]: W0912 17:42:14.490769 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Sep 12 17:42:14.490964 kubelet[2798]: E0912 17:42:14.490944 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:42:14.491995 kubelet[2798]: I0912 17:42:14.491926 2798 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:42:14.492308 kubelet[2798]: I0912 17:42:14.492108 2798 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:42:14.492308 kubelet[2798]: I0912 17:42:14.492128 2798 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:42:14.497509 kubelet[2798]: I0912 17:42:14.497490 2798 policy_none.go:49] "None policy: Start" Sep 12 17:42:14.498639 kubelet[2798]: I0912 17:42:14.498374 2798 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:42:14.498639 kubelet[2798]: I0912 17:42:14.498395 2798 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:42:14.503418 kubelet[2798]: E0912 17:42:14.503046 2798 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-a-d7d9773d19.186499de3ff3a590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-a-d7d9773d19,UID:ci-4081.3.6-a-d7d9773d19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-a-d7d9773d19,},FirstTimestamp:2025-09-12 17:42:14.416057744 +0000 UTC m=+0.953060782,LastTimestamp:2025-09-12 17:42:14.416057744 +0000 UTC m=+0.953060782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-a-d7d9773d19,}" Sep 12 17:42:14.507385 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:42:14.516239 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:42:14.519666 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 17:42:14.527828 kubelet[2798]: I0912 17:42:14.527608 2798 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:42:14.527828 kubelet[2798]: I0912 17:42:14.527831 2798 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:42:14.527950 kubelet[2798]: I0912 17:42:14.527842 2798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:42:14.528939 kubelet[2798]: I0912 17:42:14.528248 2798 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:42:14.530738 kubelet[2798]: E0912 17:42:14.530617 2798 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-a-d7d9773d19\" not found" Sep 12 17:42:14.598673 systemd[1]: Created slice kubepods-burstable-pod4718d3bcd2abd155e22fadd6b35f5414.slice - libcontainer container kubepods-burstable-pod4718d3bcd2abd155e22fadd6b35f5414.slice. Sep 12 17:42:14.614145 systemd[1]: Created slice kubepods-burstable-podf5e575fed6ffde175d2dbc39c8199725.slice - libcontainer container kubepods-burstable-podf5e575fed6ffde175d2dbc39c8199725.slice. Sep 12 17:42:14.617922 systemd[1]: Created slice kubepods-burstable-podeeb3f0861972aeb5ee461d56a7cacdef.slice - libcontainer container kubepods-burstable-podeeb3f0861972aeb5ee461d56a7cacdef.slice. 
Sep 12 17:42:14.630481 kubelet[2798]: I0912 17:42:14.630098 2798 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.630481 kubelet[2798]: E0912 17:42:14.630388 2798 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.630930 kubelet[2798]: E0912 17:42:14.630837 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-d7d9773d19?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="400ms" Sep 12 17:42:14.727764 kubelet[2798]: I0912 17:42:14.727729 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4718d3bcd2abd155e22fadd6b35f5414-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-d7d9773d19\" (UID: \"4718d3bcd2abd155e22fadd6b35f5414\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.727933 kubelet[2798]: I0912 17:42:14.727914 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.728176 kubelet[2798]: I0912 17:42:14.728032 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.728176 kubelet[2798]: I0912 17:42:14.728061 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5e575fed6ffde175d2dbc39c8199725-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-d7d9773d19\" (UID: \"f5e575fed6ffde175d2dbc39c8199725\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.728176 kubelet[2798]: I0912 17:42:14.728087 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4718d3bcd2abd155e22fadd6b35f5414-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-d7d9773d19\" (UID: \"4718d3bcd2abd155e22fadd6b35f5414\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.728176 kubelet[2798]: I0912 17:42:14.728106 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4718d3bcd2abd155e22fadd6b35f5414-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-d7d9773d19\" (UID: \"4718d3bcd2abd155e22fadd6b35f5414\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.728176 kubelet[2798]: I0912 17:42:14.728122 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.728306 kubelet[2798]: I0912 17:42:14.728139 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.728306 kubelet[2798]: I0912 17:42:14.728157 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.832820 kubelet[2798]: I0912 17:42:14.832733 2798 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.833122 kubelet[2798]: E0912 17:42:14.833081 2798 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:14.914282 containerd[1710]: time="2025-09-12T17:42:14.914170834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-d7d9773d19,Uid:4718d3bcd2abd155e22fadd6b35f5414,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:14.916820 containerd[1710]: time="2025-09-12T17:42:14.916632497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-d7d9773d19,Uid:f5e575fed6ffde175d2dbc39c8199725,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:14.920581 containerd[1710]: time="2025-09-12T17:42:14.920412933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-d7d9773d19,Uid:eeb3f0861972aeb5ee461d56a7cacdef,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:15.032024 kubelet[2798]: E0912 17:42:15.031981 2798 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-d7d9773d19?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="800ms" Sep 12 17:42:15.235318 kubelet[2798]: I0912 17:42:15.235267 2798 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:15.235839 kubelet[2798]: E0912 17:42:15.235809 2798 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:15.571789 kubelet[2798]: W0912 17:42:15.571660 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-d7d9773d19&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Sep 12 17:42:15.571789 kubelet[2798]: E0912 17:42:15.571723 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-d7d9773d19&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:42:15.656823 kubelet[2798]: W0912 17:42:15.656704 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Sep 12 17:42:15.656823 kubelet[2798]: E0912 17:42:15.656768 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:42:15.680765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165375516.mount: Deactivated successfully. Sep 12 17:42:15.761950 kubelet[2798]: W0912 17:42:15.761888 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Sep 12 17:42:15.762072 kubelet[2798]: E0912 17:42:15.761956 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:42:15.832939 kubelet[2798]: E0912 17:42:15.832826 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-d7d9773d19?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="1.6s" Sep 12 17:42:16.037617 kubelet[2798]: I0912 17:42:16.037583 2798 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:16.038045 kubelet[2798]: E0912 17:42:16.038018 2798 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:16.044434 kubelet[2798]: W0912 17:42:16.044406 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Sep 12 17:42:16.044471 kubelet[2798]: E0912 17:42:16.044447 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:42:16.471838 containerd[1710]: time="2025-09-12T17:42:16.471617346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:42:16.477408 containerd[1710]: time="2025-09-12T17:42:16.477368562Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 17:42:16.480028 containerd[1710]: time="2025-09-12T17:42:16.479990707Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:42:16.483452 containerd[1710]: time="2025-09-12T17:42:16.482677132Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:42:16.485214 containerd[1710]: time="2025-09-12T17:42:16.485173796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:42:16.488703 containerd[1710]: time="2025-09-12T17:42:16.488659030Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Sep 12 17:42:16.491100 containerd[1710]: time="2025-09-12T17:42:16.491043612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:42:16.495517 containerd[1710]: time="2025-09-12T17:42:16.495482095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:42:16.497826 containerd[1710]: time="2025-09-12T17:42:16.496064341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.579370482s" Sep 12 17:42:16.497957 containerd[1710]: time="2025-09-12T17:42:16.497920638Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.577452624s" Sep 12 17:42:16.500179 containerd[1710]: time="2025-09-12T17:42:16.500140020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.585882785s" Sep 12 17:42:16.542693 kubelet[2798]: E0912 17:42:16.542649 2798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: 
cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:42:17.068138 containerd[1710]: time="2025-09-12T17:42:17.067439012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:17.068138 containerd[1710]: time="2025-09-12T17:42:17.067502252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:17.068138 containerd[1710]: time="2025-09-12T17:42:17.067520812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:17.068404 containerd[1710]: time="2025-09-12T17:42:17.067455812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:17.068404 containerd[1710]: time="2025-09-12T17:42:17.067502932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:17.068404 containerd[1710]: time="2025-09-12T17:42:17.067513772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:17.069364 containerd[1710]: time="2025-09-12T17:42:17.069318298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:17.069914 containerd[1710]: time="2025-09-12T17:42:17.069829459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:17.078879 containerd[1710]: time="2025-09-12T17:42:17.078759526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:17.078879 containerd[1710]: time="2025-09-12T17:42:17.078828207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:17.078879 containerd[1710]: time="2025-09-12T17:42:17.078858647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:17.079140 containerd[1710]: time="2025-09-12T17:42:17.078932687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:17.116051 systemd[1]: Started cri-containerd-171c6401a614352044da29958007e0f85059c6969a59cbd6e8a7ef6c9ebba9a8.scope - libcontainer container 171c6401a614352044da29958007e0f85059c6969a59cbd6e8a7ef6c9ebba9a8. Sep 12 17:42:17.117064 systemd[1]: Started cri-containerd-219b92ce987f45a5b4465179f15617e8f8521e66e3ba2a93c926796c552e9893.scope - libcontainer container 219b92ce987f45a5b4465179f15617e8f8521e66e3ba2a93c926796c552e9893. Sep 12 17:42:17.119998 systemd[1]: Started cri-containerd-77d6dbb2a1c3028987e0c546c023cf44eafe3c95aa109e21ec6decb84640ed38.scope - libcontainer container 77d6dbb2a1c3028987e0c546c023cf44eafe3c95aa109e21ec6decb84640ed38. 
Sep 12 17:42:17.159108 containerd[1710]: time="2025-09-12T17:42:17.159002010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-d7d9773d19,Uid:eeb3f0861972aeb5ee461d56a7cacdef,Namespace:kube-system,Attempt:0,} returns sandbox id \"219b92ce987f45a5b4465179f15617e8f8521e66e3ba2a93c926796c552e9893\"" Sep 12 17:42:17.163377 containerd[1710]: time="2025-09-12T17:42:17.163334983Z" level=info msg="CreateContainer within sandbox \"219b92ce987f45a5b4465179f15617e8f8521e66e3ba2a93c926796c552e9893\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:42:17.182471 containerd[1710]: time="2025-09-12T17:42:17.182427121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-d7d9773d19,Uid:f5e575fed6ffde175d2dbc39c8199725,Namespace:kube-system,Attempt:0,} returns sandbox id \"171c6401a614352044da29958007e0f85059c6969a59cbd6e8a7ef6c9ebba9a8\"" Sep 12 17:42:17.182915 containerd[1710]: time="2025-09-12T17:42:17.182555121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-d7d9773d19,Uid:4718d3bcd2abd155e22fadd6b35f5414,Namespace:kube-system,Attempt:0,} returns sandbox id \"77d6dbb2a1c3028987e0c546c023cf44eafe3c95aa109e21ec6decb84640ed38\"" Sep 12 17:42:17.185713 containerd[1710]: time="2025-09-12T17:42:17.185478610Z" level=info msg="CreateContainer within sandbox \"77d6dbb2a1c3028987e0c546c023cf44eafe3c95aa109e21ec6decb84640ed38\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:42:17.185928 containerd[1710]: time="2025-09-12T17:42:17.185913891Z" level=info msg="CreateContainer within sandbox \"171c6401a614352044da29958007e0f85059c6969a59cbd6e8a7ef6c9ebba9a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:42:17.225218 containerd[1710]: time="2025-09-12T17:42:17.225168410Z" level=info msg="CreateContainer within sandbox 
\"219b92ce987f45a5b4465179f15617e8f8521e66e3ba2a93c926796c552e9893\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4c34e43329b9990a926aa3ca0971876c66c998d367e0dd19b94463d962926d57\"" Sep 12 17:42:17.226289 containerd[1710]: time="2025-09-12T17:42:17.226261214Z" level=info msg="StartContainer for \"4c34e43329b9990a926aa3ca0971876c66c998d367e0dd19b94463d962926d57\"" Sep 12 17:42:17.249953 systemd[1]: Started cri-containerd-4c34e43329b9990a926aa3ca0971876c66c998d367e0dd19b94463d962926d57.scope - libcontainer container 4c34e43329b9990a926aa3ca0971876c66c998d367e0dd19b94463d962926d57. Sep 12 17:42:17.265065 containerd[1710]: time="2025-09-12T17:42:17.265021291Z" level=info msg="CreateContainer within sandbox \"77d6dbb2a1c3028987e0c546c023cf44eafe3c95aa109e21ec6decb84640ed38\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9fc692467d94f74e39fc44d0cb8d5a96acd36ff900ba5abffdaedd5b3dbe4b4\"" Sep 12 17:42:17.266976 containerd[1710]: time="2025-09-12T17:42:17.265856974Z" level=info msg="StartContainer for \"f9fc692467d94f74e39fc44d0cb8d5a96acd36ff900ba5abffdaedd5b3dbe4b4\"" Sep 12 17:42:17.279114 containerd[1710]: time="2025-09-12T17:42:17.279070574Z" level=info msg="CreateContainer within sandbox \"171c6401a614352044da29958007e0f85059c6969a59cbd6e8a7ef6c9ebba9a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"843dd8eecb7799df800b574e0493d16c8e380215135ce80b1397484352f7d791\"" Sep 12 17:42:17.281291 containerd[1710]: time="2025-09-12T17:42:17.281249301Z" level=info msg="StartContainer for \"843dd8eecb7799df800b574e0493d16c8e380215135ce80b1397484352f7d791\"" Sep 12 17:42:17.312057 systemd[1]: Started cri-containerd-f9fc692467d94f74e39fc44d0cb8d5a96acd36ff900ba5abffdaedd5b3dbe4b4.scope - libcontainer container f9fc692467d94f74e39fc44d0cb8d5a96acd36ff900ba5abffdaedd5b3dbe4b4. 
Sep 12 17:42:17.313182 containerd[1710]: time="2025-09-12T17:42:17.312386355Z" level=info msg="StartContainer for \"4c34e43329b9990a926aa3ca0971876c66c998d367e0dd19b94463d962926d57\" returns successfully" Sep 12 17:42:17.334043 systemd[1]: Started cri-containerd-843dd8eecb7799df800b574e0493d16c8e380215135ce80b1397484352f7d791.scope - libcontainer container 843dd8eecb7799df800b574e0493d16c8e380215135ce80b1397484352f7d791. Sep 12 17:42:17.384438 containerd[1710]: time="2025-09-12T17:42:17.384271933Z" level=info msg="StartContainer for \"f9fc692467d94f74e39fc44d0cb8d5a96acd36ff900ba5abffdaedd5b3dbe4b4\" returns successfully" Sep 12 17:42:17.410471 containerd[1710]: time="2025-09-12T17:42:17.410358852Z" level=info msg="StartContainer for \"843dd8eecb7799df800b574e0493d16c8e380215135ce80b1397484352f7d791\" returns successfully" Sep 12 17:42:17.433361 kubelet[2798]: E0912 17:42:17.433306 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-d7d9773d19?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="3.2s" Sep 12 17:42:17.640674 kubelet[2798]: I0912 17:42:17.640535 2798 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:20.067450 kubelet[2798]: I0912 17:42:20.067402 2798 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:20.067450 kubelet[2798]: E0912 17:42:20.067446 2798 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-a-d7d9773d19\": node \"ci-4081.3.6-a-d7d9773d19\" not found" Sep 12 17:42:20.416892 kubelet[2798]: I0912 17:42:20.416631 2798 apiserver.go:52] "Watching apiserver" Sep 12 17:42:20.428297 kubelet[2798]: I0912 17:42:20.428239 2798 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 
17:42:21.303654 kubelet[2798]: W0912 17:42:21.303477 2798 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:42:22.006526 systemd[1]: Reloading requested from client PID 3070 ('systemctl') (unit session-9.scope)... Sep 12 17:42:22.006541 systemd[1]: Reloading... Sep 12 17:42:22.090840 zram_generator::config[3108]: No configuration found. Sep 12 17:42:22.205249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:42:22.295316 systemd[1]: Reloading finished in 288 ms. Sep 12 17:42:22.330901 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:42:22.344737 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:42:22.344983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:42:22.345036 systemd[1]: kubelet.service: Consumed 1.285s CPU time, 126.0M memory peak, 0B memory swap peak. Sep 12 17:42:22.351092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:42:22.701052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:42:22.704911 (kubelet)[3174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:42:22.746213 kubelet[3174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:42:22.746531 kubelet[3174]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:42:22.746584 kubelet[3174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:42:22.746746 kubelet[3174]: I0912 17:42:22.746714 3174 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:42:22.755307 kubelet[3174]: I0912 17:42:22.755275 3174 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:42:22.755488 kubelet[3174]: I0912 17:42:22.755477 3174 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:42:22.755825 kubelet[3174]: I0912 17:42:22.755786 3174 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:42:22.757348 kubelet[3174]: I0912 17:42:22.757311 3174 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:42:22.760455 kubelet[3174]: I0912 17:42:22.760277 3174 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:42:22.765703 kubelet[3174]: E0912 17:42:22.765679 3174 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:42:22.765894 kubelet[3174]: I0912 17:42:22.765883 3174 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:42:22.769475 kubelet[3174]: I0912 17:42:22.769439 3174 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:42:22.769742 kubelet[3174]: I0912 17:42:22.769729 3174 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:42:22.770079 kubelet[3174]: I0912 17:42:22.770041 3174 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:42:22.770328 kubelet[3174]: I0912 17:42:22.770160 3174 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-a-d7d9773d19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:42:22.770528 kubelet[3174]: I0912 17:42:22.770507 3174 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:42:22.770757 kubelet[3174]: I0912 17:42:22.770677 3174 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:42:22.771160 kubelet[3174]: I0912 17:42:22.770999 3174 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:42:22.771352 kubelet[3174]: I0912 17:42:22.771310 3174 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:42:22.771352 kubelet[3174]: I0912 17:42:22.771328 3174 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:42:22.773129 kubelet[3174]: I0912 17:42:22.771789 3174 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:42:22.773129 kubelet[3174]: I0912 17:42:22.771947 3174 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:42:22.782728 kubelet[3174]: I0912 17:42:22.778649 3174 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:42:22.782728 kubelet[3174]: I0912 17:42:22.779267 3174 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:42:22.782728 kubelet[3174]: I0912 17:42:22.780522 3174 server.go:1274] "Started kubelet" Sep 12 17:42:22.783245 kubelet[3174]: I0912 17:42:22.783209 3174 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:42:22.784556 kubelet[3174]: I0912 17:42:22.784245 3174 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:42:22.784915 kubelet[3174]: I0912 17:42:22.784867 3174 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:42:22.785904 kubelet[3174]: I0912 17:42:22.785497 3174 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 
17:42:22.787808 kubelet[3174]: I0912 17:42:22.786446 3174 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:42:22.803265 kubelet[3174]: I0912 17:42:22.803226 3174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:42:22.804765 kubelet[3174]: I0912 17:42:22.804731 3174 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:42:22.805911 kubelet[3174]: E0912 17:42:22.805888 3174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-d7d9773d19\" not found" Sep 12 17:42:22.807728 kubelet[3174]: I0912 17:42:22.807711 3174 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:42:22.808226 kubelet[3174]: I0912 17:42:22.808212 3174 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:42:22.810882 kubelet[3174]: I0912 17:42:22.810850 3174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:42:22.812820 kubelet[3174]: I0912 17:42:22.812776 3174 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:42:22.812913 kubelet[3174]: I0912 17:42:22.812904 3174 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:42:22.813032 kubelet[3174]: I0912 17:42:22.813021 3174 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:42:22.813137 kubelet[3174]: E0912 17:42:22.813120 3174 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:42:22.818817 kubelet[3174]: I0912 17:42:22.817250 3174 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:42:22.818817 kubelet[3174]: I0912 17:42:22.817363 3174 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:42:22.822111 kubelet[3174]: E0912 17:42:22.822084 3174 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:42:22.824404 kubelet[3174]: I0912 17:42:22.824069 3174 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:42:22.869058 kubelet[3174]: I0912 17:42:22.869036 3174 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:42:22.869280 kubelet[3174]: I0912 17:42:22.869207 3174 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:42:22.869280 kubelet[3174]: I0912 17:42:22.869232 3174 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:42:22.869590 kubelet[3174]: I0912 17:42:22.869489 3174 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:42:22.869590 kubelet[3174]: I0912 17:42:22.869515 3174 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:42:22.869590 kubelet[3174]: I0912 17:42:22.869536 3174 policy_none.go:49] "None policy: Start" Sep 12 17:42:22.870778 kubelet[3174]: I0912 17:42:22.870690 3174 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:42:22.870778 kubelet[3174]: I0912 17:42:22.870718 3174 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:42:22.870980 kubelet[3174]: I0912 17:42:22.870949 3174 state_mem.go:75] "Updated machine memory state" Sep 12 17:42:22.875221 kubelet[3174]: I0912 17:42:22.875173 3174 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:42:22.875371 kubelet[3174]: I0912 17:42:22.875351 3174 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:42:22.875421 kubelet[3174]: I0912 17:42:22.875376 3174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:42:22.876499 kubelet[3174]: I0912 17:42:22.875936 3174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:42:22.927657 kubelet[3174]: W0912 17:42:22.927386 3174 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:42:22.927657 kubelet[3174]: W0912 17:42:22.927566 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:42:22.928165 kubelet[3174]: E0912 17:42:22.927676 3174 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.6-a-d7d9773d19\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:22.928165 kubelet[3174]: W0912 17:42:22.927965 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:42:22.978786 kubelet[3174]: I0912 17:42:22.978752 3174 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:22.999731 kubelet[3174]: I0912 17:42:22.999688 3174 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:22.999881 kubelet[3174]: I0912 17:42:22.999778 3174 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010467 kubelet[3174]: I0912 17:42:23.010250 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4718d3bcd2abd155e22fadd6b35f5414-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-d7d9773d19\" (UID: \"4718d3bcd2abd155e22fadd6b35f5414\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010467 kubelet[3174]: I0912 17:42:23.010290 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010467 kubelet[3174]: I0912 17:42:23.010309 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010467 kubelet[3174]: I0912 17:42:23.010339 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5e575fed6ffde175d2dbc39c8199725-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-d7d9773d19\" (UID: \"f5e575fed6ffde175d2dbc39c8199725\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010467 kubelet[3174]: I0912 17:42:23.010354 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4718d3bcd2abd155e22fadd6b35f5414-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-d7d9773d19\" (UID: \"4718d3bcd2abd155e22fadd6b35f5414\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010689 kubelet[3174]: I0912 17:42:23.010369 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010689 kubelet[3174]: I0912 
17:42:23.010385 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010689 kubelet[3174]: I0912 17:42:23.010401 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eeb3f0861972aeb5ee461d56a7cacdef-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-d7d9773d19\" (UID: \"eeb3f0861972aeb5ee461d56a7cacdef\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.010689 kubelet[3174]: I0912 17:42:23.010414 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4718d3bcd2abd155e22fadd6b35f5414-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-d7d9773d19\" (UID: \"4718d3bcd2abd155e22fadd6b35f5414\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-d7d9773d19" Sep 12 17:42:23.778637 kubelet[3174]: I0912 17:42:23.778596 3174 apiserver.go:52] "Watching apiserver" Sep 12 17:42:23.808920 kubelet[3174]: I0912 17:42:23.808883 3174 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:42:23.884681 kubelet[3174]: I0912 17:42:23.884277 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-a-d7d9773d19" podStartSLOduration=1.8842581900000002 podStartE2EDuration="1.88425819s" podCreationTimestamp="2025-09-12 17:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:23.882889345 +0000 UTC m=+1.174887652" 
watchObservedRunningTime="2025-09-12 17:42:23.88425819 +0000 UTC m=+1.176256457" Sep 12 17:42:23.884681 kubelet[3174]: I0912 17:42:23.884405 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-d7d9773d19" podStartSLOduration=1.884399591 podStartE2EDuration="1.884399591s" podCreationTimestamp="2025-09-12 17:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:23.868317531 +0000 UTC m=+1.160315838" watchObservedRunningTime="2025-09-12 17:42:23.884399591 +0000 UTC m=+1.176397898" Sep 12 17:42:23.916154 kubelet[3174]: I0912 17:42:23.916092 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-a-d7d9773d19" podStartSLOduration=2.916074147 podStartE2EDuration="2.916074147s" podCreationTimestamp="2025-09-12 17:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:23.898386882 +0000 UTC m=+1.190385189" watchObservedRunningTime="2025-09-12 17:42:23.916074147 +0000 UTC m=+1.208072454" Sep 12 17:42:28.057718 kubelet[3174]: I0912 17:42:28.057677 3174 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:42:28.058259 containerd[1710]: time="2025-09-12T17:42:28.058088536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:42:28.061051 kubelet[3174]: I0912 17:42:28.060823 3174 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:42:29.026254 systemd[1]: Created slice kubepods-besteffort-podafd2e3cd_bf42_4702_9cec_da919246b738.slice - libcontainer container kubepods-besteffort-podafd2e3cd_bf42_4702_9cec_da919246b738.slice. 
Sep 12 17:42:29.052932 kubelet[3174]: I0912 17:42:29.052638 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afd2e3cd-bf42-4702-9cec-da919246b738-kube-proxy\") pod \"kube-proxy-h6xrw\" (UID: \"afd2e3cd-bf42-4702-9cec-da919246b738\") " pod="kube-system/kube-proxy-h6xrw" Sep 12 17:42:29.052932 kubelet[3174]: I0912 17:42:29.052837 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afd2e3cd-bf42-4702-9cec-da919246b738-lib-modules\") pod \"kube-proxy-h6xrw\" (UID: \"afd2e3cd-bf42-4702-9cec-da919246b738\") " pod="kube-system/kube-proxy-h6xrw" Sep 12 17:42:29.052932 kubelet[3174]: I0912 17:42:29.052862 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afd2e3cd-bf42-4702-9cec-da919246b738-xtables-lock\") pod \"kube-proxy-h6xrw\" (UID: \"afd2e3cd-bf42-4702-9cec-da919246b738\") " pod="kube-system/kube-proxy-h6xrw" Sep 12 17:42:29.052932 kubelet[3174]: I0912 17:42:29.052878 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vd4k\" (UniqueName: \"kubernetes.io/projected/afd2e3cd-bf42-4702-9cec-da919246b738-kube-api-access-2vd4k\") pod \"kube-proxy-h6xrw\" (UID: \"afd2e3cd-bf42-4702-9cec-da919246b738\") " pod="kube-system/kube-proxy-h6xrw" Sep 12 17:42:29.156476 systemd[1]: Created slice kubepods-besteffort-pod6698a6a1_a88c_475c_8c54_2d3743cb8bc5.slice - libcontainer container kubepods-besteffort-pod6698a6a1_a88c_475c_8c54_2d3743cb8bc5.slice. 
Sep 12 17:42:29.253948 kubelet[3174]: I0912 17:42:29.253909 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6698a6a1-a88c-475c-8c54-2d3743cb8bc5-var-lib-calico\") pod \"tigera-operator-58fc44c59b-g762r\" (UID: \"6698a6a1-a88c-475c-8c54-2d3743cb8bc5\") " pod="tigera-operator/tigera-operator-58fc44c59b-g762r" Sep 12 17:42:29.254478 kubelet[3174]: I0912 17:42:29.254413 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz6mf\" (UniqueName: \"kubernetes.io/projected/6698a6a1-a88c-475c-8c54-2d3743cb8bc5-kube-api-access-wz6mf\") pod \"tigera-operator-58fc44c59b-g762r\" (UID: \"6698a6a1-a88c-475c-8c54-2d3743cb8bc5\") " pod="tigera-operator/tigera-operator-58fc44c59b-g762r" Sep 12 17:42:29.336426 containerd[1710]: time="2025-09-12T17:42:29.336303488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6xrw,Uid:afd2e3cd-bf42-4702-9cec-da919246b738,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:29.382959 containerd[1710]: time="2025-09-12T17:42:29.382788139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:29.383189 containerd[1710]: time="2025-09-12T17:42:29.382972980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:29.383189 containerd[1710]: time="2025-09-12T17:42:29.382999020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:29.383189 containerd[1710]: time="2025-09-12T17:42:29.383130300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:29.405989 systemd[1]: Started cri-containerd-c9a2ee5ebe71b0bb7b777708dfd9278378dce9fb6787f1988c2104e196f3983d.scope - libcontainer container c9a2ee5ebe71b0bb7b777708dfd9278378dce9fb6787f1988c2104e196f3983d. Sep 12 17:42:29.430000 containerd[1710]: time="2025-09-12T17:42:29.429883473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6xrw,Uid:afd2e3cd-bf42-4702-9cec-da919246b738,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9a2ee5ebe71b0bb7b777708dfd9278378dce9fb6787f1988c2104e196f3983d\"" Sep 12 17:42:29.433390 containerd[1710]: time="2025-09-12T17:42:29.433153565Z" level=info msg="CreateContainer within sandbox \"c9a2ee5ebe71b0bb7b777708dfd9278378dce9fb6787f1988c2104e196f3983d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:42:29.462823 containerd[1710]: time="2025-09-12T17:42:29.462497913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-g762r,Uid:6698a6a1-a88c-475c-8c54-2d3743cb8bc5,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:42:29.472548 containerd[1710]: time="2025-09-12T17:42:29.472504030Z" level=info msg="CreateContainer within sandbox \"c9a2ee5ebe71b0bb7b777708dfd9278378dce9fb6787f1988c2104e196f3983d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ca6272386d777d463c2bdf098221c35c7515b465813196d4ade9c1ee1fefc9af\"" Sep 12 17:42:29.474838 containerd[1710]: time="2025-09-12T17:42:29.474770398Z" level=info msg="StartContainer for \"ca6272386d777d463c2bdf098221c35c7515b465813196d4ade9c1ee1fefc9af\"" Sep 12 17:42:29.503222 systemd[1]: Started cri-containerd-ca6272386d777d463c2bdf098221c35c7515b465813196d4ade9c1ee1fefc9af.scope - libcontainer container ca6272386d777d463c2bdf098221c35c7515b465813196d4ade9c1ee1fefc9af. Sep 12 17:42:29.519012 containerd[1710]: time="2025-09-12T17:42:29.518892041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:29.519012 containerd[1710]: time="2025-09-12T17:42:29.518951721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:29.519245 containerd[1710]: time="2025-09-12T17:42:29.518971921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:29.519245 containerd[1710]: time="2025-09-12T17:42:29.519061641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:29.539983 systemd[1]: Started cri-containerd-1ef95d585680eb6eb6b17dac7e5a34e12e2468ccfdadefd96006bbe7641d2cae.scope - libcontainer container 1ef95d585680eb6eb6b17dac7e5a34e12e2468ccfdadefd96006bbe7641d2cae. Sep 12 17:42:29.556917 containerd[1710]: time="2025-09-12T17:42:29.556845220Z" level=info msg="StartContainer for \"ca6272386d777d463c2bdf098221c35c7515b465813196d4ade9c1ee1fefc9af\" returns successfully" Sep 12 17:42:29.582545 containerd[1710]: time="2025-09-12T17:42:29.582499032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-g762r,Uid:6698a6a1-a88c-475c-8c54-2d3743cb8bc5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1ef95d585680eb6eb6b17dac7e5a34e12e2468ccfdadefd96006bbe7641d2cae\"" Sep 12 17:42:29.585111 containerd[1710]: time="2025-09-12T17:42:29.585062922Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:42:29.899013 kubelet[3174]: I0912 17:42:29.898855 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h6xrw" podStartSLOduration=0.898835376 podStartE2EDuration="898.835376ms" podCreationTimestamp="2025-09-12 17:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-12 17:42:29.898741135 +0000 UTC m=+7.190739442" watchObservedRunningTime="2025-09-12 17:42:29.898835376 +0000 UTC m=+7.190833683" Sep 12 17:42:31.175871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249990315.mount: Deactivated successfully. Sep 12 17:42:31.686842 containerd[1710]: time="2025-09-12T17:42:31.686721039Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:31.695251 containerd[1710]: time="2025-09-12T17:42:31.695203349Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 17:42:31.700329 containerd[1710]: time="2025-09-12T17:42:31.700273488Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:31.705034 containerd[1710]: time="2025-09-12T17:42:31.704701984Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:31.705555 containerd[1710]: time="2025-09-12T17:42:31.705520187Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.120415945s" Sep 12 17:42:31.705607 containerd[1710]: time="2025-09-12T17:42:31.705554227Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 17:42:31.708140 containerd[1710]: time="2025-09-12T17:42:31.707931035Z" level=info 
msg="CreateContainer within sandbox \"1ef95d585680eb6eb6b17dac7e5a34e12e2468ccfdadefd96006bbe7641d2cae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:42:31.744875 containerd[1710]: time="2025-09-12T17:42:31.744827329Z" level=info msg="CreateContainer within sandbox \"1ef95d585680eb6eb6b17dac7e5a34e12e2468ccfdadefd96006bbe7641d2cae\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"81d006bfb4c82066a3447aad2a5018e8a4fd14b012d8f4a8e88cfb0b459b12e8\"" Sep 12 17:42:31.746848 containerd[1710]: time="2025-09-12T17:42:31.745585532Z" level=info msg="StartContainer for \"81d006bfb4c82066a3447aad2a5018e8a4fd14b012d8f4a8e88cfb0b459b12e8\"" Sep 12 17:42:31.780042 systemd[1]: Started cri-containerd-81d006bfb4c82066a3447aad2a5018e8a4fd14b012d8f4a8e88cfb0b459b12e8.scope - libcontainer container 81d006bfb4c82066a3447aad2a5018e8a4fd14b012d8f4a8e88cfb0b459b12e8. Sep 12 17:42:31.805868 containerd[1710]: time="2025-09-12T17:42:31.805779949Z" level=info msg="StartContainer for \"81d006bfb4c82066a3447aad2a5018e8a4fd14b012d8f4a8e88cfb0b459b12e8\" returns successfully" Sep 12 17:42:32.548449 kubelet[3174]: I0912 17:42:32.548314 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-g762r" podStartSLOduration=1.425989561 podStartE2EDuration="3.548294593s" podCreationTimestamp="2025-09-12 17:42:29 +0000 UTC" firstStartedPulling="2025-09-12 17:42:29.584296719 +0000 UTC m=+6.876295026" lastFinishedPulling="2025-09-12 17:42:31.706601751 +0000 UTC m=+8.998600058" observedRunningTime="2025-09-12 17:42:31.881714304 +0000 UTC m=+9.173712651" watchObservedRunningTime="2025-09-12 17:42:32.548294593 +0000 UTC m=+9.840292900" Sep 12 17:42:38.112870 sudo[2220]: pam_unix(sudo:session): session closed for user root Sep 12 17:42:38.196426 sshd[2217]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:38.201979 systemd-logind[1691]: Session 9 logged out. 
Waiting for processes to exit. Sep 12 17:42:38.202072 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:42:38.202240 systemd[1]: session-9.scope: Consumed 7.091s CPU time, 148.2M memory peak, 0B memory swap peak. Sep 12 17:42:38.203069 systemd[1]: sshd@6-10.200.20.41:22-10.200.16.10:60356.service: Deactivated successfully. Sep 12 17:42:38.208033 systemd-logind[1691]: Removed session 9. Sep 12 17:42:46.898376 systemd[1]: Created slice kubepods-besteffort-podcc63edf5_fe97_4aab_a852_e3555b013d0d.slice - libcontainer container kubepods-besteffort-podcc63edf5_fe97_4aab_a852_e3555b013d0d.slice. Sep 12 17:42:46.979855 kubelet[3174]: I0912 17:42:46.979058 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc63edf5-fe97-4aab-a852-e3555b013d0d-typha-certs\") pod \"calico-typha-d7f5754c8-wn467\" (UID: \"cc63edf5-fe97-4aab-a852-e3555b013d0d\") " pod="calico-system/calico-typha-d7f5754c8-wn467" Sep 12 17:42:46.979855 kubelet[3174]: I0912 17:42:46.979104 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc63edf5-fe97-4aab-a852-e3555b013d0d-tigera-ca-bundle\") pod \"calico-typha-d7f5754c8-wn467\" (UID: \"cc63edf5-fe97-4aab-a852-e3555b013d0d\") " pod="calico-system/calico-typha-d7f5754c8-wn467" Sep 12 17:42:46.979855 kubelet[3174]: I0912 17:42:46.979125 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x275f\" (UniqueName: \"kubernetes.io/projected/cc63edf5-fe97-4aab-a852-e3555b013d0d-kube-api-access-x275f\") pod \"calico-typha-d7f5754c8-wn467\" (UID: \"cc63edf5-fe97-4aab-a852-e3555b013d0d\") " pod="calico-system/calico-typha-d7f5754c8-wn467" Sep 12 17:42:47.047847 systemd[1]: Created slice kubepods-besteffort-pod3395131b_d184_49c7_9c08_68bdf30cd062.slice - libcontainer container 
kubepods-besteffort-pod3395131b_d184_49c7_9c08_68bdf30cd062.slice. Sep 12 17:42:47.081057 kubelet[3174]: I0912 17:42:47.080973 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-var-lib-calico\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081057 kubelet[3174]: I0912 17:42:47.081057 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-xtables-lock\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081226 kubelet[3174]: I0912 17:42:47.081092 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-cni-bin-dir\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081226 kubelet[3174]: I0912 17:42:47.081120 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-cni-log-dir\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081226 kubelet[3174]: I0912 17:42:47.081136 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3395131b-d184-49c7-9c08-68bdf30cd062-tigera-ca-bundle\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 
17:42:47.081226 kubelet[3174]: I0912 17:42:47.081152 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-policysync\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081226 kubelet[3174]: I0912 17:42:47.081167 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-cni-net-dir\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081342 kubelet[3174]: I0912 17:42:47.081185 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4wv\" (UniqueName: \"kubernetes.io/projected/3395131b-d184-49c7-9c08-68bdf30cd062-kube-api-access-tg4wv\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081342 kubelet[3174]: I0912 17:42:47.081203 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-lib-modules\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081342 kubelet[3174]: I0912 17:42:47.081218 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3395131b-d184-49c7-9c08-68bdf30cd062-node-certs\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081342 kubelet[3174]: I0912 17:42:47.081233 3174 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-var-run-calico\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.081342 kubelet[3174]: I0912 17:42:47.081257 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3395131b-d184-49c7-9c08-68bdf30cd062-flexvol-driver-host\") pod \"calico-node-4hr7d\" (UID: \"3395131b-d184-49c7-9c08-68bdf30cd062\") " pod="calico-system/calico-node-4hr7d" Sep 12 17:42:47.188822 kubelet[3174]: E0912 17:42:47.184624 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.188822 kubelet[3174]: W0912 17:42:47.185879 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.188822 kubelet[3174]: E0912 17:42:47.185908 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.189185 kubelet[3174]: E0912 17:42:47.189169 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.189389 kubelet[3174]: W0912 17:42:47.189248 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.189389 kubelet[3174]: E0912 17:42:47.189276 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.192889 kubelet[3174]: E0912 17:42:47.189516 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.192889 kubelet[3174]: W0912 17:42:47.189526 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.193414 kubelet[3174]: E0912 17:42:47.193247 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.193414 kubelet[3174]: E0912 17:42:47.193369 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.193414 kubelet[3174]: W0912 17:42:47.193378 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.193414 kubelet[3174]: E0912 17:42:47.193396 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.194380 kubelet[3174]: E0912 17:42:47.193634 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.194380 kubelet[3174]: W0912 17:42:47.193655 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.194380 kubelet[3174]: E0912 17:42:47.193672 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.194380 kubelet[3174]: E0912 17:42:47.193816 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.194380 kubelet[3174]: W0912 17:42:47.193825 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.194380 kubelet[3174]: E0912 17:42:47.193833 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.194380 kubelet[3174]: E0912 17:42:47.193950 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.194380 kubelet[3174]: W0912 17:42:47.193957 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.194380 kubelet[3174]: E0912 17:42:47.193965 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.194380 kubelet[3174]: E0912 17:42:47.194108 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.194747 kubelet[3174]: W0912 17:42:47.194116 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.194747 kubelet[3174]: E0912 17:42:47.194123 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.194969 kubelet[3174]: E0912 17:42:47.194942 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.194969 kubelet[3174]: W0912 17:42:47.194960 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.194969 kubelet[3174]: E0912 17:42:47.194978 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.195253 kubelet[3174]: E0912 17:42:47.195240 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.195391 kubelet[3174]: W0912 17:42:47.195281 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.195391 kubelet[3174]: E0912 17:42:47.195305 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.195692 kubelet[3174]: E0912 17:42:47.195635 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.195692 kubelet[3174]: W0912 17:42:47.195648 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.195692 kubelet[3174]: E0912 17:42:47.195680 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.197288 kubelet[3174]: E0912 17:42:47.197074 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.197288 kubelet[3174]: W0912 17:42:47.197101 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.197288 kubelet[3174]: E0912 17:42:47.197180 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.197410 kubelet[3174]: E0912 17:42:47.197309 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.197410 kubelet[3174]: W0912 17:42:47.197316 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.197452 kubelet[3174]: E0912 17:42:47.197412 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.197579 kubelet[3174]: E0912 17:42:47.197558 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.197579 kubelet[3174]: W0912 17:42:47.197572 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.197758 kubelet[3174]: E0912 17:42:47.197674 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.197988 kubelet[3174]: E0912 17:42:47.197939 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.198046 kubelet[3174]: W0912 17:42:47.197987 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.198046 kubelet[3174]: E0912 17:42:47.198008 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.201977 kubelet[3174]: E0912 17:42:47.201948 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.201977 kubelet[3174]: W0912 17:42:47.201967 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.201977 kubelet[3174]: E0912 17:42:47.201990 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.202290 kubelet[3174]: E0912 17:42:47.202261 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.202349 kubelet[3174]: W0912 17:42:47.202281 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.202349 kubelet[3174]: E0912 17:42:47.202321 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.202682 kubelet[3174]: E0912 17:42:47.202658 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.202744 kubelet[3174]: W0912 17:42:47.202676 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.202744 kubelet[3174]: E0912 17:42:47.202707 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.203134 kubelet[3174]: E0912 17:42:47.202874 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7" Sep 12 17:42:47.207438 containerd[1710]: time="2025-09-12T17:42:47.207068364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d7f5754c8-wn467,Uid:cc63edf5-fe97-4aab-a852-e3555b013d0d,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:47.226935 kubelet[3174]: E0912 17:42:47.225595 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.226935 kubelet[3174]: W0912 17:42:47.225628 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.226935 kubelet[3174]: E0912 17:42:47.225647 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.262391 containerd[1710]: time="2025-09-12T17:42:47.261363199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:47.262391 containerd[1710]: time="2025-09-12T17:42:47.262314641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:47.262391 containerd[1710]: time="2025-09-12T17:42:47.262333801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:47.263641 containerd[1710]: time="2025-09-12T17:42:47.262429442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:47.267681 kubelet[3174]: E0912 17:42:47.267562 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.267681 kubelet[3174]: W0912 17:42:47.267595 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.267681 kubelet[3174]: E0912 17:42:47.267617 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.268860 kubelet[3174]: E0912 17:42:47.267843 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.268860 kubelet[3174]: W0912 17:42:47.267857 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.268860 kubelet[3174]: E0912 17:42:47.267868 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.268860 kubelet[3174]: E0912 17:42:47.268180 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.268860 kubelet[3174]: W0912 17:42:47.268191 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.268860 kubelet[3174]: E0912 17:42:47.268202 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.268860 kubelet[3174]: E0912 17:42:47.268579 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.268860 kubelet[3174]: W0912 17:42:47.268590 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.268860 kubelet[3174]: E0912 17:42:47.268601 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.269942 kubelet[3174]: E0912 17:42:47.269903 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.269942 kubelet[3174]: W0912 17:42:47.269935 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.270046 kubelet[3174]: E0912 17:42:47.269949 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.270324 kubelet[3174]: E0912 17:42:47.270170 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.270324 kubelet[3174]: W0912 17:42:47.270184 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.270324 kubelet[3174]: E0912 17:42:47.270194 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.270412 kubelet[3174]: E0912 17:42:47.270354 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.270412 kubelet[3174]: W0912 17:42:47.270363 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.270412 kubelet[3174]: E0912 17:42:47.270371 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.270558 kubelet[3174]: E0912 17:42:47.270540 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.270558 kubelet[3174]: W0912 17:42:47.270553 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.270617 kubelet[3174]: E0912 17:42:47.270561 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.270771 kubelet[3174]: E0912 17:42:47.270749 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.270771 kubelet[3174]: W0912 17:42:47.270762 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.270862 kubelet[3174]: E0912 17:42:47.270772 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.270976 kubelet[3174]: E0912 17:42:47.270957 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.270976 kubelet[3174]: W0912 17:42:47.270971 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.271037 kubelet[3174]: E0912 17:42:47.270989 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.271155 kubelet[3174]: E0912 17:42:47.271134 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.271155 kubelet[3174]: W0912 17:42:47.271147 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.271155 kubelet[3174]: E0912 17:42:47.271155 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.271413 kubelet[3174]: E0912 17:42:47.271389 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.271413 kubelet[3174]: W0912 17:42:47.271405 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.271413 kubelet[3174]: E0912 17:42:47.271415 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.272881 kubelet[3174]: E0912 17:42:47.271921 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.272881 kubelet[3174]: W0912 17:42:47.271938 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.272881 kubelet[3174]: E0912 17:42:47.271949 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.273079 kubelet[3174]: E0912 17:42:47.273055 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.273079 kubelet[3174]: W0912 17:42:47.273076 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.273310 kubelet[3174]: E0912 17:42:47.273100 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.273310 kubelet[3174]: E0912 17:42:47.273279 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.273310 kubelet[3174]: W0912 17:42:47.273287 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.273310 kubelet[3174]: E0912 17:42:47.273296 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.273567 kubelet[3174]: E0912 17:42:47.273437 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.273567 kubelet[3174]: W0912 17:42:47.273451 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.273567 kubelet[3174]: E0912 17:42:47.273459 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.273652 kubelet[3174]: E0912 17:42:47.273603 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.273652 kubelet[3174]: W0912 17:42:47.273611 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.273652 kubelet[3174]: E0912 17:42:47.273619 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.274052 kubelet[3174]: E0912 17:42:47.273741 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.274052 kubelet[3174]: W0912 17:42:47.273749 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.274052 kubelet[3174]: E0912 17:42:47.273756 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.274052 kubelet[3174]: E0912 17:42:47.273899 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.274052 kubelet[3174]: W0912 17:42:47.273907 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.274052 kubelet[3174]: E0912 17:42:47.273917 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.274052 kubelet[3174]: E0912 17:42:47.274056 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.274202 kubelet[3174]: W0912 17:42:47.274064 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.274202 kubelet[3174]: E0912 17:42:47.274072 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.284382 kubelet[3174]: E0912 17:42:47.284198 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.284382 kubelet[3174]: W0912 17:42:47.284219 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.284382 kubelet[3174]: E0912 17:42:47.284234 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.284382 kubelet[3174]: I0912 17:42:47.284260 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93bea3a5-ac3c-4316-b474-aae40e5f08e7-kubelet-dir\") pod \"csi-node-driver-dp2wf\" (UID: \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\") " pod="calico-system/csi-node-driver-dp2wf" Sep 12 17:42:47.285981 kubelet[3174]: E0912 17:42:47.285102 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.285981 kubelet[3174]: W0912 17:42:47.285121 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.285981 kubelet[3174]: E0912 17:42:47.285144 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.285981 kubelet[3174]: I0912 17:42:47.285162 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s42ng\" (UniqueName: \"kubernetes.io/projected/93bea3a5-ac3c-4316-b474-aae40e5f08e7-kube-api-access-s42ng\") pod \"csi-node-driver-dp2wf\" (UID: \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\") " pod="calico-system/csi-node-driver-dp2wf" Sep 12 17:42:47.285981 kubelet[3174]: E0912 17:42:47.285932 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.285981 kubelet[3174]: W0912 17:42:47.285945 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.285981 kubelet[3174]: E0912 17:42:47.285978 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.286172 kubelet[3174]: I0912 17:42:47.285997 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/93bea3a5-ac3c-4316-b474-aae40e5f08e7-socket-dir\") pod \"csi-node-driver-dp2wf\" (UID: \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\") " pod="calico-system/csi-node-driver-dp2wf" Sep 12 17:42:47.286238 kubelet[3174]: E0912 17:42:47.286213 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.286238 kubelet[3174]: W0912 17:42:47.286230 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.286330 kubelet[3174]: E0912 17:42:47.286312 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.286368 kubelet[3174]: I0912 17:42:47.286335 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/93bea3a5-ac3c-4316-b474-aae40e5f08e7-registration-dir\") pod \"csi-node-driver-dp2wf\" (UID: \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\") " pod="calico-system/csi-node-driver-dp2wf" Sep 12 17:42:47.286469 kubelet[3174]: E0912 17:42:47.286449 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.286469 kubelet[3174]: W0912 17:42:47.286461 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.286561 kubelet[3174]: E0912 17:42:47.286540 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.286738 kubelet[3174]: E0912 17:42:47.286717 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.286738 kubelet[3174]: W0912 17:42:47.286732 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.287969 kubelet[3174]: E0912 17:42:47.287558 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.287969 kubelet[3174]: E0912 17:42:47.287705 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.287969 kubelet[3174]: W0912 17:42:47.287715 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.287969 kubelet[3174]: E0912 17:42:47.287733 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.287969 kubelet[3174]: E0912 17:42:47.287927 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.287969 kubelet[3174]: W0912 17:42:47.287935 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.287969 kubelet[3174]: E0912 17:42:47.287947 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.287969 kubelet[3174]: I0912 17:42:47.287964 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/93bea3a5-ac3c-4316-b474-aae40e5f08e7-varrun\") pod \"csi-node-driver-dp2wf\" (UID: \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\") " pod="calico-system/csi-node-driver-dp2wf" Sep 12 17:42:47.288201 kubelet[3174]: E0912 17:42:47.288110 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.288201 kubelet[3174]: W0912 17:42:47.288119 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.288201 kubelet[3174]: E0912 17:42:47.288133 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.288342 kubelet[3174]: E0912 17:42:47.288280 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.288342 kubelet[3174]: W0912 17:42:47.288289 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.288342 kubelet[3174]: E0912 17:42:47.288297 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.289130 kubelet[3174]: E0912 17:42:47.288480 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.289130 kubelet[3174]: W0912 17:42:47.288503 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.289130 kubelet[3174]: E0912 17:42:47.288516 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.289130 kubelet[3174]: E0912 17:42:47.289076 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.289130 kubelet[3174]: W0912 17:42:47.289087 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.289130 kubelet[3174]: E0912 17:42:47.289098 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.290349 kubelet[3174]: E0912 17:42:47.290106 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.290349 kubelet[3174]: W0912 17:42:47.290117 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.290349 kubelet[3174]: E0912 17:42:47.290129 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.290427 kubelet[3174]: E0912 17:42:47.290418 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.290450 kubelet[3174]: W0912 17:42:47.290428 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.290450 kubelet[3174]: E0912 17:42:47.290439 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.290672 kubelet[3174]: E0912 17:42:47.290650 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.290672 kubelet[3174]: W0912 17:42:47.290665 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.290807 kubelet[3174]: E0912 17:42:47.290677 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.300159 systemd[1]: Started cri-containerd-e7490a5d7541d99d226c93617c67362304a11ad87b8bb48f32f984714842ee1c.scope - libcontainer container e7490a5d7541d99d226c93617c67362304a11ad87b8bb48f32f984714842ee1c. Sep 12 17:42:47.351740 containerd[1710]: time="2025-09-12T17:42:47.351350855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4hr7d,Uid:3395131b-d184-49c7-9c08-68bdf30cd062,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:47.391241 kubelet[3174]: E0912 17:42:47.391200 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.391379 kubelet[3174]: W0912 17:42:47.391224 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.391379 kubelet[3174]: E0912 17:42:47.391351 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.392358 kubelet[3174]: E0912 17:42:47.392152 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.392358 kubelet[3174]: W0912 17:42:47.392171 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.392358 kubelet[3174]: E0912 17:42:47.392192 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.393262 kubelet[3174]: E0912 17:42:47.392497 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.393262 kubelet[3174]: W0912 17:42:47.392516 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.393262 kubelet[3174]: E0912 17:42:47.392534 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.393262 kubelet[3174]: E0912 17:42:47.392990 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.393262 kubelet[3174]: W0912 17:42:47.393001 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.393262 kubelet[3174]: E0912 17:42:47.393183 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.393662 kubelet[3174]: E0912 17:42:47.393631 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.393662 kubelet[3174]: W0912 17:42:47.393648 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.393662 kubelet[3174]: E0912 17:42:47.393664 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.394863 kubelet[3174]: E0912 17:42:47.394301 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.394863 kubelet[3174]: W0912 17:42:47.394333 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.394863 kubelet[3174]: E0912 17:42:47.394415 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.394863 kubelet[3174]: E0912 17:42:47.394580 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.394863 kubelet[3174]: W0912 17:42:47.394588 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.394863 kubelet[3174]: E0912 17:42:47.394674 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.395042 kubelet[3174]: E0912 17:42:47.394880 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.395042 kubelet[3174]: W0912 17:42:47.394995 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.395083 kubelet[3174]: E0912 17:42:47.395072 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.395886 kubelet[3174]: E0912 17:42:47.395503 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.395886 kubelet[3174]: W0912 17:42:47.395521 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.398757 kubelet[3174]: E0912 17:42:47.397833 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.398757 kubelet[3174]: E0912 17:42:47.397991 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.398757 kubelet[3174]: W0912 17:42:47.398000 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.398757 kubelet[3174]: E0912 17:42:47.398030 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.398757 kubelet[3174]: E0912 17:42:47.398186 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.398757 kubelet[3174]: W0912 17:42:47.398194 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.398757 kubelet[3174]: E0912 17:42:47.398275 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.398757 kubelet[3174]: E0912 17:42:47.398374 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.398757 kubelet[3174]: W0912 17:42:47.398381 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.398757 kubelet[3174]: E0912 17:42:47.398462 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.399019 kubelet[3174]: E0912 17:42:47.398594 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.399019 kubelet[3174]: W0912 17:42:47.398601 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.399019 kubelet[3174]: E0912 17:42:47.398658 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.399019 kubelet[3174]: E0912 17:42:47.398766 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.399019 kubelet[3174]: W0912 17:42:47.398773 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.399019 kubelet[3174]: E0912 17:42:47.398784 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.399019 kubelet[3174]: E0912 17:42:47.398956 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.399019 kubelet[3174]: W0912 17:42:47.398964 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.399019 kubelet[3174]: E0912 17:42:47.398978 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.399199 kubelet[3174]: E0912 17:42:47.399169 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.399199 kubelet[3174]: W0912 17:42:47.399178 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.399199 kubelet[3174]: E0912 17:42:47.399192 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.402723 kubelet[3174]: E0912 17:42:47.399482 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.402723 kubelet[3174]: W0912 17:42:47.399500 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.402723 kubelet[3174]: E0912 17:42:47.399560 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.402723 kubelet[3174]: E0912 17:42:47.400156 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.402723 kubelet[3174]: W0912 17:42:47.400168 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.402723 kubelet[3174]: E0912 17:42:47.400183 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.402723 kubelet[3174]: E0912 17:42:47.400654 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.402723 kubelet[3174]: W0912 17:42:47.400666 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.402723 kubelet[3174]: E0912 17:42:47.400684 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.402723 kubelet[3174]: E0912 17:42:47.401109 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.403027 kubelet[3174]: W0912 17:42:47.401121 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.403027 kubelet[3174]: E0912 17:42:47.401207 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.403027 kubelet[3174]: E0912 17:42:47.401474 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.403027 kubelet[3174]: W0912 17:42:47.401486 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.403027 kubelet[3174]: E0912 17:42:47.401508 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.403694 kubelet[3174]: E0912 17:42:47.403539 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.403891 kubelet[3174]: W0912 17:42:47.403865 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.403926 kubelet[3174]: E0912 17:42:47.403897 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.406067 kubelet[3174]: E0912 17:42:47.406032 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.406067 kubelet[3174]: W0912 17:42:47.406053 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.406527 kubelet[3174]: E0912 17:42:47.406494 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.407188 kubelet[3174]: E0912 17:42:47.407162 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.407188 kubelet[3174]: W0912 17:42:47.407180 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.407282 kubelet[3174]: E0912 17:42:47.407196 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:47.408057 kubelet[3174]: E0912 17:42:47.408027 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.408057 kubelet[3174]: W0912 17:42:47.408047 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.408057 kubelet[3174]: E0912 17:42:47.408061 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.415619 containerd[1710]: time="2025-09-12T17:42:47.415572319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d7f5754c8-wn467,Uid:cc63edf5-fe97-4aab-a852-e3555b013d0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7490a5d7541d99d226c93617c67362304a11ad87b8bb48f32f984714842ee1c\"" Sep 12 17:42:47.421829 containerd[1710]: time="2025-09-12T17:42:47.420272612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:42:47.433608 containerd[1710]: time="2025-09-12T17:42:47.433498930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:47.433608 containerd[1710]: time="2025-09-12T17:42:47.433572930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:47.433934 containerd[1710]: time="2025-09-12T17:42:47.433584850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:47.434181 containerd[1710]: time="2025-09-12T17:42:47.434128132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:47.447288 kubelet[3174]: E0912 17:42:47.445330 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:47.447288 kubelet[3174]: W0912 17:42:47.445835 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:47.447288 kubelet[3174]: E0912 17:42:47.445859 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:47.462576 systemd[1]: Started cri-containerd-e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d.scope - libcontainer container e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d. Sep 12 17:42:47.517522 containerd[1710]: time="2025-09-12T17:42:47.517485129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4hr7d,Uid:3395131b-d184-49c7-9c08-68bdf30cd062,Namespace:calico-system,Attempt:0,} returns sandbox id \"e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d\"" Sep 12 17:42:48.790044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578398219.mount: Deactivated successfully. Sep 12 17:42:48.815228 kubelet[3174]: E0912 17:42:48.814181 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7" Sep 12 17:42:49.798813 containerd[1710]: time="2025-09-12T17:42:49.798751879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:49.802535 containerd[1710]: time="2025-09-12T17:42:49.802394969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 17:42:49.810123 containerd[1710]: time="2025-09-12T17:42:49.808973508Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:49.814315 containerd[1710]: time="2025-09-12T17:42:49.813498841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:49.814315 containerd[1710]: time="2025-09-12T17:42:49.814204763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.393896751s" Sep 12 17:42:49.814315 containerd[1710]: time="2025-09-12T17:42:49.814235123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 17:42:49.815991 containerd[1710]: time="2025-09-12T17:42:49.815968408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:42:49.830856 containerd[1710]: time="2025-09-12T17:42:49.830821210Z" level=info msg="CreateContainer within sandbox \"e7490a5d7541d99d226c93617c67362304a11ad87b8bb48f32f984714842ee1c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:42:49.872870 containerd[1710]: time="2025-09-12T17:42:49.872828090Z" level=info msg="CreateContainer within sandbox \"e7490a5d7541d99d226c93617c67362304a11ad87b8bb48f32f984714842ee1c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"11988b993c562134f2067025ec6347052eddff84dfea3db118609b2bb5602c12\"" Sep 12 17:42:49.873652 containerd[1710]: time="2025-09-12T17:42:49.873627773Z" level=info msg="StartContainer for \"11988b993c562134f2067025ec6347052eddff84dfea3db118609b2bb5602c12\"" Sep 12 17:42:49.905047 systemd[1]: Started cri-containerd-11988b993c562134f2067025ec6347052eddff84dfea3db118609b2bb5602c12.scope - libcontainer container 11988b993c562134f2067025ec6347052eddff84dfea3db118609b2bb5602c12. 
Sep 12 17:42:49.943696 containerd[1710]: time="2025-09-12T17:42:49.943638372Z" level=info msg="StartContainer for \"11988b993c562134f2067025ec6347052eddff84dfea3db118609b2bb5602c12\" returns successfully"
Sep 12 17:42:50.813455 kubelet[3174]: E0912 17:42:50.813404 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7"
Sep 12 17:42:50.973205 kubelet[3174]: I0912 17:42:50.973128 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d7f5754c8-wn467" podStartSLOduration=2.576649512 podStartE2EDuration="4.97301159s" podCreationTimestamp="2025-09-12 17:42:46 +0000 UTC" firstStartedPulling="2025-09-12 17:42:47.419021808 +0000 UTC m=+24.711020115" lastFinishedPulling="2025-09-12 17:42:49.815383846 +0000 UTC m=+27.107382193" observedRunningTime="2025-09-12 17:42:50.938321931 +0000 UTC m=+28.230320238" watchObservedRunningTime="2025-09-12 17:42:50.97301159 +0000 UTC m=+28.265009897"
Sep 12 17:42:50.998395 kubelet[3174]: E0912 17:42:50.998347 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:50.998395 kubelet[3174]: W0912 17:42:50.998383 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:50.998577 kubelet[3174]: E0912 17:42:50.998406 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:50.998679 kubelet[3174]: E0912 17:42:50.998653 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:50.998737 kubelet[3174]: W0912 17:42:50.998671 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:50.998737 kubelet[3174]: E0912 17:42:50.998693 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:50.999697 kubelet[3174]: E0912 17:42:50.999650 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:50.999779 kubelet[3174]: W0912 17:42:50.999683 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:50.999779 kubelet[3174]: E0912 17:42:50.999730 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.000029 kubelet[3174]: E0912 17:42:51.000007 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.000029 kubelet[3174]: W0912 17:42:51.000023 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.000106 kubelet[3174]: E0912 17:42:51.000033 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.000262 kubelet[3174]: E0912 17:42:51.000227 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.000262 kubelet[3174]: W0912 17:42:51.000245 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.000262 kubelet[3174]: E0912 17:42:51.000254 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.000434 kubelet[3174]: E0912 17:42:51.000413 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.000434 kubelet[3174]: W0912 17:42:51.000429 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.000487 kubelet[3174]: E0912 17:42:51.000447 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.000623 kubelet[3174]: E0912 17:42:51.000600 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.000623 kubelet[3174]: W0912 17:42:51.000614 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.000623 kubelet[3174]: E0912 17:42:51.000623 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.001324 kubelet[3174]: E0912 17:42:51.000784 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.001324 kubelet[3174]: W0912 17:42:51.000813 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.001324 kubelet[3174]: E0912 17:42:51.000822 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.001324 kubelet[3174]: E0912 17:42:51.001039 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.001324 kubelet[3174]: W0912 17:42:51.001049 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.001324 kubelet[3174]: E0912 17:42:51.001068 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.001324 kubelet[3174]: E0912 17:42:51.001235 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.001324 kubelet[3174]: W0912 17:42:51.001245 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.001324 kubelet[3174]: E0912 17:42:51.001253 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.001740 kubelet[3174]: E0912 17:42:51.001718 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.001740 kubelet[3174]: W0912 17:42:51.001733 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.001840 kubelet[3174]: E0912 17:42:51.001744 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.001990 kubelet[3174]: E0912 17:42:51.001969 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.001990 kubelet[3174]: W0912 17:42:51.001985 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.002064 kubelet[3174]: E0912 17:42:51.001994 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.002187 kubelet[3174]: E0912 17:42:51.002169 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.002187 kubelet[3174]: W0912 17:42:51.002182 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.002249 kubelet[3174]: E0912 17:42:51.002191 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.002661 kubelet[3174]: E0912 17:42:51.002379 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.002661 kubelet[3174]: W0912 17:42:51.002393 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.002661 kubelet[3174]: E0912 17:42:51.002409 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.002661 kubelet[3174]: E0912 17:42:51.002595 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.002661 kubelet[3174]: W0912 17:42:51.002619 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.002661 kubelet[3174]: E0912 17:42:51.002629 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.025659 kubelet[3174]: E0912 17:42:51.025625 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.025659 kubelet[3174]: W0912 17:42:51.025650 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.025920 kubelet[3174]: E0912 17:42:51.025672 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.026196 kubelet[3174]: E0912 17:42:51.025940 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.026196 kubelet[3174]: W0912 17:42:51.025956 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.026196 kubelet[3174]: E0912 17:42:51.025969 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.026196 kubelet[3174]: E0912 17:42:51.026153 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.026196 kubelet[3174]: W0912 17:42:51.026164 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.026196 kubelet[3174]: E0912 17:42:51.026179 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.026474 kubelet[3174]: E0912 17:42:51.026377 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.026474 kubelet[3174]: W0912 17:42:51.026392 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.026474 kubelet[3174]: E0912 17:42:51.026407 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.026756 kubelet[3174]: E0912 17:42:51.026549 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.026756 kubelet[3174]: W0912 17:42:51.026556 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.026756 kubelet[3174]: E0912 17:42:51.026571 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.026756 kubelet[3174]: E0912 17:42:51.026745 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.026756 kubelet[3174]: W0912 17:42:51.026752 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.026756 kubelet[3174]: E0912 17:42:51.026765 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.027192 kubelet[3174]: E0912 17:42:51.027097 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.027192 kubelet[3174]: W0912 17:42:51.027115 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.027192 kubelet[3174]: E0912 17:42:51.027137 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.027496 kubelet[3174]: E0912 17:42:51.027434 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.027496 kubelet[3174]: W0912 17:42:51.027449 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.027496 kubelet[3174]: E0912 17:42:51.027474 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.027902 kubelet[3174]: E0912 17:42:51.027757 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.027902 kubelet[3174]: W0912 17:42:51.027772 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.027902 kubelet[3174]: E0912 17:42:51.027808 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.028137 kubelet[3174]: E0912 17:42:51.028079 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.028137 kubelet[3174]: W0912 17:42:51.028091 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.028137 kubelet[3174]: E0912 17:42:51.028117 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.028649 kubelet[3174]: E0912 17:42:51.028546 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.028649 kubelet[3174]: W0912 17:42:51.028562 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.028649 kubelet[3174]: E0912 17:42:51.028598 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.029162 kubelet[3174]: E0912 17:42:51.029063 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.029162 kubelet[3174]: W0912 17:42:51.029080 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.029162 kubelet[3174]: E0912 17:42:51.029104 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.029522 kubelet[3174]: E0912 17:42:51.029430 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.029522 kubelet[3174]: W0912 17:42:51.029442 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.029522 kubelet[3174]: E0912 17:42:51.029455 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.029995 kubelet[3174]: E0912 17:42:51.029788 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.029995 kubelet[3174]: W0912 17:42:51.029872 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.029995 kubelet[3174]: E0912 17:42:51.029887 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.030315 kubelet[3174]: E0912 17:42:51.030238 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.030315 kubelet[3174]: W0912 17:42:51.030251 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.030315 kubelet[3174]: E0912 17:42:51.030295 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.030685 kubelet[3174]: E0912 17:42:51.030591 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.030685 kubelet[3174]: W0912 17:42:51.030605 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.030685 kubelet[3174]: E0912 17:42:51.030626 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.031120 kubelet[3174]: E0912 17:42:51.030979 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.031120 kubelet[3174]: W0912 17:42:51.030995 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.031120 kubelet[3174]: E0912 17:42:51.031015 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.031320 kubelet[3174]: E0912 17:42:51.031273 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:42:51.031320 kubelet[3174]: W0912 17:42:51.031287 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:42:51.031320 kubelet[3174]: E0912 17:42:51.031298 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:42:51.568770 containerd[1710]: time="2025-09-12T17:42:51.568732970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:51.572479 containerd[1710]: time="2025-09-12T17:42:51.572448740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 12 17:42:51.576833 containerd[1710]: time="2025-09-12T17:42:51.576229231Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:51.581955 containerd[1710]: time="2025-09-12T17:42:51.580980725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:51.581955 containerd[1710]: time="2025-09-12T17:42:51.581676207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.765594039s"
Sep 12 17:42:51.581955 containerd[1710]: time="2025-09-12T17:42:51.581703967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 12 17:42:51.584150 containerd[1710]: time="2025-09-12T17:42:51.584005533Z" level=info msg="CreateContainer within sandbox \"e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 17:42:51.622184 containerd[1710]: time="2025-09-12T17:42:51.622118562Z" level=info msg="CreateContainer within sandbox \"e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3\""
Sep 12 17:42:51.623432 containerd[1710]: time="2025-09-12T17:42:51.623208925Z" level=info msg="StartContainer for \"320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3\""
Sep 12 17:42:51.655959 systemd[1]: Started cri-containerd-320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3.scope - libcontainer container 320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3.
Sep 12 17:42:51.689666 containerd[1710]: time="2025-09-12T17:42:51.689504034Z" level=info msg="StartContainer for \"320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3\" returns successfully"
Sep 12 17:42:51.698694 systemd[1]: cri-containerd-320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3.scope: Deactivated successfully.
Sep 12 17:42:51.720350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3-rootfs.mount: Deactivated successfully.
Sep 12 17:42:52.734559 containerd[1710]: time="2025-09-12T17:42:52.734489896Z" level=info msg="shim disconnected" id=320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3 namespace=k8s.io
Sep 12 17:42:52.734559 containerd[1710]: time="2025-09-12T17:42:52.734570696Z" level=warning msg="cleaning up after shim disconnected" id=320f4dc50438df0c4ebf46ade03c532a78955fc981f3a8530802fe881c5b33a3 namespace=k8s.io
Sep 12 17:42:52.735070 containerd[1710]: time="2025-09-12T17:42:52.734582096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:42:52.814844 kubelet[3174]: E0912 17:42:52.814362 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7"
Sep 12 17:42:52.924428 containerd[1710]: time="2025-09-12T17:42:52.923196755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:42:54.813439 kubelet[3174]: E0912 17:42:54.813386 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7"
Sep 12 17:42:56.604552 containerd[1710]: time="2025-09-12T17:42:56.604502979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:56.607746 containerd[1710]: time="2025-09-12T17:42:56.607708062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 12 17:42:56.613852 containerd[1710]: time="2025-09-12T17:42:56.613763148Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:56.618933 containerd[1710]: time="2025-09-12T17:42:56.618191992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:56.618933 containerd[1710]: time="2025-09-12T17:42:56.618820272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.695564277s"
Sep 12 17:42:56.618933 containerd[1710]: time="2025-09-12T17:42:56.618848072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 12 17:42:56.622014 containerd[1710]: time="2025-09-12T17:42:56.621977075Z" level=info msg="CreateContainer within sandbox \"e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:42:56.665648 containerd[1710]: time="2025-09-12T17:42:56.665519675Z" level=info msg="CreateContainer within sandbox \"e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59\""
Sep 12 17:42:56.668360 containerd[1710]: time="2025-09-12T17:42:56.667041076Z" level=info msg="StartContainer for \"c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59\""
Sep 12 17:42:56.692393 systemd[1]: run-containerd-runc-k8s.io-c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59-runc.ZOxFVJ.mount: Deactivated successfully.
Sep 12 17:42:56.696996 systemd[1]: Started cri-containerd-c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59.scope - libcontainer container c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59.
Sep 12 17:42:56.725482 containerd[1710]: time="2025-09-12T17:42:56.725414810Z" level=info msg="StartContainer for \"c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59\" returns successfully"
Sep 12 17:42:56.814106 kubelet[3174]: E0912 17:42:56.814049 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7"
Sep 12 17:42:58.201045 containerd[1710]: time="2025-09-12T17:42:58.200993176Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:42:58.210262 systemd[1]: cri-containerd-c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59.scope: Deactivated successfully.
Sep 12 17:42:58.229088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59-rootfs.mount: Deactivated successfully.
Sep 12 17:42:58.286638 kubelet[3174]: I0912 17:42:58.286590 3174 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 17:42:58.458786 kubelet[3174]: I0912 17:42:58.375051 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnqpt\" (UniqueName: \"kubernetes.io/projected/b2903050-6e84-45bd-9086-8adaeb871d43-kube-api-access-bnqpt\") pod \"goldmane-7988f88666-fggbj\" (UID: \"b2903050-6e84-45bd-9086-8adaeb871d43\") " pod="calico-system/goldmane-7988f88666-fggbj"
Sep 12 17:42:58.458786 kubelet[3174]: I0912 17:42:58.375085 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzk8v\" (UniqueName: \"kubernetes.io/projected/34236242-7d33-47ad-b860-36c4a84495b0-kube-api-access-wzk8v\") pod \"coredns-7c65d6cfc9-z8zzj\" (UID: \"34236242-7d33-47ad-b860-36c4a84495b0\") " pod="kube-system/coredns-7c65d6cfc9-z8zzj"
Sep 12 17:42:58.458786 kubelet[3174]: I0912 17:42:58.375117 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dm68\" (UniqueName: \"kubernetes.io/projected/34b1f8d5-6c1c-4f8f-9138-d677d0b599a6-kube-api-access-4dm68\") pod \"coredns-7c65d6cfc9-8zkgx\" (UID: \"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6\") " pod="kube-system/coredns-7c65d6cfc9-8zkgx"
Sep 12 17:42:58.458786 kubelet[3174]: I0912 17:42:58.375137 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2903050-6e84-45bd-9086-8adaeb871d43-config\") pod \"goldmane-7988f88666-fggbj\" (UID: \"b2903050-6e84-45bd-9086-8adaeb871d43\") " pod="calico-system/goldmane-7988f88666-fggbj"
Sep 12 17:42:58.458786 kubelet[3174]: I0912 17:42:58.375154 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b2903050-6e84-45bd-9086-8adaeb871d43-goldmane-key-pair\") pod \"goldmane-7988f88666-fggbj\" (UID: \"b2903050-6e84-45bd-9086-8adaeb871d43\") " pod="calico-system/goldmane-7988f88666-fggbj"
Sep 12 17:42:58.339854 systemd[1]: Created slice kubepods-burstable-pod34236242_7d33_47ad_b860_36c4a84495b0.slice - libcontainer container kubepods-burstable-pod34236242_7d33_47ad_b860_36c4a84495b0.slice.
Sep 12 17:42:58.459095 kubelet[3174]: I0912 17:42:58.375172 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgkpn\" (UniqueName: \"kubernetes.io/projected/eacf3852-e8b2-4417-862e-486f289649f2-kube-api-access-mgkpn\") pod \"calico-apiserver-77d7968c8d-4l6tx\" (UID: \"eacf3852-e8b2-4417-862e-486f289649f2\") " pod="calico-apiserver/calico-apiserver-77d7968c8d-4l6tx"
Sep 12 17:42:58.459095 kubelet[3174]: I0912 17:42:58.375188 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b00cb80-9491-4739-9630-df8b4e81d405-whisker-ca-bundle\") pod \"whisker-79bf995685-p7788\" (UID: \"1b00cb80-9491-4739-9630-df8b4e81d405\") " pod="calico-system/whisker-79bf995685-p7788"
Sep 12 17:42:58.459095 kubelet[3174]: I0912 17:42:58.375204 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba349ebc-a27b-4172-bfad-7c73a43c2008-tigera-ca-bundle\") pod \"calico-kube-controllers-84db5c8b4c-5nsqv\" (UID: \"ba349ebc-a27b-4172-bfad-7c73a43c2008\") " pod="calico-system/calico-kube-controllers-84db5c8b4c-5nsqv"
Sep 12 17:42:58.459095 kubelet[3174]: I0912 17:42:58.375225 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34b1f8d5-6c1c-4f8f-9138-d677d0b599a6-config-volume\") pod \"coredns-7c65d6cfc9-8zkgx\" (UID: \"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6\") " pod="kube-system/coredns-7c65d6cfc9-8zkgx"
Sep 12 17:42:58.459095 kubelet[3174]: I0912 17:42:58.375245 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/485bd820-ef8d-438d-9780-7a5190802f4b-calico-apiserver-certs\") pod \"calico-apiserver-77d7968c8d-pt2kx\" (UID: \"485bd820-ef8d-438d-9780-7a5190802f4b\") " pod="calico-apiserver/calico-apiserver-77d7968c8d-pt2kx"
Sep 12 17:42:58.351316 systemd[1]: Created slice kubepods-besteffort-pod1b00cb80_9491_4739_9630_df8b4e81d405.slice - libcontainer container kubepods-besteffort-pod1b00cb80_9491_4739_9630_df8b4e81d405.slice.
Sep 12 17:42:58.459248 kubelet[3174]: I0912 17:42:58.375263 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eacf3852-e8b2-4417-862e-486f289649f2-calico-apiserver-certs\") pod \"calico-apiserver-77d7968c8d-4l6tx\" (UID: \"eacf3852-e8b2-4417-862e-486f289649f2\") " pod="calico-apiserver/calico-apiserver-77d7968c8d-4l6tx"
Sep 12 17:42:58.459248 kubelet[3174]: I0912 17:42:58.375280 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1b00cb80-9491-4739-9630-df8b4e81d405-whisker-backend-key-pair\") pod \"whisker-79bf995685-p7788\" (UID: \"1b00cb80-9491-4739-9630-df8b4e81d405\") " pod="calico-system/whisker-79bf995685-p7788"
Sep 12 17:42:58.459248 kubelet[3174]: I0912 17:42:58.375305 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34236242-7d33-47ad-b860-36c4a84495b0-config-volume\") pod \"coredns-7c65d6cfc9-z8zzj\" (UID: \"34236242-7d33-47ad-b860-36c4a84495b0\") " pod="kube-system/coredns-7c65d6cfc9-z8zzj"
Sep 12 17:42:58.459248 kubelet[3174]: I0912 17:42:58.375717 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2903050-6e84-45bd-9086-8adaeb871d43-goldmane-ca-bundle\") pod \"goldmane-7988f88666-fggbj\" (UID: \"b2903050-6e84-45bd-9086-8adaeb871d43\") " pod="calico-system/goldmane-7988f88666-fggbj"
Sep 12 17:42:58.459248 kubelet[3174]: I0912 17:42:58.375752 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jwr\" (UniqueName: \"kubernetes.io/projected/1b00cb80-9491-4739-9630-df8b4e81d405-kube-api-access-c4jwr\") pod \"whisker-79bf995685-p7788\" (UID: \"1b00cb80-9491-4739-9630-df8b4e81d405\") " pod="calico-system/whisker-79bf995685-p7788"
Sep 12 17:42:58.365120 systemd[1]: Created slice kubepods-burstable-pod34b1f8d5_6c1c_4f8f_9138_d677d0b599a6.slice - libcontainer container kubepods-burstable-pod34b1f8d5_6c1c_4f8f_9138_d677d0b599a6.slice.
Sep 12 17:42:58.459396 kubelet[3174]: I0912 17:42:58.375836 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbrz\" (UniqueName: \"kubernetes.io/projected/ba349ebc-a27b-4172-bfad-7c73a43c2008-kube-api-access-kxbrz\") pod \"calico-kube-controllers-84db5c8b4c-5nsqv\" (UID: \"ba349ebc-a27b-4172-bfad-7c73a43c2008\") " pod="calico-system/calico-kube-controllers-84db5c8b4c-5nsqv"
Sep 12 17:42:58.459396 kubelet[3174]: I0912 17:42:58.375872 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gvp\" (UniqueName: \"kubernetes.io/projected/485bd820-ef8d-438d-9780-7a5190802f4b-kube-api-access-w6gvp\") pod \"calico-apiserver-77d7968c8d-pt2kx\" (UID: \"485bd820-ef8d-438d-9780-7a5190802f4b\") " pod="calico-apiserver/calico-apiserver-77d7968c8d-pt2kx"
Sep 12 17:42:58.377656 systemd[1]: Created slice kubepods-besteffort-podba349ebc_a27b_4172_bfad_7c73a43c2008.slice - libcontainer container kubepods-besteffort-podba349ebc_a27b_4172_bfad_7c73a43c2008.slice.
Sep 12 17:42:58.384510 systemd[1]: Created slice kubepods-besteffort-podeacf3852_e8b2_4417_862e_486f289649f2.slice - libcontainer container kubepods-besteffort-podeacf3852_e8b2_4417_862e_486f289649f2.slice.
Sep 12 17:42:58.395317 systemd[1]: Created slice kubepods-besteffort-pod485bd820_ef8d_438d_9780_7a5190802f4b.slice - libcontainer container kubepods-besteffort-pod485bd820_ef8d_438d_9780_7a5190802f4b.slice.
Sep 12 17:42:58.402075 systemd[1]: Created slice kubepods-besteffort-podb2903050_6e84_45bd_9086_8adaeb871d43.slice - libcontainer container kubepods-besteffort-podb2903050_6e84_45bd_9086_8adaeb871d43.slice.
Sep 12 17:42:58.727259 containerd[1710]: time="2025-09-12T17:42:58.727057936Z" level=info msg="shim disconnected" id=c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59 namespace=k8s.io
Sep 12 17:42:58.727259 containerd[1710]: time="2025-09-12T17:42:58.727112536Z" level=warning msg="cleaning up after shim disconnected" id=c7bf416ee6e9ffff1ee5bb5d8d23159163a2da213a5276f3072fd55738bbee59 namespace=k8s.io
Sep 12 17:42:58.727259 containerd[1710]: time="2025-09-12T17:42:58.727121136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:42:58.761840 containerd[1710]: time="2025-09-12T17:42:58.761558846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z8zzj,Uid:34236242-7d33-47ad-b860-36c4a84495b0,Namespace:kube-system,Attempt:0,}"
Sep 12 17:42:58.763467 containerd[1710]: time="2025-09-12T17:42:58.763433372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d7968c8d-pt2kx,Uid:485bd820-ef8d-438d-9780-7a5190802f4b,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:42:58.763672 containerd[1710]: time="2025-09-12T17:42:58.763648733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8zkgx,Uid:34b1f8d5-6c1c-4f8f-9138-d677d0b599a6,Namespace:kube-system,Attempt:0,}"
Sep 12 17:42:58.764486 containerd[1710]: time="2025-09-12T17:42:58.764451575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-fggbj,Uid:b2903050-6e84-45bd-9086-8adaeb871d43,Namespace:calico-system,Attempt:0,}"
Sep 12 17:42:58.764658 containerd[1710]: time="2025-09-12T17:42:58.764629056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bf995685-p7788,Uid:1b00cb80-9491-4739-9630-df8b4e81d405,Namespace:calico-system,Attempt:0,}"
Sep 12 17:42:58.764786 containerd[1710]: time="2025-09-12T17:42:58.764764976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d7968c8d-4l6tx,Uid:eacf3852-e8b2-4417-862e-486f289649f2,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:42:58.764964 containerd[1710]: time="2025-09-12T17:42:58.764941137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84db5c8b4c-5nsqv,Uid:ba349ebc-a27b-4172-bfad-7c73a43c2008,Namespace:calico-system,Attempt:0,}"
Sep 12 17:42:58.822630 systemd[1]: Created slice kubepods-besteffort-pod93bea3a5_ac3c_4316_b474_aae40e5f08e7.slice - libcontainer container kubepods-besteffort-pod93bea3a5_ac3c_4316_b474_aae40e5f08e7.slice.
Sep 12 17:42:58.826009 containerd[1710]: time="2025-09-12T17:42:58.825969692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp2wf,Uid:93bea3a5-ac3c-4316-b474-aae40e5f08e7,Namespace:calico-system,Attempt:0,}"
Sep 12 17:42:58.944369 containerd[1710]: time="2025-09-12T17:42:58.944320669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:42:59.010628 containerd[1710]: time="2025-09-12T17:42:59.010120599Z" level=error msg="Failed to destroy network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.012415 containerd[1710]: time="2025-09-12T17:42:59.012083526Z" level=error msg="encountered an error cleaning up failed sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.012734 containerd[1710]: time="2025-09-12T17:42:59.012431007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z8zzj,Uid:34236242-7d33-47ad-b860-36c4a84495b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.014942 kubelet[3174]: E0912 17:42:59.014875 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.015136 kubelet[3174]: E0912 17:42:59.015117 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-z8zzj"
Sep 12 17:42:59.015224 kubelet[3174]: E0912 17:42:59.015207 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-z8zzj"
Sep 12 17:42:59.015384 kubelet[3174]: E0912 17:42:59.015340 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-z8zzj_kube-system(34236242-7d33-47ad-b860-36c4a84495b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-z8zzj_kube-system(34236242-7d33-47ad-b860-36c4a84495b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-z8zzj" podUID="34236242-7d33-47ad-b860-36c4a84495b0"
Sep 12 17:42:59.155199 containerd[1710]: time="2025-09-12T17:42:59.155145503Z" level=error msg="Failed to destroy network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.155599 containerd[1710]: time="2025-09-12T17:42:59.155568824Z" level=error msg="encountered an error cleaning up failed sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.155729 containerd[1710]: time="2025-09-12T17:42:59.155702064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d7968c8d-pt2kx,Uid:485bd820-ef8d-438d-9780-7a5190802f4b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.156089 kubelet[3174]: E0912 17:42:59.156054 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.156349 kubelet[3174]: E0912 17:42:59.156222 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d7968c8d-pt2kx"
Sep 12 17:42:59.156349 kubelet[3174]: E0912 17:42:59.156247 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d7968c8d-pt2kx"
Sep 12 17:42:59.156349 kubelet[3174]: E0912 17:42:59.156307 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77d7968c8d-pt2kx_calico-apiserver(485bd820-ef8d-438d-9780-7a5190802f4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77d7968c8d-pt2kx_calico-apiserver(485bd820-ef8d-438d-9780-7a5190802f4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d7968c8d-pt2kx" podUID="485bd820-ef8d-438d-9780-7a5190802f4b"
Sep 12 17:42:59.163837 containerd[1710]: time="2025-09-12T17:42:59.163740250Z" level=error msg="Failed to destroy network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.165137 containerd[1710]: time="2025-09-12T17:42:59.165075734Z" level=error msg="encountered an error cleaning up failed sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.165256 containerd[1710]: time="2025-09-12T17:42:59.165149935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bf995685-p7788,Uid:1b00cb80-9491-4739-9630-df8b4e81d405,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.165315 containerd[1710]: time="2025-09-12T17:42:59.165292055Z" level=error msg="Failed to destroy network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.167348 kubelet[3174]: E0912 17:42:59.167273 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.167477 containerd[1710]: time="2025-09-12T17:42:59.166622419Z" level=error msg="encountered an error cleaning up failed sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.168090 containerd[1710]: time="2025-09-12T17:42:59.167463302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8zkgx,Uid:34b1f8d5-6c1c-4f8f-9138-d677d0b599a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.168183 kubelet[3174]: E0912 17:42:59.167835 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bf995685-p7788"
Sep 12 17:42:59.168183 kubelet[3174]: E0912 17:42:59.167861 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bf995685-p7788"
Sep 12 17:42:59.168183 kubelet[3174]: E0912 17:42:59.167925 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79bf995685-p7788_calico-system(1b00cb80-9491-4739-9630-df8b4e81d405)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79bf995685-p7788_calico-system(1b00cb80-9491-4739-9630-df8b4e81d405)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bf995685-p7788" podUID="1b00cb80-9491-4739-9630-df8b4e81d405"
Sep 12 17:42:59.168562 kubelet[3174]: E0912 17:42:59.168396 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.168562 kubelet[3174]: E0912 17:42:59.168437 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8zkgx"
Sep 12 17:42:59.168562 kubelet[3174]: E0912 17:42:59.168453 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8zkgx"
Sep 12 17:42:59.168855 kubelet[3174]: E0912 17:42:59.168482 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8zkgx_kube-system(34b1f8d5-6c1c-4f8f-9138-d677d0b599a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8zkgx_kube-system(34b1f8d5-6c1c-4f8f-9138-d677d0b599a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8zkgx" podUID="34b1f8d5-6c1c-4f8f-9138-d677d0b599a6"
Sep 12 17:42:59.185918 containerd[1710]: time="2025-09-12T17:42:59.185557800Z" level=error msg="Failed to destroy network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.186765 containerd[1710]: time="2025-09-12T17:42:59.186726963Z" level=error msg="encountered an error cleaning up failed sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.186985 containerd[1710]: time="2025-09-12T17:42:59.186960764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d7968c8d-4l6tx,Uid:eacf3852-e8b2-4417-862e-486f289649f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.187342 containerd[1710]: time="2025-09-12T17:42:59.187239885Z" level=error msg="Failed to destroy network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.187641 containerd[1710]: time="2025-09-12T17:42:59.187615766Z" level=error msg="encountered an error cleaning up failed sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.187902 containerd[1710]: time="2025-09-12T17:42:59.187748047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-fggbj,Uid:b2903050-6e84-45bd-9086-8adaeb871d43,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.188622 kubelet[3174]: E0912 17:42:59.188314 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.188622 kubelet[3174]: E0912 17:42:59.188375 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-fggbj"
Sep 12 17:42:59.188622 kubelet[3174]: E0912 17:42:59.188394 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-fggbj"
Sep 12 17:42:59.188751 kubelet[3174]: E0912 17:42:59.188430 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-fggbj_calico-system(b2903050-6e84-45bd-9086-8adaeb871d43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-fggbj_calico-system(b2903050-6e84-45bd-9086-8adaeb871d43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-fggbj" podUID="b2903050-6e84-45bd-9086-8adaeb871d43"
Sep 12 17:42:59.190268 kubelet[3174]: E0912 17:42:59.190006 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.190268 kubelet[3174]: E0912 17:42:59.190060 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d7968c8d-4l6tx"
Sep 12 17:42:59.190268 kubelet[3174]: E0912 17:42:59.190076 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d7968c8d-4l6tx"
Sep 12 17:42:59.190453 kubelet[3174]: E0912 17:42:59.190112 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77d7968c8d-4l6tx_calico-apiserver(eacf3852-e8b2-4417-862e-486f289649f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77d7968c8d-4l6tx_calico-apiserver(eacf3852-e8b2-4417-862e-486f289649f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d7968c8d-4l6tx" podUID="eacf3852-e8b2-4417-862e-486f289649f2"
Sep 12 17:42:59.201777 containerd[1710]: time="2025-09-12T17:42:59.201727811Z" level=error msg="Failed to destroy network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.202446 containerd[1710]: time="2025-09-12T17:42:59.202416213Z" level=error msg="encountered an error cleaning up failed sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.202567 containerd[1710]: time="2025-09-12T17:42:59.202545974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84db5c8b4c-5nsqv,Uid:ba349ebc-a27b-4172-bfad-7c73a43c2008,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.203446 kubelet[3174]: E0912 17:42:59.202920 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:59.203446 kubelet[3174]: E0912 17:42:59.203008 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84db5c8b4c-5nsqv"
Sep 12 17:42:59.203446 kubelet[3174]: E0912 17:42:59.203047 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84db5c8b4c-5nsqv"
Sep 12 17:42:59.203622 kubelet[3174]: E0912 17:42:59.203133 3174 pod_workers.go:1301]
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84db5c8b4c-5nsqv_calico-system(ba349ebc-a27b-4172-bfad-7c73a43c2008)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84db5c8b4c-5nsqv_calico-system(ba349ebc-a27b-4172-bfad-7c73a43c2008)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84db5c8b4c-5nsqv" podUID="ba349ebc-a27b-4172-bfad-7c73a43c2008" Sep 12 17:42:59.210189 containerd[1710]: time="2025-09-12T17:42:59.210141958Z" level=error msg="Failed to destroy network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:59.210600 containerd[1710]: time="2025-09-12T17:42:59.210574840Z" level=error msg="encountered an error cleaning up failed sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:59.210724 containerd[1710]: time="2025-09-12T17:42:59.210703920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp2wf,Uid:93bea3a5-ac3c-4316-b474-aae40e5f08e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:59.211178 kubelet[3174]: E0912 17:42:59.210971 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:59.211178 kubelet[3174]: E0912 17:42:59.211042 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp2wf" Sep 12 17:42:59.211178 kubelet[3174]: E0912 17:42:59.211079 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp2wf" Sep 12 17:42:59.211321 kubelet[3174]: E0912 17:42:59.211125 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dp2wf_calico-system(93bea3a5-ac3c-4316-b474-aae40e5f08e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dp2wf_calico-system(93bea3a5-ac3c-4316-b474-aae40e5f08e7)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7" Sep 12 17:42:59.944419 kubelet[3174]: I0912 17:42:59.944384 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:42:59.945643 containerd[1710]: time="2025-09-12T17:42:59.945057785Z" level=info msg="StopPodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\"" Sep 12 17:42:59.945643 containerd[1710]: time="2025-09-12T17:42:59.945225745Z" level=info msg="Ensure that sandbox 590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d in task-service has been cleanup successfully" Sep 12 17:42:59.947301 kubelet[3174]: I0912 17:42:59.946874 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:42:59.948766 containerd[1710]: time="2025-09-12T17:42:59.948713916Z" level=info msg="StopPodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\"" Sep 12 17:42:59.949407 containerd[1710]: time="2025-09-12T17:42:59.949372479Z" level=info msg="Ensure that sandbox 090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db in task-service has been cleanup successfully" Sep 12 17:42:59.951603 kubelet[3174]: I0912 17:42:59.950612 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:42:59.952446 containerd[1710]: time="2025-09-12T17:42:59.952099647Z" level=info msg="StopPodSandbox for 
\"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\"" Sep 12 17:42:59.953068 containerd[1710]: time="2025-09-12T17:42:59.952851730Z" level=info msg="Ensure that sandbox cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540 in task-service has been cleanup successfully" Sep 12 17:42:59.955857 kubelet[3174]: I0912 17:42:59.955830 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:42:59.959132 containerd[1710]: time="2025-09-12T17:42:59.958095106Z" level=info msg="StopPodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\"" Sep 12 17:42:59.959132 containerd[1710]: time="2025-09-12T17:42:59.958269107Z" level=info msg="Ensure that sandbox 04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba in task-service has been cleanup successfully" Sep 12 17:42:59.967514 kubelet[3174]: I0912 17:42:59.967449 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:42:59.968136 containerd[1710]: time="2025-09-12T17:42:59.968103738Z" level=info msg="StopPodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\"" Sep 12 17:42:59.970720 containerd[1710]: time="2025-09-12T17:42:59.969868744Z" level=info msg="Ensure that sandbox 80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0 in task-service has been cleanup successfully" Sep 12 17:42:59.972858 kubelet[3174]: I0912 17:42:59.972060 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:42:59.974356 containerd[1710]: time="2025-09-12T17:42:59.974305958Z" level=info msg="StopPodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\"" Sep 12 17:42:59.974554 containerd[1710]: 
time="2025-09-12T17:42:59.974485199Z" level=info msg="Ensure that sandbox 43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce in task-service has been cleanup successfully" Sep 12 17:42:59.976063 kubelet[3174]: I0912 17:42:59.976030 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:42:59.976577 containerd[1710]: time="2025-09-12T17:42:59.976531685Z" level=info msg="StopPodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\"" Sep 12 17:42:59.976728 containerd[1710]: time="2025-09-12T17:42:59.976697046Z" level=info msg="Ensure that sandbox 7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174 in task-service has been cleanup successfully" Sep 12 17:42:59.979468 kubelet[3174]: I0912 17:42:59.979440 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:42:59.980164 containerd[1710]: time="2025-09-12T17:42:59.980119857Z" level=info msg="StopPodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\"" Sep 12 17:42:59.980767 containerd[1710]: time="2025-09-12T17:42:59.980436258Z" level=info msg="Ensure that sandbox dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58 in task-service has been cleanup successfully" Sep 12 17:43:00.058839 containerd[1710]: time="2025-09-12T17:43:00.058749868Z" level=error msg="StopPodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\" failed" error="failed to destroy network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:00.059090 kubelet[3174]: E0912 17:43:00.059004 3174 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:43:00.059144 kubelet[3174]: E0912 17:43:00.059109 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d"} Sep 12 17:43:00.059839 kubelet[3174]: E0912 17:43:00.059164 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"485bd820-ef8d-438d-9780-7a5190802f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:00.059839 kubelet[3174]: E0912 17:43:00.059195 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"485bd820-ef8d-438d-9780-7a5190802f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d7968c8d-pt2kx" podUID="485bd820-ef8d-438d-9780-7a5190802f4b" Sep 12 17:43:00.063231 containerd[1710]: time="2025-09-12T17:43:00.063163042Z" level=error 
msg="StopPodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\" failed" error="failed to destroy network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:00.063455 kubelet[3174]: E0912 17:43:00.063396 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:43:00.063511 kubelet[3174]: E0912 17:43:00.063456 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db"} Sep 12 17:43:00.063511 kubelet[3174]: E0912 17:43:00.063490 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:00.063619 kubelet[3174]: E0912 17:43:00.063514 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8zkgx" podUID="34b1f8d5-6c1c-4f8f-9138-d677d0b599a6" Sep 12 17:43:00.067312 containerd[1710]: time="2025-09-12T17:43:00.067247735Z" level=error msg="StopPodSandbox for \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\" failed" error="failed to destroy network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:00.067520 kubelet[3174]: E0912 17:43:00.067478 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:43:00.067599 kubelet[3174]: E0912 17:43:00.067539 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540"} Sep 12 17:43:00.067599 kubelet[3174]: E0912 17:43:00.067574 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34236242-7d33-47ad-b860-36c4a84495b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:00.067690 kubelet[3174]: E0912 17:43:00.067596 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34236242-7d33-47ad-b860-36c4a84495b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-z8zzj" podUID="34236242-7d33-47ad-b860-36c4a84495b0" Sep 12 17:43:00.077130 containerd[1710]: time="2025-09-12T17:43:00.076870526Z" level=error msg="StopPodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\" failed" error="failed to destroy network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:00.077393 kubelet[3174]: E0912 17:43:00.077137 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:43:00.077393 kubelet[3174]: E0912 17:43:00.077188 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0"} Sep 12 17:43:00.077393 kubelet[3174]: E0912 17:43:00.077228 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:00.077393 kubelet[3174]: E0912 17:43:00.077249 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7" Sep 12 17:43:00.087681 containerd[1710]: time="2025-09-12T17:43:00.087593760Z" level=error msg="StopPodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\" failed" error="failed to destroy network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:00.088590 kubelet[3174]: E0912 17:43:00.088545 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:43:00.088665 kubelet[3174]: E0912 17:43:00.088599 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174"} Sep 12 17:43:00.088665 kubelet[3174]: E0912 17:43:00.088639 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eacf3852-e8b2-4417-862e-486f289649f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:00.088757 kubelet[3174]: E0912 17:43:00.088659 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eacf3852-e8b2-4417-862e-486f289649f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d7968c8d-4l6tx" podUID="eacf3852-e8b2-4417-862e-486f289649f2" Sep 12 17:43:00.090866 containerd[1710]: time="2025-09-12T17:43:00.090253368Z" level=error msg="StopPodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\" failed" error="failed to destroy 
network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:00.090971 kubelet[3174]: E0912 17:43:00.090517 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:43:00.090971 kubelet[3174]: E0912 17:43:00.090555 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba"} Sep 12 17:43:00.090971 kubelet[3174]: E0912 17:43:00.090583 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b2903050-6e84-45bd-9086-8adaeb871d43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:00.090971 kubelet[3174]: E0912 17:43:00.090605 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b2903050-6e84-45bd-9086-8adaeb871d43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-fggbj" podUID="b2903050-6e84-45bd-9086-8adaeb871d43" Sep 12 17:43:00.095403 containerd[1710]: time="2025-09-12T17:43:00.095294624Z" level=error msg="StopPodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\" failed" error="failed to destroy network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:00.095900 kubelet[3174]: E0912 17:43:00.095625 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:43:00.095900 kubelet[3174]: E0912 17:43:00.095672 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce"} Sep 12 17:43:00.095900 kubelet[3174]: E0912 17:43:00.095703 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba349ebc-a27b-4172-bfad-7c73a43c2008\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Sep 12 17:43:00.095900 kubelet[3174]: E0912 17:43:00.095724 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba349ebc-a27b-4172-bfad-7c73a43c2008\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84db5c8b4c-5nsqv" podUID="ba349ebc-a27b-4172-bfad-7c73a43c2008" Sep 12 17:43:00.096499 containerd[1710]: time="2025-09-12T17:43:00.096179667Z" level=error msg="StopPodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\" failed" error="failed to destroy network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:00.096571 kubelet[3174]: E0912 17:43:00.096348 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:00.096571 kubelet[3174]: E0912 17:43:00.096407 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58"} Sep 12 17:43:00.096571 kubelet[3174]: E0912 
17:43:00.096432 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b00cb80-9491-4739-9630-df8b4e81d405\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:00.096571 kubelet[3174]: E0912 17:43:00.096453 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b00cb80-9491-4739-9630-df8b4e81d405\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bf995685-p7788" podUID="1b00cb80-9491-4739-9630-df8b4e81d405" Sep 12 17:43:10.188363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590186480.mount: Deactivated successfully. 
Sep 12 17:43:10.816472 containerd[1710]: time="2025-09-12T17:43:10.816282350Z" level=info msg="StopPodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\"" Sep 12 17:43:10.818880 containerd[1710]: time="2025-09-12T17:43:10.817419594Z" level=info msg="StopPodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\"" Sep 12 17:43:10.853401 containerd[1710]: time="2025-09-12T17:43:10.853341821Z" level=error msg="StopPodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\" failed" error="failed to destroy network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:10.853843 kubelet[3174]: E0912 17:43:10.853547 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:43:10.853843 kubelet[3174]: E0912 17:43:10.853602 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db"} Sep 12 17:43:10.853843 kubelet[3174]: E0912 17:43:10.853638 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:10.853843 kubelet[3174]: E0912 17:43:10.853663 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8zkgx" podUID="34b1f8d5-6c1c-4f8f-9138-d677d0b599a6" Sep 12 17:43:10.858476 containerd[1710]: time="2025-09-12T17:43:10.858429316Z" level=error msg="StopPodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\" failed" error="failed to destroy network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:10.858921 kubelet[3174]: E0912 17:43:10.858763 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:43:10.858921 kubelet[3174]: E0912 17:43:10.858832 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174"} Sep 12 17:43:10.858921 kubelet[3174]: E0912 17:43:10.858866 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eacf3852-e8b2-4417-862e-486f289649f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:10.858921 kubelet[3174]: E0912 17:43:10.858888 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eacf3852-e8b2-4417-862e-486f289649f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d7968c8d-4l6tx" podUID="eacf3852-e8b2-4417-862e-486f289649f2" Sep 12 17:43:12.816913 containerd[1710]: time="2025-09-12T17:43:12.815830261Z" level=info msg="StopPodSandbox for \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\"" Sep 12 17:43:12.816913 containerd[1710]: time="2025-09-12T17:43:12.816005261Z" level=info msg="StopPodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\"" Sep 12 17:43:12.817383 containerd[1710]: time="2025-09-12T17:43:12.815830421Z" level=info msg="StopPodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\"" Sep 12 17:43:12.852749 containerd[1710]: time="2025-09-12T17:43:12.852638331Z" level=error msg="StopPodSandbox for 
\"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\" failed" error="failed to destroy network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:12.853249 kubelet[3174]: E0912 17:43:12.853030 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:43:12.853249 kubelet[3174]: E0912 17:43:12.853095 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540"} Sep 12 17:43:12.853249 kubelet[3174]: E0912 17:43:12.853129 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34236242-7d33-47ad-b860-36c4a84495b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:12.853249 kubelet[3174]: E0912 17:43:12.853159 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34236242-7d33-47ad-b860-36c4a84495b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-z8zzj" podUID="34236242-7d33-47ad-b860-36c4a84495b0" Sep 12 17:43:12.871352 containerd[1710]: time="2025-09-12T17:43:12.871303347Z" level=error msg="StopPodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\" failed" error="failed to destroy network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:12.872430 kubelet[3174]: E0912 17:43:12.871711 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:43:12.872430 kubelet[3174]: E0912 17:43:12.871771 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce"} Sep 12 17:43:12.872430 kubelet[3174]: E0912 17:43:12.871821 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba349ebc-a27b-4172-bfad-7c73a43c2008\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:12.872430 kubelet[3174]: E0912 17:43:12.871844 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba349ebc-a27b-4172-bfad-7c73a43c2008\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84db5c8b4c-5nsqv" podUID="ba349ebc-a27b-4172-bfad-7c73a43c2008" Sep 12 17:43:12.875121 containerd[1710]: time="2025-09-12T17:43:12.875074558Z" level=error msg="StopPodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\" failed" error="failed to destroy network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:12.875631 kubelet[3174]: E0912 17:43:12.875496 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:43:12.875631 kubelet[3174]: E0912 17:43:12.875552 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba"} Sep 12 17:43:12.875631 kubelet[3174]: E0912 17:43:12.875582 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b2903050-6e84-45bd-9086-8adaeb871d43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:12.875631 kubelet[3174]: E0912 17:43:12.875603 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b2903050-6e84-45bd-9086-8adaeb871d43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-fggbj" podUID="b2903050-6e84-45bd-9086-8adaeb871d43" Sep 12 17:43:14.074617 containerd[1710]: time="2025-09-12T17:43:14.074155430Z" level=info msg="StopPodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\"" Sep 12 17:43:14.116597 containerd[1710]: time="2025-09-12T17:43:14.116549637Z" level=error msg="StopPodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\" failed" error="failed to destroy network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
17:43:14.117128 kubelet[3174]: E0912 17:43:14.116977 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:43:14.117128 kubelet[3174]: E0912 17:43:14.117031 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0"} Sep 12 17:43:14.117128 kubelet[3174]: E0912 17:43:14.117070 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:14.117128 kubelet[3174]: E0912 17:43:14.117093 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93bea3a5-ac3c-4316-b474-aae40e5f08e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dp2wf" podUID="93bea3a5-ac3c-4316-b474-aae40e5f08e7" Sep 12 17:43:14.815258 containerd[1710]: 
time="2025-09-12T17:43:14.814997450Z" level=info msg="StopPodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\"" Sep 12 17:43:14.843585 containerd[1710]: time="2025-09-12T17:43:14.841669090Z" level=error msg="StopPodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\" failed" error="failed to destroy network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:14.843718 kubelet[3174]: E0912 17:43:14.843450 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:43:14.843718 kubelet[3174]: E0912 17:43:14.843495 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d"} Sep 12 17:43:14.843718 kubelet[3174]: E0912 17:43:14.843526 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"485bd820-ef8d-438d-9780-7a5190802f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:14.843718 kubelet[3174]: E0912 
17:43:14.843546 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"485bd820-ef8d-438d-9780-7a5190802f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d7968c8d-pt2kx" podUID="485bd820-ef8d-438d-9780-7a5190802f4b" Sep 12 17:43:15.817555 containerd[1710]: time="2025-09-12T17:43:15.814732205Z" level=info msg="StopPodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\"" Sep 12 17:43:15.850602 containerd[1710]: time="2025-09-12T17:43:15.850547272Z" level=error msg="StopPodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\" failed" error="failed to destroy network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:43:15.855108 kubelet[3174]: E0912 17:43:15.854299 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:15.855108 kubelet[3174]: E0912 17:43:15.854354 3174 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58"} Sep 12 17:43:15.855108 kubelet[3174]: E0912 17:43:15.854388 3174 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b00cb80-9491-4739-9630-df8b4e81d405\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:43:15.855108 kubelet[3174]: E0912 17:43:15.854410 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b00cb80-9491-4739-9630-df8b4e81d405\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bf995685-p7788" podUID="1b00cb80-9491-4739-9630-df8b4e81d405" Sep 12 17:43:16.020068 containerd[1710]: time="2025-09-12T17:43:16.019345458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:18.257900 containerd[1710]: time="2025-09-12T17:43:18.257846533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:43:19.105978 containerd[1710]: time="2025-09-12T17:43:19.105909221Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 
17:43:19.611997 containerd[1710]: time="2025-09-12T17:43:19.611891042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:19.613263 containerd[1710]: time="2025-09-12T17:43:19.612642324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 20.668275414s" Sep 12 17:43:19.613263 containerd[1710]: time="2025-09-12T17:43:19.612674284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:43:19.634509 containerd[1710]: time="2025-09-12T17:43:19.634464307Z" level=info msg="CreateContainer within sandbox \"e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:43:20.458069 containerd[1710]: time="2025-09-12T17:43:20.457949524Z" level=info msg="CreateContainer within sandbox \"e94b5cf056b8c06add02ab1526ff3ba9d4a2790f7ae442e272c0b42155de2b6d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d54fd04d887aeda9c0525071e47c2dcde022de21ec0121d2cd66b5b686559343\"" Sep 12 17:43:20.458929 containerd[1710]: time="2025-09-12T17:43:20.458898687Z" level=info msg="StartContainer for \"d54fd04d887aeda9c0525071e47c2dcde022de21ec0121d2cd66b5b686559343\"" Sep 12 17:43:20.486917 systemd[1]: Started cri-containerd-d54fd04d887aeda9c0525071e47c2dcde022de21ec0121d2cd66b5b686559343.scope - libcontainer container d54fd04d887aeda9c0525071e47c2dcde022de21ec0121d2cd66b5b686559343. 
Sep 12 17:43:20.520997 containerd[1710]: time="2025-09-12T17:43:20.520512584Z" level=info msg="StartContainer for \"d54fd04d887aeda9c0525071e47c2dcde022de21ec0121d2cd66b5b686559343\" returns successfully" Sep 12 17:43:20.943346 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:43:20.943498 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:43:21.054507 containerd[1710]: time="2025-09-12T17:43:21.054458046Z" level=info msg="StopPodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\"" Sep 12 17:43:21.160410 kubelet[3174]: I0912 17:43:21.160328 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4hr7d" podStartSLOduration=2.06621028 podStartE2EDuration="34.160308111s" podCreationTimestamp="2025-09-12 17:42:47 +0000 UTC" firstStartedPulling="2025-09-12 17:42:47.520013177 +0000 UTC m=+24.812011444" lastFinishedPulling="2025-09-12 17:43:19.614110968 +0000 UTC m=+56.906109275" observedRunningTime="2025-09-12 17:43:21.156612101 +0000 UTC m=+58.448610448" watchObservedRunningTime="2025-09-12 17:43:21.160308111 +0000 UTC m=+58.452306418" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.185 [INFO][4490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.185 [INFO][4490] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" iface="eth0" netns="/var/run/netns/cni-d4719e02-de2e-4db4-fd71-0eed44a3b65e" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.185 [INFO][4490] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" iface="eth0" netns="/var/run/netns/cni-d4719e02-de2e-4db4-fd71-0eed44a3b65e" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.186 [INFO][4490] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" iface="eth0" netns="/var/run/netns/cni-d4719e02-de2e-4db4-fd71-0eed44a3b65e" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.186 [INFO][4490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.186 [INFO][4490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.233 [INFO][4521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.233 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.233 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.243 [WARNING][4521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.243 [INFO][4521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.245 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:21.250059 containerd[1710]: 2025-09-12 17:43:21.247 [INFO][4490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:21.254686 containerd[1710]: time="2025-09-12T17:43:21.254526983Z" level=info msg="TearDown network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\" successfully" Sep 12 17:43:21.254686 containerd[1710]: time="2025-09-12T17:43:21.254562543Z" level=info msg="StopPodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\" returns successfully" Sep 12 17:43:21.255960 systemd[1]: run-netns-cni\x2dd4719e02\x2dde2e\x2d4db4\x2dfd71\x2d0eed44a3b65e.mount: Deactivated successfully. 
Sep 12 17:43:21.316277 kubelet[3174]: I0912 17:43:21.316235 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b00cb80-9491-4739-9630-df8b4e81d405-whisker-ca-bundle\") pod \"1b00cb80-9491-4739-9630-df8b4e81d405\" (UID: \"1b00cb80-9491-4739-9630-df8b4e81d405\") " Sep 12 17:43:21.316425 kubelet[3174]: I0912 17:43:21.316311 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1b00cb80-9491-4739-9630-df8b4e81d405-whisker-backend-key-pair\") pod \"1b00cb80-9491-4739-9630-df8b4e81d405\" (UID: \"1b00cb80-9491-4739-9630-df8b4e81d405\") " Sep 12 17:43:21.316425 kubelet[3174]: I0912 17:43:21.316334 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4jwr\" (UniqueName: \"kubernetes.io/projected/1b00cb80-9491-4739-9630-df8b4e81d405-kube-api-access-c4jwr\") pod \"1b00cb80-9491-4739-9630-df8b4e81d405\" (UID: \"1b00cb80-9491-4739-9630-df8b4e81d405\") " Sep 12 17:43:21.317035 kubelet[3174]: I0912 17:43:21.316769 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b00cb80-9491-4739-9630-df8b4e81d405-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1b00cb80-9491-4739-9630-df8b4e81d405" (UID: "1b00cb80-9491-4739-9630-df8b4e81d405"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:43:21.320563 systemd[1]: var-lib-kubelet-pods-1b00cb80\x2d9491\x2d4739\x2d9630\x2ddf8b4e81d405-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4jwr.mount: Deactivated successfully. Sep 12 17:43:21.320678 systemd[1]: var-lib-kubelet-pods-1b00cb80\x2d9491\x2d4739\x2d9630\x2ddf8b4e81d405-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:43:21.321003 kubelet[3174]: I0912 17:43:21.320940 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b00cb80-9491-4739-9630-df8b4e81d405-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1b00cb80-9491-4739-9630-df8b4e81d405" (UID: "1b00cb80-9491-4739-9630-df8b4e81d405"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:43:21.321287 kubelet[3174]: I0912 17:43:21.321256 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b00cb80-9491-4739-9630-df8b4e81d405-kube-api-access-c4jwr" (OuterVolumeSpecName: "kube-api-access-c4jwr") pod "1b00cb80-9491-4739-9630-df8b4e81d405" (UID: "1b00cb80-9491-4739-9630-df8b4e81d405"). InnerVolumeSpecName "kube-api-access-c4jwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:43:21.416813 kubelet[3174]: I0912 17:43:21.416696 3174 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b00cb80-9491-4739-9630-df8b4e81d405-whisker-ca-bundle\") on node \"ci-4081.3.6-a-d7d9773d19\" DevicePath \"\"" Sep 12 17:43:21.416813 kubelet[3174]: I0912 17:43:21.416735 3174 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1b00cb80-9491-4739-9630-df8b4e81d405-whisker-backend-key-pair\") on node \"ci-4081.3.6-a-d7d9773d19\" DevicePath \"\"" Sep 12 17:43:21.416813 kubelet[3174]: I0912 17:43:21.416748 3174 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4jwr\" (UniqueName: \"kubernetes.io/projected/1b00cb80-9491-4739-9630-df8b4e81d405-kube-api-access-c4jwr\") on node \"ci-4081.3.6-a-d7d9773d19\" DevicePath \"\"" Sep 12 17:43:22.122990 systemd[1]: Removed slice kubepods-besteffort-pod1b00cb80_9491_4739_9630_df8b4e81d405.slice - libcontainer container 
kubepods-besteffort-pod1b00cb80_9491_4739_9630_df8b4e81d405.slice. Sep 12 17:43:22.140428 systemd[1]: run-containerd-runc-k8s.io-d54fd04d887aeda9c0525071e47c2dcde022de21ec0121d2cd66b5b686559343-runc.7Nu39S.mount: Deactivated successfully. Sep 12 17:43:22.225469 systemd[1]: Created slice kubepods-besteffort-pod8864b20a_8236_49e9_9367_c7e12f5d3669.slice - libcontainer container kubepods-besteffort-pod8864b20a_8236_49e9_9367_c7e12f5d3669.slice. Sep 12 17:43:22.323254 kubelet[3174]: I0912 17:43:22.323199 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8864b20a-8236-49e9-9367-c7e12f5d3669-whisker-backend-key-pair\") pod \"whisker-77df4565f6-vv6s9\" (UID: \"8864b20a-8236-49e9-9367-c7e12f5d3669\") " pod="calico-system/whisker-77df4565f6-vv6s9" Sep 12 17:43:22.323254 kubelet[3174]: I0912 17:43:22.323251 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8864b20a-8236-49e9-9367-c7e12f5d3669-whisker-ca-bundle\") pod \"whisker-77df4565f6-vv6s9\" (UID: \"8864b20a-8236-49e9-9367-c7e12f5d3669\") " pod="calico-system/whisker-77df4565f6-vv6s9" Sep 12 17:43:22.323636 kubelet[3174]: I0912 17:43:22.323275 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltnqz\" (UniqueName: \"kubernetes.io/projected/8864b20a-8236-49e9-9367-c7e12f5d3669-kube-api-access-ltnqz\") pod \"whisker-77df4565f6-vv6s9\" (UID: \"8864b20a-8236-49e9-9367-c7e12f5d3669\") " pod="calico-system/whisker-77df4565f6-vv6s9" Sep 12 17:43:22.529466 containerd[1710]: time="2025-09-12T17:43:22.529406063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77df4565f6-vv6s9,Uid:8864b20a-8236-49e9-9367-c7e12f5d3669,Namespace:calico-system,Attempt:0,}" Sep 12 17:43:22.790526 systemd-networkd[1587]: cali5736189b601: Link UP 
Sep 12 17:43:22.791940 systemd-networkd[1587]: cali5736189b601: Gained carrier Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.631 [INFO][4654] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.650 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0 whisker-77df4565f6- calico-system 8864b20a-8236-49e9-9367-c7e12f5d3669 932 0 2025-09-12 17:43:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77df4565f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-a-d7d9773d19 whisker-77df4565f6-vv6s9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5736189b601 [] [] }} ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Namespace="calico-system" Pod="whisker-77df4565f6-vv6s9" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.650 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Namespace="calico-system" Pod="whisker-77df4565f6-vv6s9" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.704 [INFO][4667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" HandleID="k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.706 [INFO][4667] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" HandleID="k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031a160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-d7d9773d19", "pod":"whisker-77df4565f6-vv6s9", "timestamp":"2025-09-12 17:43:22.704573649 +0000 UTC"}, Hostname:"ci-4081.3.6-a-d7d9773d19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.706 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.706 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.706 [INFO][4667] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-d7d9773d19' Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.717 [INFO][4667] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.727 [INFO][4667] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.733 [INFO][4667] ipam/ipam.go 511: Trying affinity for 192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.736 [INFO][4667] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.739 [INFO][4667] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.739 [INFO][4667] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.128/26 handle="k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.742 [INFO][4667] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2 Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.747 [INFO][4667] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.128/26 handle="k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.758 [INFO][4667] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.129/26] block=192.168.69.128/26 handle="k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.758 [INFO][4667] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.129/26] handle="k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.758 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:22.826083 containerd[1710]: 2025-09-12 17:43:22.758 [INFO][4667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.129/26] IPv6=[] ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" HandleID="k8s-pod-network.45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" Sep 12 17:43:22.829243 containerd[1710]: 2025-09-12 17:43:22.763 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Namespace="calico-system" Pod="whisker-77df4565f6-vv6s9" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0", GenerateName:"whisker-77df4565f6-", Namespace:"calico-system", SelfLink:"", UID:"8864b20a-8236-49e9-9367-c7e12f5d3669", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77df4565f6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"", Pod:"whisker-77df4565f6-vv6s9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5736189b601", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:22.829243 containerd[1710]: 2025-09-12 17:43:22.763 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.129/32] ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Namespace="calico-system" Pod="whisker-77df4565f6-vv6s9" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" Sep 12 17:43:22.829243 containerd[1710]: 2025-09-12 17:43:22.764 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5736189b601 ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Namespace="calico-system" Pod="whisker-77df4565f6-vv6s9" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" Sep 12 17:43:22.829243 containerd[1710]: 2025-09-12 17:43:22.792 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Namespace="calico-system" Pod="whisker-77df4565f6-vv6s9" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" Sep 12 17:43:22.829243 containerd[1710]: 2025-09-12 17:43:22.794 [INFO][4654] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Namespace="calico-system" Pod="whisker-77df4565f6-vv6s9" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0", GenerateName:"whisker-77df4565f6-", Namespace:"calico-system", SelfLink:"", UID:"8864b20a-8236-49e9-9367-c7e12f5d3669", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77df4565f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2", Pod:"whisker-77df4565f6-vv6s9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5736189b601", MAC:"6e:ca:b1:79:26:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:22.829243 containerd[1710]: 2025-09-12 17:43:22.818 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2" Namespace="calico-system" Pod="whisker-77df4565f6-vv6s9" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--77df4565f6--vv6s9-eth0" Sep 12 17:43:22.836778 containerd[1710]: time="2025-09-12T17:43:22.836172309Z" level=info msg="StopPodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\"" Sep 12 17:43:22.839919 containerd[1710]: time="2025-09-12T17:43:22.836566710Z" level=info msg="StopPodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\"" Sep 12 17:43:22.841759 kubelet[3174]: I0912 17:43:22.841715 3174 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b00cb80-9491-4739-9630-df8b4e81d405" path="/var/lib/kubelet/pods/1b00cb80-9491-4739-9630-df8b4e81d405/volumes" Sep 12 17:43:22.843830 containerd[1710]: time="2025-09-12T17:43:22.843750290Z" level=info msg="StopPodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\"" Sep 12 17:43:22.934475 containerd[1710]: time="2025-09-12T17:43:22.930304340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:22.934475 containerd[1710]: time="2025-09-12T17:43:22.930364460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:22.934475 containerd[1710]: time="2025-09-12T17:43:22.930399181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:22.934475 containerd[1710]: time="2025-09-12T17:43:22.930480941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:23.002900 systemd[1]: Started cri-containerd-45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2.scope - libcontainer container 45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2. Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.007 [INFO][4709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.014 [INFO][4709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" iface="eth0" netns="/var/run/netns/cni-0c095239-092d-886c-9027-0fc033eb0502" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.014 [INFO][4709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" iface="eth0" netns="/var/run/netns/cni-0c095239-092d-886c-9027-0fc033eb0502" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.014 [INFO][4709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" iface="eth0" netns="/var/run/netns/cni-0c095239-092d-886c-9027-0fc033eb0502" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.014 [INFO][4709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.014 [INFO][4709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.068 [INFO][4772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.069 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.069 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.085 [WARNING][4772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.085 [INFO][4772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.087 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:23.097769 containerd[1710]: 2025-09-12 17:43:23.091 [INFO][4709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:43:23.101258 containerd[1710]: time="2025-09-12T17:43:23.101202954Z" level=info msg="TearDown network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\" successfully" Sep 12 17:43:23.101258 containerd[1710]: time="2025-09-12T17:43:23.101250314Z" level=info msg="StopPodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\" returns successfully" Sep 12 17:43:23.101975 containerd[1710]: time="2025-09-12T17:43:23.101936116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d7968c8d-4l6tx,Uid:eacf3852-e8b2-4417-862e-486f289649f2,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.032 [WARNING][4722] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" 
WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.033 [INFO][4722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.033 [INFO][4722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" iface="eth0" netns="" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.033 [INFO][4722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.033 [INFO][4722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.099 [INFO][4777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.100 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.100 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.120 [WARNING][4777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.121 [INFO][4777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.124 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:23.129332 containerd[1710]: 2025-09-12 17:43:23.128 [INFO][4722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:23.130909 containerd[1710]: time="2025-09-12T17:43:23.130879959Z" level=info msg="TearDown network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\" successfully" Sep 12 17:43:23.130999 containerd[1710]: time="2025-09-12T17:43:23.130985239Z" level=info msg="StopPodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\" returns successfully" Sep 12 17:43:23.132431 containerd[1710]: time="2025-09-12T17:43:23.132396884Z" level=info msg="RemovePodSandbox for \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\"" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.048 [INFO][4721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.049 [INFO][4721] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" iface="eth0" netns="/var/run/netns/cni-b4efaeef-fffc-0fb8-eb81-8898a9a33be8" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.049 [INFO][4721] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" iface="eth0" netns="/var/run/netns/cni-b4efaeef-fffc-0fb8-eb81-8898a9a33be8" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.050 [INFO][4721] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" iface="eth0" netns="/var/run/netns/cni-b4efaeef-fffc-0fb8-eb81-8898a9a33be8" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.050 [INFO][4721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.050 [INFO][4721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.100 [INFO][4786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.100 [INFO][4786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.124 [INFO][4786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.143 [WARNING][4786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.143 [INFO][4786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.145 [INFO][4786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:23.168332 containerd[1710]: 2025-09-12 17:43:23.148 [INFO][4721] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:43:23.175100 containerd[1710]: time="2025-09-12T17:43:23.174603005Z" level=info msg="Forcibly stopping sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\"" Sep 12 17:43:23.180638 containerd[1710]: time="2025-09-12T17:43:23.180512302Z" level=info msg="TearDown network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\" successfully" Sep 12 17:43:23.181223 containerd[1710]: time="2025-09-12T17:43:23.180979984Z" level=info msg="StopPodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\" returns successfully" Sep 12 17:43:23.182265 containerd[1710]: time="2025-09-12T17:43:23.182067067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8zkgx,Uid:34b1f8d5-6c1c-4f8f-9138-d677d0b599a6,Namespace:kube-system,Attempt:1,}" Sep 12 17:43:23.226679 containerd[1710]: time="2025-09-12T17:43:23.225016831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77df4565f6-vv6s9,Uid:8864b20a-8236-49e9-9367-c7e12f5d3669,Namespace:calico-system,Attempt:0,} returns sandbox id \"45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2\"" Sep 12 17:43:23.231400 containerd[1710]: time="2025-09-12T17:43:23.231305529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:43:23.312875 kernel: bpftool[4869]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:43:23.389123 systemd-networkd[1587]: calie2d7971697e: Link UP Sep 12 17:43:23.389640 systemd-networkd[1587]: calie2d7971697e: Gained carrier Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.297 [WARNING][4830] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" 
WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.297 [INFO][4830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.297 [INFO][4830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" iface="eth0" netns="" Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.297 [INFO][4830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.297 [INFO][4830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.364 [INFO][4868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.364 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.381 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.401 [WARNING][4868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.401 [INFO][4868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" HandleID="k8s-pod-network.dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Workload="ci--4081.3.6--a--d7d9773d19-k8s-whisker--79bf995685--p7788-eth0" Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.402 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:23.415755 containerd[1710]: 2025-09-12 17:43:23.413 [INFO][4830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58" Sep 12 17:43:23.416119 containerd[1710]: time="2025-09-12T17:43:23.415816742Z" level=info msg="TearDown network for sandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\" successfully" Sep 12 17:43:23.425468 containerd[1710]: time="2025-09-12T17:43:23.425405849Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:23.425587 containerd[1710]: time="2025-09-12T17:43:23.425482810Z" level=info msg="RemovePodSandbox \"dee466ea8ab497ad4d734a41e5d2e891f495a89c6c03cab534d2ecd2d68fbb58\" returns successfully" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.195 [INFO][4808] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.246 [INFO][4808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0 calico-apiserver-77d7968c8d- calico-apiserver eacf3852-e8b2-4417-862e-486f289649f2 941 0 2025-09-12 17:42:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77d7968c8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-d7d9773d19 calico-apiserver-77d7968c8d-4l6tx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie2d7971697e [] [] }} ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-4l6tx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.246 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-4l6tx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.321 [INFO][4847] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" 
HandleID="k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.321 [INFO][4847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" HandleID="k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-d7d9773d19", "pod":"calico-apiserver-77d7968c8d-4l6tx", "timestamp":"2025-09-12 17:43:23.321162388 +0000 UTC"}, Hostname:"ci-4081.3.6-a-d7d9773d19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.321 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.321 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.321 [INFO][4847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-d7d9773d19' Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.337 [INFO][4847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.343 [INFO][4847] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.351 [INFO][4847] ipam/ipam.go 511: Trying affinity for 192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.354 [INFO][4847] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.360 [INFO][4847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.360 [INFO][4847] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.128/26 handle="k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.362 [INFO][4847] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4 Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.369 [INFO][4847] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.128/26 handle="k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.381 [INFO][4847] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.130/26] block=192.168.69.128/26 handle="k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.381 [INFO][4847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.130/26] handle="k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.381 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:23.426255 containerd[1710]: 2025-09-12 17:43:23.381 [INFO][4847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.130/26] IPv6=[] ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" HandleID="k8s-pod-network.2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.426763 containerd[1710]: 2025-09-12 17:43:23.386 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-4l6tx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0", GenerateName:"calico-apiserver-77d7968c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"eacf3852-e8b2-4417-862e-486f289649f2", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d7968c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"", Pod:"calico-apiserver-77d7968c8d-4l6tx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2d7971697e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:23.426763 containerd[1710]: 2025-09-12 17:43:23.386 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.130/32] ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-4l6tx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.426763 containerd[1710]: 2025-09-12 17:43:23.386 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2d7971697e ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-4l6tx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.426763 containerd[1710]: 2025-09-12 17:43:23.390 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Namespace="calico-apiserver" 
Pod="calico-apiserver-77d7968c8d-4l6tx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.426763 containerd[1710]: 2025-09-12 17:43:23.391 [INFO][4808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-4l6tx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0", GenerateName:"calico-apiserver-77d7968c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"eacf3852-e8b2-4417-862e-486f289649f2", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d7968c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4", Pod:"calico-apiserver-77d7968c8d-4l6tx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calie2d7971697e", MAC:"4a:b7:28:9d:0f:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:23.426763 containerd[1710]: 2025-09-12 17:43:23.420 [INFO][4808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-4l6tx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:43:23.432281 systemd[1]: run-netns-cni\x2d0c095239\x2d092d\x2d886c\x2d9027\x2d0fc033eb0502.mount: Deactivated successfully. Sep 12 17:43:23.432372 systemd[1]: run-netns-cni\x2db4efaeef\x2dfffc\x2d0fb8\x2deb81\x2d8898a9a33be8.mount: Deactivated successfully. Sep 12 17:43:23.494661 containerd[1710]: time="2025-09-12T17:43:23.493986447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:23.494787 containerd[1710]: time="2025-09-12T17:43:23.494680329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:23.494787 containerd[1710]: time="2025-09-12T17:43:23.494710769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:23.499410 containerd[1710]: time="2025-09-12T17:43:23.494958090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:23.504113 systemd-networkd[1587]: cali4e928688eed: Link UP Sep 12 17:43:23.504742 systemd-networkd[1587]: cali4e928688eed: Gained carrier Sep 12 17:43:23.526985 systemd[1]: Started cri-containerd-2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4.scope - libcontainer container 2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4. Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.320 [INFO][4843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0 coredns-7c65d6cfc9- kube-system 34b1f8d5-6c1c-4f8f-9138-d677d0b599a6 942 0 2025-09-12 17:42:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-d7d9773d19 coredns-7c65d6cfc9-8zkgx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e928688eed [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8zkgx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.321 [INFO][4843] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8zkgx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.375 [INFO][4877] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" 
HandleID="k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.375 [INFO][4877] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" HandleID="k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1390), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-d7d9773d19", "pod":"coredns-7c65d6cfc9-8zkgx", "timestamp":"2025-09-12 17:43:23.375001184 +0000 UTC"}, Hostname:"ci-4081.3.6-a-d7d9773d19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.375 [INFO][4877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.406 [INFO][4877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.406 [INFO][4877] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-d7d9773d19' Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.436 [INFO][4877] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.444 [INFO][4877] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.450 [INFO][4877] ipam/ipam.go 511: Trying affinity for 192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.454 [INFO][4877] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.459 [INFO][4877] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.459 [INFO][4877] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.128/26 handle="k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.463 [INFO][4877] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12 Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.470 [INFO][4877] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.128/26 handle="k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.485 [INFO][4877] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.131/26] block=192.168.69.128/26 handle="k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.485 [INFO][4877] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.131/26] handle="k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.486 [INFO][4877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:23.543575 containerd[1710]: 2025-09-12 17:43:23.486 [INFO][4877] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.131/26] IPv6=[] ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" HandleID="k8s-pod-network.81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.545614 containerd[1710]: 2025-09-12 17:43:23.488 [INFO][4843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8zkgx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"", Pod:"coredns-7c65d6cfc9-8zkgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e928688eed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:23.545614 containerd[1710]: 2025-09-12 17:43:23.488 [INFO][4843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.131/32] ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8zkgx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.545614 containerd[1710]: 2025-09-12 17:43:23.489 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e928688eed ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8zkgx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.545614 containerd[1710]: 2025-09-12 17:43:23.504 [INFO][4843] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8zkgx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.545614 containerd[1710]: 2025-09-12 17:43:23.507 [INFO][4843] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8zkgx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12", Pod:"coredns-7c65d6cfc9-8zkgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e928688eed", 
MAC:"1a:e3:b4:88:ce:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:23.545614 containerd[1710]: 2025-09-12 17:43:23.541 [INFO][4843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8zkgx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:43:23.582693 containerd[1710]: time="2025-09-12T17:43:23.582613063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:23.583237 containerd[1710]: time="2025-09-12T17:43:23.582670143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:23.583237 containerd[1710]: time="2025-09-12T17:43:23.582680623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:23.583237 containerd[1710]: time="2025-09-12T17:43:23.582756504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:23.606201 containerd[1710]: time="2025-09-12T17:43:23.605782490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d7968c8d-4l6tx,Uid:eacf3852-e8b2-4417-862e-486f289649f2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4\"" Sep 12 17:43:23.621024 systemd[1]: Started cri-containerd-81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12.scope - libcontainer container 81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12. Sep 12 17:43:23.663241 containerd[1710]: time="2025-09-12T17:43:23.663013095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8zkgx,Uid:34b1f8d5-6c1c-4f8f-9138-d677d0b599a6,Namespace:kube-system,Attempt:1,} returns sandbox id \"81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12\"" Sep 12 17:43:23.670253 containerd[1710]: time="2025-09-12T17:43:23.670021235Z" level=info msg="CreateContainer within sandbox \"81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:43:23.706273 containerd[1710]: time="2025-09-12T17:43:23.706222620Z" level=info msg="CreateContainer within sandbox \"81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c519a058bdb6d6b3412f9275ad900b838eb85ffab7a1ee9ac297e4991401fc13\"" Sep 12 17:43:23.707775 containerd[1710]: time="2025-09-12T17:43:23.707747704Z" level=info msg="StartContainer for \"c519a058bdb6d6b3412f9275ad900b838eb85ffab7a1ee9ac297e4991401fc13\"" Sep 12 17:43:23.734963 systemd[1]: Started cri-containerd-c519a058bdb6d6b3412f9275ad900b838eb85ffab7a1ee9ac297e4991401fc13.scope - libcontainer container c519a058bdb6d6b3412f9275ad900b838eb85ffab7a1ee9ac297e4991401fc13. 
Sep 12 17:43:23.755331 systemd-networkd[1587]: vxlan.calico: Link UP Sep 12 17:43:23.755340 systemd-networkd[1587]: vxlan.calico: Gained carrier Sep 12 17:43:23.781540 containerd[1710]: time="2025-09-12T17:43:23.781375077Z" level=info msg="StartContainer for \"c519a058bdb6d6b3412f9275ad900b838eb85ffab7a1ee9ac297e4991401fc13\" returns successfully" Sep 12 17:43:24.172392 kubelet[3174]: I0912 17:43:24.172307 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8zkgx" podStartSLOduration=55.172267445 podStartE2EDuration="55.172267445s" podCreationTimestamp="2025-09-12 17:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:24.148039695 +0000 UTC m=+61.440038002" watchObservedRunningTime="2025-09-12 17:43:24.172267445 +0000 UTC m=+61.464265712" Sep 12 17:43:24.818020 containerd[1710]: time="2025-09-12T17:43:24.816946506Z" level=info msg="StopPodSandbox for \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\"" Sep 12 17:43:24.829944 systemd-networkd[1587]: cali5736189b601: Gained IPv6LL Sep 12 17:43:24.957238 systemd-networkd[1587]: cali4e928688eed: Gained IPv6LL Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.884 [INFO][5112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.885 [INFO][5112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" iface="eth0" netns="/var/run/netns/cni-cd1570e4-d55e-7ba9-7cc1-b5a95b5c5c2c" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.885 [INFO][5112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" iface="eth0" netns="/var/run/netns/cni-cd1570e4-d55e-7ba9-7cc1-b5a95b5c5c2c" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.887 [INFO][5112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" iface="eth0" netns="/var/run/netns/cni-cd1570e4-d55e-7ba9-7cc1-b5a95b5c5c2c" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.888 [INFO][5112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.888 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.944 [INFO][5124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.944 [INFO][5124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.944 [INFO][5124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.955 [WARNING][5124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.955 [INFO][5124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.958 [INFO][5124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:24.964593 containerd[1710]: 2025-09-12 17:43:24.960 [INFO][5112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:43:24.966828 containerd[1710]: time="2025-09-12T17:43:24.965258574Z" level=info msg="TearDown network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\" successfully" Sep 12 17:43:24.966828 containerd[1710]: time="2025-09-12T17:43:24.965291894Z" level=info msg="StopPodSandbox for \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\" returns successfully" Sep 12 17:43:24.967585 containerd[1710]: time="2025-09-12T17:43:24.967483861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z8zzj,Uid:34236242-7d33-47ad-b860-36c4a84495b0,Namespace:kube-system,Attempt:1,}" Sep 12 17:43:24.968444 systemd[1]: run-netns-cni\x2dcd1570e4\x2dd55e\x2d7ba9\x2d7cc1\x2db5a95b5c5c2c.mount: Deactivated successfully. 
Sep 12 17:43:25.035817 containerd[1710]: time="2025-09-12T17:43:25.035756338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:25.039998 containerd[1710]: time="2025-09-12T17:43:25.039630429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:43:25.042485 containerd[1710]: time="2025-09-12T17:43:25.042352117Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:25.050544 containerd[1710]: time="2025-09-12T17:43:25.050050859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:25.069975 containerd[1710]: time="2025-09-12T17:43:25.069115554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.837767585s" Sep 12 17:43:25.069975 containerd[1710]: time="2025-09-12T17:43:25.069184034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:43:25.075843 containerd[1710]: time="2025-09-12T17:43:25.074643930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:43:25.077169 containerd[1710]: time="2025-09-12T17:43:25.077100417Z" level=info msg="CreateContainer within sandbox \"45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2\" 
for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:43:25.127350 containerd[1710]: time="2025-09-12T17:43:25.127103361Z" level=info msg="CreateContainer within sandbox \"45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"060ff9891f0229ca6e69a4d0b217d321dfd5ef8b2581a501aaaf134be4811fd9\"" Sep 12 17:43:25.129348 containerd[1710]: time="2025-09-12T17:43:25.128870686Z" level=info msg="StartContainer for \"060ff9891f0229ca6e69a4d0b217d321dfd5ef8b2581a501aaaf134be4811fd9\"" Sep 12 17:43:25.148894 systemd-networkd[1587]: vxlan.calico: Gained IPv6LL Sep 12 17:43:25.167994 systemd[1]: Started cri-containerd-060ff9891f0229ca6e69a4d0b217d321dfd5ef8b2581a501aaaf134be4811fd9.scope - libcontainer container 060ff9891f0229ca6e69a4d0b217d321dfd5ef8b2581a501aaaf134be4811fd9. Sep 12 17:43:25.199694 systemd-networkd[1587]: calif5158b762c2: Link UP Sep 12 17:43:25.200670 systemd-networkd[1587]: calif5158b762c2: Gained carrier Sep 12 17:43:25.228533 containerd[1710]: time="2025-09-12T17:43:25.228486694Z" level=info msg="StartContainer for \"060ff9891f0229ca6e69a4d0b217d321dfd5ef8b2581a501aaaf134be4811fd9\" returns successfully" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.076 [INFO][5132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0 coredns-7c65d6cfc9- kube-system 34236242-7d33-47ad-b860-36c4a84495b0 967 0 2025-09-12 17:42:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-d7d9773d19 coredns-7c65d6cfc9-z8zzj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5158b762c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z8zzj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.077 [INFO][5132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z8zzj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.106 [INFO][5144] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" HandleID="k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.106 [INFO][5144] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" HandleID="k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-d7d9773d19", "pod":"coredns-7c65d6cfc9-z8zzj", "timestamp":"2025-09-12 17:43:25.106708142 +0000 UTC"}, Hostname:"ci-4081.3.6-a-d7d9773d19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.106 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.107 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.107 [INFO][5144] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-d7d9773d19' Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.123 [INFO][5144] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.129 [INFO][5144] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.150 [INFO][5144] ipam/ipam.go 511: Trying affinity for 192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.153 [INFO][5144] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.158 [INFO][5144] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.158 [INFO][5144] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.128/26 handle="k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.160 [INFO][5144] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.171 [INFO][5144] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.128/26 handle="k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" 
host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.188 [INFO][5144] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.132/26] block=192.168.69.128/26 handle="k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.188 [INFO][5144] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.132/26] handle="k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.188 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:25.231812 containerd[1710]: 2025-09-12 17:43:25.188 [INFO][5144] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.132/26] IPv6=[] ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" HandleID="k8s-pod-network.73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:25.232294 containerd[1710]: 2025-09-12 17:43:25.194 [INFO][5132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z8zzj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34236242-7d33-47ad-b860-36c4a84495b0", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"", Pod:"coredns-7c65d6cfc9-z8zzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5158b762c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:25.232294 containerd[1710]: 2025-09-12 17:43:25.194 [INFO][5132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.132/32] ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z8zzj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:25.232294 containerd[1710]: 2025-09-12 17:43:25.194 [INFO][5132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5158b762c2 ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z8zzj" 
WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:25.232294 containerd[1710]: 2025-09-12 17:43:25.202 [INFO][5132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z8zzj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:25.232294 containerd[1710]: 2025-09-12 17:43:25.205 [INFO][5132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z8zzj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34236242-7d33-47ad-b860-36c4a84495b0", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f", Pod:"coredns-7c65d6cfc9-z8zzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.132/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5158b762c2", MAC:"a6:02:1c:55:62:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:25.232294 containerd[1710]: 2025-09-12 17:43:25.223 [INFO][5132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z8zzj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:43:25.257784 containerd[1710]: time="2025-09-12T17:43:25.257619498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:25.257784 containerd[1710]: time="2025-09-12T17:43:25.257726258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:25.257784 containerd[1710]: time="2025-09-12T17:43:25.257738618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:25.258339 containerd[1710]: time="2025-09-12T17:43:25.258099339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:25.281165 systemd[1]: Started cri-containerd-73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f.scope - libcontainer container 73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f. Sep 12 17:43:25.313738 containerd[1710]: time="2025-09-12T17:43:25.313680980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z8zzj,Uid:34236242-7d33-47ad-b860-36c4a84495b0,Namespace:kube-system,Attempt:1,} returns sandbox id \"73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f\"" Sep 12 17:43:25.319741 containerd[1710]: time="2025-09-12T17:43:25.319515037Z" level=info msg="CreateContainer within sandbox \"73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:43:25.340058 systemd-networkd[1587]: calie2d7971697e: Gained IPv6LL Sep 12 17:43:25.352726 containerd[1710]: time="2025-09-12T17:43:25.352680292Z" level=info msg="CreateContainer within sandbox \"73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39a59ed2aa29215ae5b8aaa90f0b5eee937be41f81d0370fdb9cdc49aa4a77aa\"" Sep 12 17:43:25.354145 containerd[1710]: time="2025-09-12T17:43:25.354122777Z" level=info msg="StartContainer for \"39a59ed2aa29215ae5b8aaa90f0b5eee937be41f81d0370fdb9cdc49aa4a77aa\"" Sep 12 17:43:25.378995 systemd[1]: Started cri-containerd-39a59ed2aa29215ae5b8aaa90f0b5eee937be41f81d0370fdb9cdc49aa4a77aa.scope - libcontainer container 39a59ed2aa29215ae5b8aaa90f0b5eee937be41f81d0370fdb9cdc49aa4a77aa. 
Sep 12 17:43:25.410650 containerd[1710]: time="2025-09-12T17:43:25.410202498Z" level=info msg="StartContainer for \"39a59ed2aa29215ae5b8aaa90f0b5eee937be41f81d0370fdb9cdc49aa4a77aa\" returns successfully" Sep 12 17:43:25.432535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362004403.mount: Deactivated successfully. Sep 12 17:43:25.815720 containerd[1710]: time="2025-09-12T17:43:25.814671621Z" level=info msg="StopPodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\"" Sep 12 17:43:25.816243 containerd[1710]: time="2025-09-12T17:43:25.815975223Z" level=info msg="StopPodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\"" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.897 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.899 [INFO][5298] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" iface="eth0" netns="/var/run/netns/cni-239fc440-6a6b-e0e7-a539-131a793d3bbf" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.899 [INFO][5298] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" iface="eth0" netns="/var/run/netns/cni-239fc440-6a6b-e0e7-a539-131a793d3bbf" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.899 [INFO][5298] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" iface="eth0" netns="/var/run/netns/cni-239fc440-6a6b-e0e7-a539-131a793d3bbf" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.899 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.899 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.927 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.973 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.974 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.988 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.989 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.991 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:25.997566 containerd[1710]: 2025-09-12 17:43:25.992 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:43:26.001013 containerd[1710]: time="2025-09-12T17:43:25.998885324Z" level=info msg="TearDown network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\" successfully" Sep 12 17:43:26.001013 containerd[1710]: time="2025-09-12T17:43:26.000466766Z" level=info msg="StopPodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\" returns successfully" Sep 12 17:43:26.002166 systemd[1]: run-netns-cni\x2d239fc440\x2d6a6b\x2de0e7\x2da539\x2d131a793d3bbf.mount: Deactivated successfully. 
Sep 12 17:43:26.017426 containerd[1710]: time="2025-09-12T17:43:26.016975626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84db5c8b4c-5nsqv,Uid:ba349ebc-a27b-4172-bfad-7c73a43c2008,Namespace:calico-system,Attempt:1,}" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:25.930 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:25.973 [INFO][5295] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" iface="eth0" netns="/var/run/netns/cni-0e7572af-666e-0609-1f1b-397f5b804ed4" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:25.974 [INFO][5295] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" iface="eth0" netns="/var/run/netns/cni-0e7572af-666e-0609-1f1b-397f5b804ed4" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:25.974 [INFO][5295] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" iface="eth0" netns="/var/run/netns/cni-0e7572af-666e-0609-1f1b-397f5b804ed4" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:25.974 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:25.974 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:26.006 [INFO][5318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:26.006 [INFO][5318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:26.006 [INFO][5318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:26.016 [WARNING][5318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:26.016 [INFO][5318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:26.019 [INFO][5318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:26.022832 containerd[1710]: 2025-09-12 17:43:26.020 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:43:26.026727 containerd[1710]: time="2025-09-12T17:43:26.024024354Z" level=info msg="TearDown network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\" successfully" Sep 12 17:43:26.026727 containerd[1710]: time="2025-09-12T17:43:26.024059674Z" level=info msg="StopPodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\" returns successfully" Sep 12 17:43:26.026727 containerd[1710]: time="2025-09-12T17:43:26.026405237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp2wf,Uid:93bea3a5-ac3c-4316-b474-aae40e5f08e7,Namespace:calico-system,Attempt:1,}" Sep 12 17:43:26.025749 systemd[1]: run-netns-cni\x2d0e7572af\x2d666e\x2d0609\x2d1f1b\x2d397f5b804ed4.mount: Deactivated successfully. 
Sep 12 17:43:26.213657 kubelet[3174]: I0912 17:43:26.213089 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-z8zzj" podStartSLOduration=57.213057343 podStartE2EDuration="57.213057343s" podCreationTimestamp="2025-09-12 17:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:26.211777141 +0000 UTC m=+63.503775568" watchObservedRunningTime="2025-09-12 17:43:26.213057343 +0000 UTC m=+63.505055650" Sep 12 17:43:26.258491 systemd-networkd[1587]: calia31df1c47c0: Link UP Sep 12 17:43:26.261632 systemd-networkd[1587]: calia31df1c47c0: Gained carrier Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.129 [INFO][5325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0 calico-kube-controllers-84db5c8b4c- calico-system ba349ebc-a27b-4172-bfad-7c73a43c2008 982 0 2025-09-12 17:42:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84db5c8b4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-a-d7d9773d19 calico-kube-controllers-84db5c8b4c-5nsqv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia31df1c47c0 [] [] }} ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Namespace="calico-system" Pod="calico-kube-controllers-84db5c8b4c-5nsqv" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-" Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.129 [INFO][5325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Namespace="calico-system" Pod="calico-kube-controllers-84db5c8b4c-5nsqv" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.185 [INFO][5350] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" HandleID="k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.185 [INFO][5350] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" HandleID="k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-d7d9773d19", "pod":"calico-kube-controllers-84db5c8b4c-5nsqv", "timestamp":"2025-09-12 17:43:26.185214109 +0000 UTC"}, Hostname:"ci-4081.3.6-a-d7d9773d19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.185 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.185 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.185 [INFO][5350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-d7d9773d19'
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.210 [INFO][5350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.219 [INFO][5350] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.224 [INFO][5350] ipam/ipam.go 511: Trying affinity for 192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.227 [INFO][5350] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.230 [INFO][5350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.230 [INFO][5350] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.128/26 handle="k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.231 [INFO][5350] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.238 [INFO][5350] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.128/26 handle="k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.250 [INFO][5350] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.133/26] block=192.168.69.128/26 handle="k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.250 [INFO][5350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.133/26] handle="k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.250 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:43:26.285580 containerd[1710]: 2025-09-12 17:43:26.250 [INFO][5350] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.133/26] IPv6=[] ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" HandleID="k8s-pod-network.4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0"
Sep 12 17:43:26.286475 containerd[1710]: 2025-09-12 17:43:26.253 [INFO][5325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Namespace="calico-system" Pod="calico-kube-controllers-84db5c8b4c-5nsqv" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0", GenerateName:"calico-kube-controllers-84db5c8b4c-", Namespace:"calico-system", SelfLink:"", UID:"ba349ebc-a27b-4172-bfad-7c73a43c2008", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84db5c8b4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"", Pod:"calico-kube-controllers-84db5c8b4c-5nsqv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia31df1c47c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:26.286475 containerd[1710]: 2025-09-12 17:43:26.253 [INFO][5325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.133/32] ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Namespace="calico-system" Pod="calico-kube-controllers-84db5c8b4c-5nsqv" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0"
Sep 12 17:43:26.286475 containerd[1710]: 2025-09-12 17:43:26.253 [INFO][5325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia31df1c47c0 ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Namespace="calico-system" Pod="calico-kube-controllers-84db5c8b4c-5nsqv" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0"
Sep 12 17:43:26.286475 containerd[1710]: 2025-09-12 17:43:26.260 [INFO][5325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Namespace="calico-system" Pod="calico-kube-controllers-84db5c8b4c-5nsqv" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0"
Sep 12 17:43:26.286475 containerd[1710]: 2025-09-12 17:43:26.263 [INFO][5325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Namespace="calico-system" Pod="calico-kube-controllers-84db5c8b4c-5nsqv" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0", GenerateName:"calico-kube-controllers-84db5c8b4c-", Namespace:"calico-system", SelfLink:"", UID:"ba349ebc-a27b-4172-bfad-7c73a43c2008", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84db5c8b4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563", Pod:"calico-kube-controllers-84db5c8b4c-5nsqv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia31df1c47c0", MAC:"7a:d0:de:89:29:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:26.286475 containerd[1710]: 2025-09-12 17:43:26.281 [INFO][5325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563" Namespace="calico-system" Pod="calico-kube-controllers-84db5c8b4c-5nsqv" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0"
Sep 12 17:43:26.315030 containerd[1710]: time="2025-09-12T17:43:26.314400586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:43:26.315030 containerd[1710]: time="2025-09-12T17:43:26.314514466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:43:26.315030 containerd[1710]: time="2025-09-12T17:43:26.314559866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:43:26.315030 containerd[1710]: time="2025-09-12T17:43:26.314823386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:43:26.343979 systemd[1]: Started cri-containerd-4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563.scope - libcontainer container 4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563.
Sep 12 17:43:26.372324 systemd-networkd[1587]: cali3434b6d2853: Link UP
Sep 12 17:43:26.373282 systemd-networkd[1587]: cali3434b6d2853: Gained carrier
Sep 12 17:43:26.401384 containerd[1710]: time="2025-09-12T17:43:26.401318651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84db5c8b4c-5nsqv,Uid:ba349ebc-a27b-4172-bfad-7c73a43c2008,Namespace:calico-system,Attempt:1,} returns sandbox id \"4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563\""
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.150 [INFO][5333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0 csi-node-driver- calico-system 93bea3a5-ac3c-4316-b474-aae40e5f08e7 983 0 2025-09-12 17:42:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-a-d7d9773d19 csi-node-driver-dp2wf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3434b6d2853 [] [] }} ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Namespace="calico-system" Pod="csi-node-driver-dp2wf" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.150 [INFO][5333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Namespace="calico-system" Pod="csi-node-driver-dp2wf" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.206 [INFO][5355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" HandleID="k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.209 [INFO][5355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" HandleID="k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3930), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-d7d9773d19", "pod":"csi-node-driver-dp2wf", "timestamp":"2025-09-12 17:43:26.206729855 +0000 UTC"}, Hostname:"ci-4081.3.6-a-d7d9773d19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.210 [INFO][5355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.250 [INFO][5355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.251 [INFO][5355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-d7d9773d19'
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.309 [INFO][5355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.320 [INFO][5355] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.330 [INFO][5355] ipam/ipam.go 511: Trying affinity for 192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.336 [INFO][5355] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.342 [INFO][5355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.343 [INFO][5355] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.128/26 handle="k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.345 [INFO][5355] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.355 [INFO][5355] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.128/26 handle="k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.366 [INFO][5355] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.134/26] block=192.168.69.128/26 handle="k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.366 [INFO][5355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.134/26] handle="k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.366 [INFO][5355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:43:26.401872 containerd[1710]: 2025-09-12 17:43:26.366 [INFO][5355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.134/26] IPv6=[] ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" HandleID="k8s-pod-network.2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0"
Sep 12 17:43:26.403577 containerd[1710]: 2025-09-12 17:43:26.369 [INFO][5333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Namespace="calico-system" Pod="csi-node-driver-dp2wf" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"93bea3a5-ac3c-4316-b474-aae40e5f08e7", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"", Pod:"csi-node-driver-dp2wf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3434b6d2853", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:26.403577 containerd[1710]: 2025-09-12 17:43:26.369 [INFO][5333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.134/32] ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Namespace="calico-system" Pod="csi-node-driver-dp2wf" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0"
Sep 12 17:43:26.403577 containerd[1710]: 2025-09-12 17:43:26.369 [INFO][5333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3434b6d2853 ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Namespace="calico-system" Pod="csi-node-driver-dp2wf" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0"
Sep 12 17:43:26.403577 containerd[1710]: 2025-09-12 17:43:26.374 [INFO][5333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Namespace="calico-system" Pod="csi-node-driver-dp2wf" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0"
Sep 12 17:43:26.403577 containerd[1710]: 2025-09-12 17:43:26.375 [INFO][5333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Namespace="calico-system" Pod="csi-node-driver-dp2wf" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"93bea3a5-ac3c-4316-b474-aae40e5f08e7", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9", Pod:"csi-node-driver-dp2wf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3434b6d2853", MAC:"2a:ec:5c:0a:5d:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:26.403577 containerd[1710]: 2025-09-12 17:43:26.396 [INFO][5333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9" Namespace="calico-system" Pod="csi-node-driver-dp2wf" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0"
Sep 12 17:43:26.435626 containerd[1710]: time="2025-09-12T17:43:26.435014731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:43:26.437337 containerd[1710]: time="2025-09-12T17:43:26.437224374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:43:26.437337 containerd[1710]: time="2025-09-12T17:43:26.437255374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:43:26.437726 containerd[1710]: time="2025-09-12T17:43:26.437660255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:43:26.462987 systemd[1]: Started cri-containerd-2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9.scope - libcontainer container 2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9.
Sep 12 17:43:26.487058 containerd[1710]: time="2025-09-12T17:43:26.487014394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp2wf,Uid:93bea3a5-ac3c-4316-b474-aae40e5f08e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9\""
Sep 12 17:43:26.684189 systemd-networkd[1587]: calif5158b762c2: Gained IPv6LL
Sep 12 17:43:27.772625 systemd-networkd[1587]: cali3434b6d2853: Gained IPv6LL
Sep 12 17:43:27.815856 containerd[1710]: time="2025-09-12T17:43:27.815758162Z" level=info msg="StopPodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\""
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.905 [INFO][5482] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.905 [INFO][5482] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" iface="eth0" netns="/var/run/netns/cni-180c0918-d0f8-3381-c11d-3b3385021b4a"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.905 [INFO][5482] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" iface="eth0" netns="/var/run/netns/cni-180c0918-d0f8-3381-c11d-3b3385021b4a"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.906 [INFO][5482] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" iface="eth0" netns="/var/run/netns/cni-180c0918-d0f8-3381-c11d-3b3385021b4a"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.906 [INFO][5482] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.906 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.937 [INFO][5491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.937 [INFO][5491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.937 [INFO][5491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.946 [WARNING][5491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.947 [INFO][5491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.949 [INFO][5491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:43:27.952371 containerd[1710]: 2025-09-12 17:43:27.950 [INFO][5482] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba"
Sep 12 17:43:27.954578 containerd[1710]: time="2025-09-12T17:43:27.952508487Z" level=info msg="TearDown network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\" successfully"
Sep 12 17:43:27.954578 containerd[1710]: time="2025-09-12T17:43:27.952531487Z" level=info msg="StopPodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\" returns successfully"
Sep 12 17:43:27.954578 containerd[1710]: time="2025-09-12T17:43:27.953180488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-fggbj,Uid:b2903050-6e84-45bd-9086-8adaeb871d43,Namespace:calico-system,Attempt:1,}"
Sep 12 17:43:27.956174 systemd[1]: run-netns-cni\x2d180c0918\x2dd0f8\x2d3381\x2dc11d\x2d3b3385021b4a.mount: Deactivated successfully.
Sep 12 17:43:27.964103 systemd-networkd[1587]: calia31df1c47c0: Gained IPv6LL
Sep 12 17:43:28.060107 containerd[1710]: time="2025-09-12T17:43:28.059978457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:28.065679 containerd[1710]: time="2025-09-12T17:43:28.065626744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 12 17:43:28.069965 containerd[1710]: time="2025-09-12T17:43:28.069922909Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:28.077123 containerd[1710]: time="2025-09-12T17:43:28.077062598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:43:28.077930 containerd[1710]: time="2025-09-12T17:43:28.077511678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.002821908s"
Sep 12 17:43:28.078146 containerd[1710]: time="2025-09-12T17:43:28.077546958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 17:43:28.082347 containerd[1710]: time="2025-09-12T17:43:28.082294004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 17:43:28.085587 containerd[1710]: time="2025-09-12T17:43:28.085524568Z" level=info msg="CreateContainer within sandbox \"2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 17:43:28.140127 containerd[1710]: time="2025-09-12T17:43:28.140078154Z" level=info msg="CreateContainer within sandbox \"2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bc0865a7c573a371246b08945ed5d85278120f7266beb15142140824eca5308a\""
Sep 12 17:43:28.142041 containerd[1710]: time="2025-09-12T17:43:28.141294596Z" level=info msg="StartContainer for \"bc0865a7c573a371246b08945ed5d85278120f7266beb15142140824eca5308a\""
Sep 12 17:43:28.186573 systemd[1]: Started cri-containerd-bc0865a7c573a371246b08945ed5d85278120f7266beb15142140824eca5308a.scope - libcontainer container bc0865a7c573a371246b08945ed5d85278120f7266beb15142140824eca5308a.
Sep 12 17:43:28.226063 systemd-networkd[1587]: cali62f6ab35743: Link UP
Sep 12 17:43:28.226844 systemd-networkd[1587]: cali62f6ab35743: Gained carrier
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.099 [INFO][5501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0 goldmane-7988f88666- calico-system b2903050-6e84-45bd-9086-8adaeb871d43 1004 0 2025-09-12 17:42:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-a-d7d9773d19 goldmane-7988f88666-fggbj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali62f6ab35743 [] [] }} ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Namespace="calico-system" Pod="goldmane-7988f88666-fggbj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.100 [INFO][5501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Namespace="calico-system" Pod="goldmane-7988f88666-fggbj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.138 [INFO][5514] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" HandleID="k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.138 [INFO][5514] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" HandleID="k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-d7d9773d19", "pod":"goldmane-7988f88666-fggbj", "timestamp":"2025-09-12 17:43:28.138731272 +0000 UTC"}, Hostname:"ci-4081.3.6-a-d7d9773d19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.138 [INFO][5514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.138 [INFO][5514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.139 [INFO][5514] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-d7d9773d19'
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.152 [INFO][5514] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.159 [INFO][5514] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.180 [INFO][5514] ipam/ipam.go 511: Trying affinity for 192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.186 [INFO][5514] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.191 [INFO][5514] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.191 [INFO][5514] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.128/26 handle="k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.195 [INFO][5514] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.202 [INFO][5514] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.128/26 handle="k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.216 [INFO][5514] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.135/26] block=192.168.69.128/26 handle="k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.216 [INFO][5514] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.135/26] handle="k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" host="ci-4081.3.6-a-d7d9773d19"
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.216 [INFO][5514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:43:28.252238 containerd[1710]: 2025-09-12 17:43:28.216 [INFO][5514] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.135/26] IPv6=[] ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" HandleID="k8s-pod-network.d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:28.254269 containerd[1710]: 2025-09-12 17:43:28.220 [INFO][5501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Namespace="calico-system" Pod="goldmane-7988f88666-fggbj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b2903050-6e84-45bd-9086-8adaeb871d43", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"", Pod:"goldmane-7988f88666-fggbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62f6ab35743", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:28.254269 containerd[1710]: 2025-09-12 17:43:28.220 [INFO][5501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.135/32] ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Namespace="calico-system" Pod="goldmane-7988f88666-fggbj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:28.254269 containerd[1710]: 2025-09-12 17:43:28.220 [INFO][5501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62f6ab35743 ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Namespace="calico-system" Pod="goldmane-7988f88666-fggbj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:28.254269 containerd[1710]: 2025-09-12 17:43:28.227 [INFO][5501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Namespace="calico-system" Pod="goldmane-7988f88666-fggbj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0"
Sep 12 17:43:28.254269 containerd[1710]: 2025-09-12 17:43:28.229 [INFO][5501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Namespace="calico-system" Pod="goldmane-7988f88666-fggbj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b2903050-6e84-45bd-9086-8adaeb871d43", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432", Pod:"goldmane-7988f88666-fggbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62f6ab35743", MAC:"aa:17:d4:ea:aa:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:43:28.254269 containerd[1710]: 2025-09-12 17:43:28.249 [INFO][5501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore
ContainerID="d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432" Namespace="calico-system" Pod="goldmane-7988f88666-fggbj" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" Sep 12 17:43:28.446989 containerd[1710]: time="2025-09-12T17:43:28.446851485Z" level=info msg="StartContainer for \"bc0865a7c573a371246b08945ed5d85278120f7266beb15142140824eca5308a\" returns successfully" Sep 12 17:43:28.471703 containerd[1710]: time="2025-09-12T17:43:28.468619951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:28.471703 containerd[1710]: time="2025-09-12T17:43:28.468683512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:28.471703 containerd[1710]: time="2025-09-12T17:43:28.468699512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:28.471703 containerd[1710]: time="2025-09-12T17:43:28.468785032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:28.501974 systemd[1]: Started cri-containerd-d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432.scope - libcontainer container d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432. 
Sep 12 17:43:28.543626 containerd[1710]: time="2025-09-12T17:43:28.543576162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-fggbj,Uid:b2903050-6e84-45bd-9086-8adaeb871d43,Namespace:calico-system,Attempt:1,} returns sandbox id \"d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432\"" Sep 12 17:43:29.308060 systemd-networkd[1587]: cali62f6ab35743: Gained IPv6LL Sep 12 17:43:29.815271 containerd[1710]: time="2025-09-12T17:43:29.815206340Z" level=info msg="StopPodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\"" Sep 12 17:43:29.885833 kubelet[3174]: I0912 17:43:29.884900 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77d7968c8d-4l6tx" podStartSLOduration=44.412232958 podStartE2EDuration="48.884879745s" podCreationTimestamp="2025-09-12 17:42:41 +0000 UTC" firstStartedPulling="2025-09-12 17:43:23.607244254 +0000 UTC m=+60.899242561" lastFinishedPulling="2025-09-12 17:43:28.079891041 +0000 UTC m=+65.371889348" observedRunningTime="2025-09-12 17:43:29.205670243 +0000 UTC m=+66.497668550" watchObservedRunningTime="2025-09-12 17:43:29.884879745 +0000 UTC m=+67.176878052" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.886 [INFO][5626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.886 [INFO][5626] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" iface="eth0" netns="/var/run/netns/cni-2a3fec63-c2e0-652a-8c95-4807f6502dfc" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.888 [INFO][5626] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" iface="eth0" netns="/var/run/netns/cni-2a3fec63-c2e0-652a-8c95-4807f6502dfc" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.889 [INFO][5626] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" iface="eth0" netns="/var/run/netns/cni-2a3fec63-c2e0-652a-8c95-4807f6502dfc" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.889 [INFO][5626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.889 [INFO][5626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.925 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.925 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.925 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.937 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.937 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.940 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:29.943556 containerd[1710]: 2025-09-12 17:43:29.942 [INFO][5626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:43:29.944438 containerd[1710]: time="2025-09-12T17:43:29.943728256Z" level=info msg="TearDown network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\" successfully" Sep 12 17:43:29.944438 containerd[1710]: time="2025-09-12T17:43:29.943765376Z" level=info msg="StopPodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\" returns successfully" Sep 12 17:43:29.946936 systemd[1]: run-netns-cni\x2d2a3fec63\x2dc2e0\x2d652a\x2d8c95\x2d4807f6502dfc.mount: Deactivated successfully. 
Sep 12 17:43:29.949006 containerd[1710]: time="2025-09-12T17:43:29.947267460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d7968c8d-pt2kx,Uid:485bd820-ef8d-438d-9780-7a5190802f4b,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:43:30.124048 systemd-networkd[1587]: cali5606342071b: Link UP Sep 12 17:43:30.124234 systemd-networkd[1587]: cali5606342071b: Gained carrier Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.040 [INFO][5639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0 calico-apiserver-77d7968c8d- calico-apiserver 485bd820-ef8d-438d-9780-7a5190802f4b 1019 0 2025-09-12 17:42:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77d7968c8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-d7d9773d19 calico-apiserver-77d7968c8d-pt2kx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5606342071b [] [] }} ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-pt2kx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.040 [INFO][5639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-pt2kx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.066 [INFO][5653] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" HandleID="k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.067 [INFO][5653] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" HandleID="k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-d7d9773d19", "pod":"calico-apiserver-77d7968c8d-pt2kx", "timestamp":"2025-09-12 17:43:30.066875645 +0000 UTC"}, Hostname:"ci-4081.3.6-a-d7d9773d19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.067 [INFO][5653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.067 [INFO][5653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.067 [INFO][5653] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-d7d9773d19' Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.077 [INFO][5653] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.081 [INFO][5653] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.086 [INFO][5653] ipam/ipam.go 511: Trying affinity for 192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.088 [INFO][5653] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.091 [INFO][5653] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.128/26 host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.091 [INFO][5653] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.128/26 handle="k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.093 [INFO][5653] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.102 [INFO][5653] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.128/26 handle="k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.117 [INFO][5653] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.136/26] block=192.168.69.128/26 handle="k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.118 [INFO][5653] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.136/26] handle="k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" host="ci-4081.3.6-a-d7d9773d19" Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.118 [INFO][5653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:30.146253 containerd[1710]: 2025-09-12 17:43:30.118 [INFO][5653] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.136/26] IPv6=[] ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" HandleID="k8s-pod-network.2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:30.146896 containerd[1710]: 2025-09-12 17:43:30.120 [INFO][5639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-pt2kx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0", GenerateName:"calico-apiserver-77d7968c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"485bd820-ef8d-438d-9780-7a5190802f4b", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d7968c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"", Pod:"calico-apiserver-77d7968c8d-pt2kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5606342071b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:30.146896 containerd[1710]: 2025-09-12 17:43:30.120 [INFO][5639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.136/32] ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-pt2kx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:30.146896 containerd[1710]: 2025-09-12 17:43:30.120 [INFO][5639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5606342071b ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-pt2kx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:30.146896 containerd[1710]: 2025-09-12 17:43:30.123 [INFO][5639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Namespace="calico-apiserver" 
Pod="calico-apiserver-77d7968c8d-pt2kx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:30.146896 containerd[1710]: 2025-09-12 17:43:30.123 [INFO][5639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-pt2kx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0", GenerateName:"calico-apiserver-77d7968c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"485bd820-ef8d-438d-9780-7a5190802f4b", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d7968c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb", Pod:"calico-apiserver-77d7968c8d-pt2kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali5606342071b", MAC:"12:a9:ea:87:b6:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:30.146896 containerd[1710]: 2025-09-12 17:43:30.140 [INFO][5639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb" Namespace="calico-apiserver" Pod="calico-apiserver-77d7968c8d-pt2kx" WorkloadEndpoint="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:43:30.167396 containerd[1710]: time="2025-09-12T17:43:30.166680753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:30.167396 containerd[1710]: time="2025-09-12T17:43:30.167368675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:30.167825 containerd[1710]: time="2025-09-12T17:43:30.167445515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:30.167825 containerd[1710]: time="2025-09-12T17:43:30.167602876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:30.194354 systemd[1]: run-containerd-runc-k8s.io-2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb-runc.wuK3t6.mount: Deactivated successfully. Sep 12 17:43:30.203972 systemd[1]: Started cri-containerd-2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb.scope - libcontainer container 2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb. 
Sep 12 17:43:30.247206 containerd[1710]: time="2025-09-12T17:43:30.247143270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d7968c8d-pt2kx,Uid:485bd820-ef8d-438d-9780-7a5190802f4b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb\"" Sep 12 17:43:30.252076 containerd[1710]: time="2025-09-12T17:43:30.252011164Z" level=info msg="CreateContainer within sandbox \"2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:43:30.291788 containerd[1710]: time="2025-09-12T17:43:30.291698041Z" level=info msg="CreateContainer within sandbox \"2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a8687ed4e80620d5a20f09a0d61ea3a0a937192f63a41b063a3f7df320c38fb\"" Sep 12 17:43:30.292705 containerd[1710]: time="2025-09-12T17:43:30.292668444Z" level=info msg="StartContainer for \"1a8687ed4e80620d5a20f09a0d61ea3a0a937192f63a41b063a3f7df320c38fb\"" Sep 12 17:43:30.322965 systemd[1]: Started cri-containerd-1a8687ed4e80620d5a20f09a0d61ea3a0a937192f63a41b063a3f7df320c38fb.scope - libcontainer container 1a8687ed4e80620d5a20f09a0d61ea3a0a937192f63a41b063a3f7df320c38fb. 
Sep 12 17:43:30.374710 containerd[1710]: time="2025-09-12T17:43:30.374653645Z" level=info msg="StartContainer for \"1a8687ed4e80620d5a20f09a0d61ea3a0a937192f63a41b063a3f7df320c38fb\" returns successfully" Sep 12 17:43:31.207101 kubelet[3174]: I0912 17:43:31.206991 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:31.230388 kubelet[3174]: I0912 17:43:31.230323 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77d7968c8d-pt2kx" podStartSLOduration=50.230303726 podStartE2EDuration="50.230303726s" podCreationTimestamp="2025-09-12 17:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:31.230276286 +0000 UTC m=+68.522274593" watchObservedRunningTime="2025-09-12 17:43:31.230303726 +0000 UTC m=+68.522301993" Sep 12 17:43:31.739994 systemd-networkd[1587]: cali5606342071b: Gained IPv6LL Sep 12 17:43:31.811567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496163420.mount: Deactivated successfully. 
Sep 12 17:43:31.912887 containerd[1710]: time="2025-09-12T17:43:31.912827096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:31.915714 containerd[1710]: time="2025-09-12T17:43:31.915506544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:43:31.921173 containerd[1710]: time="2025-09-12T17:43:31.921112321Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:31.927904 containerd[1710]: time="2025-09-12T17:43:31.927143218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:31.927904 containerd[1710]: time="2025-09-12T17:43:31.927685460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.844581935s" Sep 12 17:43:31.927904 containerd[1710]: time="2025-09-12T17:43:31.927714900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:43:31.931264 containerd[1710]: time="2025-09-12T17:43:31.931223751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:43:31.932509 containerd[1710]: time="2025-09-12T17:43:31.932469514Z" level=info msg="CreateContainer within sandbox 
\"45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:43:31.998751 containerd[1710]: time="2025-09-12T17:43:31.998562629Z" level=info msg="CreateContainer within sandbox \"45dddb650a5f2aac9ff86e5a7f7b2239ea16b1651f7c9ed57ddf3dbcd28ed8a2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b7d37d18b5bd1c573b4988bfa3ee508bee30583270a15458dff15fee994f8166\"" Sep 12 17:43:32.001191 containerd[1710]: time="2025-09-12T17:43:32.000668795Z" level=info msg="StartContainer for \"b7d37d18b5bd1c573b4988bfa3ee508bee30583270a15458dff15fee994f8166\"" Sep 12 17:43:32.047967 systemd[1]: Started cri-containerd-b7d37d18b5bd1c573b4988bfa3ee508bee30583270a15458dff15fee994f8166.scope - libcontainer container b7d37d18b5bd1c573b4988bfa3ee508bee30583270a15458dff15fee994f8166. Sep 12 17:43:32.105152 containerd[1710]: time="2025-09-12T17:43:32.104831142Z" level=info msg="StartContainer for \"b7d37d18b5bd1c573b4988bfa3ee508bee30583270a15458dff15fee994f8166\" returns successfully" Sep 12 17:43:32.213271 kubelet[3174]: I0912 17:43:32.212506 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:32.232920 kubelet[3174]: I0912 17:43:32.232853 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77df4565f6-vv6s9" podStartSLOduration=1.533443299 podStartE2EDuration="10.232833519s" podCreationTimestamp="2025-09-12 17:43:22 +0000 UTC" firstStartedPulling="2025-09-12 17:43:23.229624644 +0000 UTC m=+60.521622951" lastFinishedPulling="2025-09-12 17:43:31.929014864 +0000 UTC m=+69.221013171" observedRunningTime="2025-09-12 17:43:32.232711919 +0000 UTC m=+69.524710226" watchObservedRunningTime="2025-09-12 17:43:32.232833519 +0000 UTC m=+69.524831826" Sep 12 17:43:34.893838 containerd[1710]: time="2025-09-12T17:43:34.893604003Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:34.897472 containerd[1710]: time="2025-09-12T17:43:34.897430612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:43:34.901690 containerd[1710]: time="2025-09-12T17:43:34.901654822Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:34.906944 containerd[1710]: time="2025-09-12T17:43:34.906892315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:34.907868 containerd[1710]: time="2025-09-12T17:43:34.907256116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.975990525s" Sep 12 17:43:34.907868 containerd[1710]: time="2025-09-12T17:43:34.907293156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:43:34.910560 containerd[1710]: time="2025-09-12T17:43:34.910531084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:43:34.926240 containerd[1710]: time="2025-09-12T17:43:34.926060401Z" level=info msg="CreateContainer within sandbox \"4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:43:34.968158 containerd[1710]: time="2025-09-12T17:43:34.968106862Z" level=info msg="CreateContainer within sandbox \"4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cc053c66acb9dab112239205d2c64c4499e1d0dc8e770693baf3b610977a1538\"" Sep 12 17:43:34.969615 containerd[1710]: time="2025-09-12T17:43:34.968884504Z" level=info msg="StartContainer for \"cc053c66acb9dab112239205d2c64c4499e1d0dc8e770693baf3b610977a1538\"" Sep 12 17:43:35.027966 systemd[1]: Started cri-containerd-cc053c66acb9dab112239205d2c64c4499e1d0dc8e770693baf3b610977a1538.scope - libcontainer container cc053c66acb9dab112239205d2c64c4499e1d0dc8e770693baf3b610977a1538. Sep 12 17:43:35.068914 containerd[1710]: time="2025-09-12T17:43:35.068854745Z" level=info msg="StartContainer for \"cc053c66acb9dab112239205d2c64c4499e1d0dc8e770693baf3b610977a1538\" returns successfully" Sep 12 17:43:35.281516 kubelet[3174]: I0912 17:43:35.281351 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84db5c8b4c-5nsqv" podStartSLOduration=39.778101075 podStartE2EDuration="48.281331978s" podCreationTimestamp="2025-09-12 17:42:47 +0000 UTC" firstStartedPulling="2025-09-12 17:43:26.404912095 +0000 UTC m=+63.696910402" lastFinishedPulling="2025-09-12 17:43:34.908143038 +0000 UTC m=+72.200141305" observedRunningTime="2025-09-12 17:43:35.244338009 +0000 UTC m=+72.536336316" watchObservedRunningTime="2025-09-12 17:43:35.281331978 +0000 UTC m=+72.573330285" Sep 12 17:43:36.402335 containerd[1710]: time="2025-09-12T17:43:36.402281681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:36.416833 containerd[1710]: time="2025-09-12T17:43:36.416584996Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:43:36.424565 containerd[1710]: time="2025-09-12T17:43:36.424511815Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:36.429869 containerd[1710]: time="2025-09-12T17:43:36.429821228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:36.430835 containerd[1710]: time="2025-09-12T17:43:36.430384269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.519817025s" Sep 12 17:43:36.430835 containerd[1710]: time="2025-09-12T17:43:36.430417069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:43:36.432455 containerd[1710]: time="2025-09-12T17:43:36.431970713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:43:36.434156 containerd[1710]: time="2025-09-12T17:43:36.434066638Z" level=info msg="CreateContainer within sandbox \"2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:43:36.489495 containerd[1710]: time="2025-09-12T17:43:36.489448691Z" level=info msg="CreateContainer within sandbox \"2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"5db5decad08e1f53bcdbeb5c60f868bb52ca3bb221d8ae0882b7c2b6a9361310\"" Sep 12 17:43:36.491855 containerd[1710]: time="2025-09-12T17:43:36.490434014Z" level=info msg="StartContainer for \"5db5decad08e1f53bcdbeb5c60f868bb52ca3bb221d8ae0882b7c2b6a9361310\"" Sep 12 17:43:36.531020 systemd[1]: Started cri-containerd-5db5decad08e1f53bcdbeb5c60f868bb52ca3bb221d8ae0882b7c2b6a9361310.scope - libcontainer container 5db5decad08e1f53bcdbeb5c60f868bb52ca3bb221d8ae0882b7c2b6a9361310. Sep 12 17:43:36.569222 containerd[1710]: time="2025-09-12T17:43:36.569166044Z" level=info msg="StartContainer for \"5db5decad08e1f53bcdbeb5c60f868bb52ca3bb221d8ae0882b7c2b6a9361310\" returns successfully" Sep 12 17:43:36.683198 kubelet[3174]: I0912 17:43:36.682903 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:39.017826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395588744.mount: Deactivated successfully. Sep 12 17:43:39.793447 containerd[1710]: time="2025-09-12T17:43:39.793190899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.798604 containerd[1710]: time="2025-09-12T17:43:39.798447312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 17:43:39.804650 containerd[1710]: time="2025-09-12T17:43:39.804594887Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.809889 containerd[1710]: time="2025-09-12T17:43:39.809820099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.810820 containerd[1710]: time="2025-09-12T17:43:39.810608861Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.378601668s" Sep 12 17:43:39.810820 containerd[1710]: time="2025-09-12T17:43:39.810644301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 17:43:39.812670 containerd[1710]: time="2025-09-12T17:43:39.812635626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:43:39.813579 containerd[1710]: time="2025-09-12T17:43:39.813546108Z" level=info msg="CreateContainer within sandbox \"d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:43:39.852969 containerd[1710]: time="2025-09-12T17:43:39.852771483Z" level=info msg="CreateContainer within sandbox \"d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"98737d8d1b7fa045b6854547902a735708ce161b7e0713fc371d8f5bddb51ad0\"" Sep 12 17:43:39.854658 containerd[1710]: time="2025-09-12T17:43:39.853441684Z" level=info msg="StartContainer for \"98737d8d1b7fa045b6854547902a735708ce161b7e0713fc371d8f5bddb51ad0\"" Sep 12 17:43:39.891196 systemd[1]: Started cri-containerd-98737d8d1b7fa045b6854547902a735708ce161b7e0713fc371d8f5bddb51ad0.scope - libcontainer container 98737d8d1b7fa045b6854547902a735708ce161b7e0713fc371d8f5bddb51ad0. 
Sep 12 17:43:39.928523 containerd[1710]: time="2025-09-12T17:43:39.928473905Z" level=info msg="StartContainer for \"98737d8d1b7fa045b6854547902a735708ce161b7e0713fc371d8f5bddb51ad0\" returns successfully" Sep 12 17:43:41.263047 systemd[1]: run-containerd-runc-k8s.io-98737d8d1b7fa045b6854547902a735708ce161b7e0713fc371d8f5bddb51ad0-runc.ZBLvwd.mount: Deactivated successfully. Sep 12 17:43:41.512076 containerd[1710]: time="2025-09-12T17:43:41.512021444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:41.515643 containerd[1710]: time="2025-09-12T17:43:41.515412293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 17:43:41.518824 containerd[1710]: time="2025-09-12T17:43:41.518475620Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:41.529635 containerd[1710]: time="2025-09-12T17:43:41.529600007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:41.530383 containerd[1710]: time="2025-09-12T17:43:41.530345089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.717669703s" Sep 12 17:43:41.530383 containerd[1710]: time="2025-09-12T17:43:41.530381969Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 17:43:41.534214 containerd[1710]: time="2025-09-12T17:43:41.534139258Z" level=info msg="CreateContainer within sandbox \"2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:43:41.580556 containerd[1710]: time="2025-09-12T17:43:41.580428569Z" level=info msg="CreateContainer within sandbox \"2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e70f616cba8cc8ff2db5c50e550f2168d4e1200e7d841ff1fbeaae2fe2f56ea\"" Sep 12 17:43:41.581301 containerd[1710]: time="2025-09-12T17:43:41.581092211Z" level=info msg="StartContainer for \"8e70f616cba8cc8ff2db5c50e550f2168d4e1200e7d841ff1fbeaae2fe2f56ea\"" Sep 12 17:43:41.614992 systemd[1]: Started cri-containerd-8e70f616cba8cc8ff2db5c50e550f2168d4e1200e7d841ff1fbeaae2fe2f56ea.scope - libcontainer container 8e70f616cba8cc8ff2db5c50e550f2168d4e1200e7d841ff1fbeaae2fe2f56ea. 
Sep 12 17:43:41.652228 containerd[1710]: time="2025-09-12T17:43:41.652002244Z" level=info msg="StartContainer for \"8e70f616cba8cc8ff2db5c50e550f2168d4e1200e7d841ff1fbeaae2fe2f56ea\" returns successfully" Sep 12 17:43:41.947888 kubelet[3174]: I0912 17:43:41.947752 3174 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:43:41.947888 kubelet[3174]: I0912 17:43:41.947812 3174 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:43:42.264981 kubelet[3174]: I0912 17:43:42.264821 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-fggbj" podStartSLOduration=44.998128609 podStartE2EDuration="56.264786509s" podCreationTimestamp="2025-09-12 17:42:46 +0000 UTC" firstStartedPulling="2025-09-12 17:43:28.545205884 +0000 UTC m=+65.837204151" lastFinishedPulling="2025-09-12 17:43:39.811863784 +0000 UTC m=+77.103862051" observedRunningTime="2025-09-12 17:43:40.263605994 +0000 UTC m=+77.555604341" watchObservedRunningTime="2025-09-12 17:43:42.264786509 +0000 UTC m=+79.556784816" Sep 12 17:43:42.266334 kubelet[3174]: I0912 17:43:42.265764 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dp2wf" podStartSLOduration=40.222900458 podStartE2EDuration="55.265751073s" podCreationTimestamp="2025-09-12 17:42:47 +0000 UTC" firstStartedPulling="2025-09-12 17:43:26.488276236 +0000 UTC m=+63.780274543" lastFinishedPulling="2025-09-12 17:43:41.531126851 +0000 UTC m=+78.823125158" observedRunningTime="2025-09-12 17:43:42.263638064 +0000 UTC m=+79.555636371" watchObservedRunningTime="2025-09-12 17:43:42.265751073 +0000 UTC m=+79.557749340" Sep 12 17:44:04.458839 update_engine[1692]: I20250912 17:44:04.458580 1692 prefs.cc:52] 
certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 17:44:04.458839 update_engine[1692]: I20250912 17:44:04.458631 1692 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 17:44:04.459685 update_engine[1692]: I20250912 17:44:04.459367 1692 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 17:44:04.460481 update_engine[1692]: I20250912 17:44:04.460134 1692 omaha_request_params.cc:62] Current group set to lts Sep 12 17:44:04.460481 update_engine[1692]: I20250912 17:44:04.460213 1692 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 17:44:04.460481 update_engine[1692]: I20250912 17:44:04.460222 1692 update_attempter.cc:643] Scheduling an action processor start. Sep 12 17:44:04.460481 update_engine[1692]: I20250912 17:44:04.460239 1692 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:44:04.461663 update_engine[1692]: I20250912 17:44:04.461581 1692 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 17:44:04.463289 update_engine[1692]: I20250912 17:44:04.462307 1692 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:44:04.463289 update_engine[1692]: I20250912 17:44:04.462324 1692 omaha_request_action.cc:272] Request: Sep 12 17:44:04.463289 update_engine[1692]: Sep 12 17:44:04.463289 update_engine[1692]: Sep 12 17:44:04.463289 update_engine[1692]: Sep 12 17:44:04.463289 update_engine[1692]: Sep 12 17:44:04.463289 update_engine[1692]: Sep 12 17:44:04.463289 update_engine[1692]: Sep 12 17:44:04.463289 update_engine[1692]: Sep 12 17:44:04.463289 update_engine[1692]: Sep 12 17:44:04.463289 update_engine[1692]: I20250912 17:44:04.462330 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:44:04.467031 update_engine[1692]: I20250912 17:44:04.465636 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:44:04.467031 
update_engine[1692]: I20250912 17:44:04.465986 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:44:04.467203 locksmithd[1774]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 17:44:04.525011 update_engine[1692]: E20250912 17:44:04.524947 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:44:04.525153 update_engine[1692]: I20250912 17:44:04.525059 1692 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 17:44:14.464884 update_engine[1692]: I20250912 17:44:14.462913 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:44:14.464884 update_engine[1692]: I20250912 17:44:14.463150 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:44:14.464884 update_engine[1692]: I20250912 17:44:14.463382 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:44:14.572138 update_engine[1692]: E20250912 17:44:14.571980 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:44:14.572138 update_engine[1692]: I20250912 17:44:14.572083 1692 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 12 17:44:21.061316 systemd[1]: run-containerd-runc-k8s.io-d54fd04d887aeda9c0525071e47c2dcde022de21ec0121d2cd66b5b686559343-runc.0qfEHx.mount: Deactivated successfully. Sep 12 17:44:23.430401 containerd[1710]: time="2025-09-12T17:44:23.430357398Z" level=info msg="StopPodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\"" Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.469 [WARNING][6193] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0", GenerateName:"calico-apiserver-77d7968c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"eacf3852-e8b2-4417-862e-486f289649f2", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d7968c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4", Pod:"calico-apiserver-77d7968c8d-4l6tx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2d7971697e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.469 [INFO][6193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.469 [INFO][6193] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" iface="eth0" netns="" Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.469 [INFO][6193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.469 [INFO][6193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.489 [INFO][6200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.489 [INFO][6200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.489 [INFO][6200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.501 [WARNING][6200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.501 [INFO][6200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.504 [INFO][6200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:23.508011 containerd[1710]: 2025-09-12 17:44:23.506 [INFO][6193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:44:23.508445 containerd[1710]: time="2025-09-12T17:44:23.508056750Z" level=info msg="TearDown network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\" successfully" Sep 12 17:44:23.508445 containerd[1710]: time="2025-09-12T17:44:23.508083910Z" level=info msg="StopPodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\" returns successfully" Sep 12 17:44:23.508942 containerd[1710]: time="2025-09-12T17:44:23.508911592Z" level=info msg="RemovePodSandbox for \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\"" Sep 12 17:44:23.509007 containerd[1710]: time="2025-09-12T17:44:23.508949352Z" level=info msg="Forcibly stopping sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\"" Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.545 [WARNING][6215] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0", GenerateName:"calico-apiserver-77d7968c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"eacf3852-e8b2-4417-862e-486f289649f2", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d7968c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e043943ed2078640213c2899bfb4797eff7657c53d39b3f67223af420258bd4", Pod:"calico-apiserver-77d7968c8d-4l6tx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2d7971697e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.546 [INFO][6215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.546 [INFO][6215] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" iface="eth0" netns="" Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.546 [INFO][6215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.546 [INFO][6215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.571 [INFO][6222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.571 [INFO][6222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.571 [INFO][6222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.579 [WARNING][6222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.579 [INFO][6222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" HandleID="k8s-pod-network.7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--4l6tx-eth0" Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.581 [INFO][6222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:23.584091 containerd[1710]: 2025-09-12 17:44:23.582 [INFO][6215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174" Sep 12 17:44:23.584515 containerd[1710]: time="2025-09-12T17:44:23.584145816Z" level=info msg="TearDown network for sandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\" successfully" Sep 12 17:44:23.593245 containerd[1710]: time="2025-09-12T17:44:23.592890563Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:44:23.593245 containerd[1710]: time="2025-09-12T17:44:23.592988323Z" level=info msg="RemovePodSandbox \"7eb4020e00307aa41e4e3a15a7c0709087298a72d3de5084f1316aea23488174\" returns successfully" Sep 12 17:44:23.593992 containerd[1710]: time="2025-09-12T17:44:23.593526084Z" level=info msg="StopPodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\"" Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.630 [WARNING][6237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0", GenerateName:"calico-apiserver-77d7968c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"485bd820-ef8d-438d-9780-7a5190802f4b", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d7968c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb", Pod:"calico-apiserver-77d7968c8d-pt2kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5606342071b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.631 [INFO][6237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.631 [INFO][6237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" iface="eth0" netns="" Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.631 [INFO][6237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.631 [INFO][6237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.651 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.651 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.651 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.661 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.661 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.662 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:23.665915 containerd[1710]: 2025-09-12 17:44:23.664 [INFO][6237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:44:23.665915 containerd[1710]: time="2025-09-12T17:44:23.665762300Z" level=info msg="TearDown network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\" successfully" Sep 12 17:44:23.665915 containerd[1710]: time="2025-09-12T17:44:23.665789260Z" level=info msg="StopPodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\" returns successfully" Sep 12 17:44:23.666775 containerd[1710]: time="2025-09-12T17:44:23.666743183Z" level=info msg="RemovePodSandbox for \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\"" Sep 12 17:44:23.666864 containerd[1710]: time="2025-09-12T17:44:23.666779823Z" level=info msg="Forcibly stopping sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\"" Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.700 [WARNING][6258] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0", GenerateName:"calico-apiserver-77d7968c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"485bd820-ef8d-438d-9780-7a5190802f4b", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d7968c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e3126afb5e8fd0f623821dc89e7bad8caaa578945c9ea4e8d5255a239078edb", Pod:"calico-apiserver-77d7968c8d-pt2kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5606342071b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.701 [INFO][6258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.701 [INFO][6258] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" iface="eth0" netns="" Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.701 [INFO][6258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.701 [INFO][6258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.719 [INFO][6265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.719 [INFO][6265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.719 [INFO][6265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.728 [WARNING][6265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.728 [INFO][6265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" HandleID="k8s-pod-network.590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--apiserver--77d7968c8d--pt2kx-eth0" Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.729 [INFO][6265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:23.732504 containerd[1710]: 2025-09-12 17:44:23.731 [INFO][6258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d" Sep 12 17:44:23.732961 containerd[1710]: time="2025-09-12T17:44:23.732555219Z" level=info msg="TearDown network for sandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\" successfully" Sep 12 17:44:23.739892 containerd[1710]: time="2025-09-12T17:44:23.739836801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:44:23.740067 containerd[1710]: time="2025-09-12T17:44:23.739918841Z" level=info msg="RemovePodSandbox \"590bf42b7c72ffe4439636482a47564da68476fb70799b4fd5bc687c83d7422d\" returns successfully" Sep 12 17:44:23.740585 containerd[1710]: time="2025-09-12T17:44:23.740360562Z" level=info msg="StopPodSandbox for \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\"" Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.780 [WARNING][6279] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34236242-7d33-47ad-b860-36c4a84495b0", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f", Pod:"coredns-7c65d6cfc9-z8zzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5158b762c2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.780 [INFO][6279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.780 [INFO][6279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" iface="eth0" netns="" Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.780 [INFO][6279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.780 [INFO][6279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.798 [INFO][6287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.798 [INFO][6287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.798 [INFO][6287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.808 [WARNING][6287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.808 [INFO][6287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.810 [INFO][6287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:23.813816 containerd[1710]: 2025-09-12 17:44:23.811 [INFO][6279] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:44:23.813816 containerd[1710]: time="2025-09-12T17:44:23.813517220Z" level=info msg="TearDown network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\" successfully" Sep 12 17:44:23.813816 containerd[1710]: time="2025-09-12T17:44:23.813542380Z" level=info msg="StopPodSandbox for \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\" returns successfully" Sep 12 17:44:23.814324 containerd[1710]: time="2025-09-12T17:44:23.814149902Z" level=info msg="RemovePodSandbox for \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\"" Sep 12 17:44:23.814324 containerd[1710]: time="2025-09-12T17:44:23.814177742Z" level=info msg="Forcibly stopping sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\"" Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.866 [WARNING][6301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34236242-7d33-47ad-b860-36c4a84495b0", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"73dac1189271e1be755dfa80f43d1ee5826924e90819a36fc49d8dd183a2ab2f", Pod:"coredns-7c65d6cfc9-z8zzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5158b762c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:23.904022 containerd[1710]: 
2025-09-12 17:44:23.866 [INFO][6301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.866 [INFO][6301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" iface="eth0" netns="" Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.866 [INFO][6301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.866 [INFO][6301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.887 [INFO][6309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.887 [INFO][6309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.887 [INFO][6309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.898 [WARNING][6309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.898 [INFO][6309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" HandleID="k8s-pod-network.cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--z8zzj-eth0" Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.899 [INFO][6309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:23.904022 containerd[1710]: 2025-09-12 17:44:23.901 [INFO][6301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540" Sep 12 17:44:23.904424 containerd[1710]: time="2025-09-12T17:44:23.904081090Z" level=info msg="TearDown network for sandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\" successfully" Sep 12 17:44:23.912822 containerd[1710]: time="2025-09-12T17:44:23.912327715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:44:23.912822 containerd[1710]: time="2025-09-12T17:44:23.912448635Z" level=info msg="RemovePodSandbox \"cd6abb0062bd7ba1073963354d32df7359610744cb8d8c6a6297744ffa374540\" returns successfully" Sep 12 17:44:23.912977 containerd[1710]: time="2025-09-12T17:44:23.912939157Z" level=info msg="StopPodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\"" Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.952 [WARNING][6323] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0", GenerateName:"calico-kube-controllers-84db5c8b4c-", Namespace:"calico-system", SelfLink:"", UID:"ba349ebc-a27b-4172-bfad-7c73a43c2008", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84db5c8b4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563", Pod:"calico-kube-controllers-84db5c8b4c-5nsqv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia31df1c47c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.952 [INFO][6323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.952 [INFO][6323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" iface="eth0" netns="" Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.952 [INFO][6323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.952 [INFO][6323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.976 [INFO][6330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.976 [INFO][6330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.976 [INFO][6330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.985 [WARNING][6330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.985 [INFO][6330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.986 [INFO][6330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:23.991362 containerd[1710]: 2025-09-12 17:44:23.988 [INFO][6323] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:44:23.991362 containerd[1710]: time="2025-09-12T17:44:23.991348791Z" level=info msg="TearDown network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\" successfully" Sep 12 17:44:23.991757 containerd[1710]: time="2025-09-12T17:44:23.991373551Z" level=info msg="StopPodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\" returns successfully" Sep 12 17:44:23.992860 containerd[1710]: time="2025-09-12T17:44:23.991928592Z" level=info msg="RemovePodSandbox for \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\"" Sep 12 17:44:23.992860 containerd[1710]: time="2025-09-12T17:44:23.991963793Z" level=info msg="Forcibly stopping sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\"" Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.035 [WARNING][6344] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0", GenerateName:"calico-kube-controllers-84db5c8b4c-", Namespace:"calico-system", SelfLink:"", UID:"ba349ebc-a27b-4172-bfad-7c73a43c2008", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84db5c8b4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"4ea5157740c0102f14474c8206550039edac8cd023495bc5e892e0911d0cf563", Pod:"calico-kube-controllers-84db5c8b4c-5nsqv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia31df1c47c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.036 [INFO][6344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.036 [INFO][6344] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" iface="eth0" netns="" Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.036 [INFO][6344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.036 [INFO][6344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.064 [INFO][6351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.064 [INFO][6351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.065 [INFO][6351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.077 [WARNING][6351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.077 [INFO][6351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" HandleID="k8s-pod-network.43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Workload="ci--4081.3.6--a--d7d9773d19-k8s-calico--kube--controllers--84db5c8b4c--5nsqv-eth0" Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.079 [INFO][6351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:24.082977 containerd[1710]: 2025-09-12 17:44:24.080 [INFO][6344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce" Sep 12 17:44:24.083393 containerd[1710]: time="2025-09-12T17:44:24.083023464Z" level=info msg="TearDown network for sandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\" successfully" Sep 12 17:44:24.094635 containerd[1710]: time="2025-09-12T17:44:24.094559178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:44:24.094804 containerd[1710]: time="2025-09-12T17:44:24.094644099Z" level=info msg="RemovePodSandbox \"43624bbda2c99a2613364de29f20c8d71c1e06d9f460d518f6356f8e617a55ce\" returns successfully" Sep 12 17:44:24.095484 containerd[1710]: time="2025-09-12T17:44:24.095058500Z" level=info msg="StopPodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\"" Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.148 [WARNING][6365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12", Pod:"coredns-7c65d6cfc9-8zkgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e928688eed", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.148 [INFO][6365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.148 [INFO][6365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" iface="eth0" netns="" Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.148 [INFO][6365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.148 [INFO][6365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.175 [INFO][6373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.176 [INFO][6373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.176 [INFO][6373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.184 [WARNING][6373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.184 [INFO][6373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.186 [INFO][6373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:24.190311 containerd[1710]: 2025-09-12 17:44:24.187 [INFO][6365] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:44:24.190905 containerd[1710]: time="2025-09-12T17:44:24.190742905Z" level=info msg="TearDown network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\" successfully" Sep 12 17:44:24.190905 containerd[1710]: time="2025-09-12T17:44:24.190776465Z" level=info msg="StopPodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\" returns successfully" Sep 12 17:44:24.192242 containerd[1710]: time="2025-09-12T17:44:24.192182110Z" level=info msg="RemovePodSandbox for \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\"" Sep 12 17:44:24.192242 containerd[1710]: time="2025-09-12T17:44:24.192215230Z" level=info msg="Forcibly stopping sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\"" Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.230 [WARNING][6387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34b1f8d5-6c1c-4f8f-9138-d677d0b599a6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"81a529ab21804d4c2d8940cfa1fcc480fa1d78de73ac98845db1cb1e6ad27b12", Pod:"coredns-7c65d6cfc9-8zkgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e928688eed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:24.275826 containerd[1710]: 
2025-09-12 17:44:24.231 [INFO][6387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.231 [INFO][6387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" iface="eth0" netns="" Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.231 [INFO][6387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.231 [INFO][6387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.251 [INFO][6395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.251 [INFO][6395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.252 [INFO][6395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.269 [WARNING][6395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.269 [INFO][6395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" HandleID="k8s-pod-network.090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Workload="ci--4081.3.6--a--d7d9773d19-k8s-coredns--7c65d6cfc9--8zkgx-eth0" Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.270 [INFO][6395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:24.275826 containerd[1710]: 2025-09-12 17:44:24.271 [INFO][6387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db" Sep 12 17:44:24.276225 containerd[1710]: time="2025-09-12T17:44:24.275985480Z" level=info msg="TearDown network for sandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\" successfully" Sep 12 17:44:24.304883 containerd[1710]: time="2025-09-12T17:44:24.304717565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:44:24.304883 containerd[1710]: time="2025-09-12T17:44:24.304820445Z" level=info msg="RemovePodSandbox \"090d81d7288e974c51d24dc1110b803ed2899ea08d388642880e4c2aef75e9db\" returns successfully" Sep 12 17:44:24.305916 containerd[1710]: time="2025-09-12T17:44:24.305348647Z" level=info msg="StopPodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\"" Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.370 [WARNING][6409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b2903050-6e84-45bd-9086-8adaeb871d43", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432", Pod:"goldmane-7988f88666-fggbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali62f6ab35743", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.370 [INFO][6409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.370 [INFO][6409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" iface="eth0" netns="" Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.370 [INFO][6409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.370 [INFO][6409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.389 [INFO][6416] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.389 [INFO][6416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.389 [INFO][6416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.398 [WARNING][6416] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.399 [INFO][6416] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.400 [INFO][6416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:24.404185 containerd[1710]: 2025-09-12 17:44:24.402 [INFO][6409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:44:24.404585 containerd[1710]: time="2025-09-12T17:44:24.404236262Z" level=info msg="TearDown network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\" successfully" Sep 12 17:44:24.404585 containerd[1710]: time="2025-09-12T17:44:24.404261822Z" level=info msg="StopPodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\" returns successfully" Sep 12 17:44:24.404917 containerd[1710]: time="2025-09-12T17:44:24.404890504Z" level=info msg="RemovePodSandbox for \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\"" Sep 12 17:44:24.404971 containerd[1710]: time="2025-09-12T17:44:24.404924464Z" level=info msg="Forcibly stopping sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\"" Sep 12 17:44:24.463630 update_engine[1692]: I20250912 17:44:24.462939 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:44:24.463630 update_engine[1692]: I20250912 17:44:24.463140 1692 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:44:24.463630 update_engine[1692]: I20250912 17:44:24.463374 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.442 [WARNING][6430] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b2903050-6e84-45bd-9086-8adaeb871d43", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"d51a68bc598d499473fa0a7757425d50d0d261fd08aa25aca2e4831ef54ca432", Pod:"goldmane-7988f88666-fggbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62f6ab35743", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:24.478399 
containerd[1710]: 2025-09-12 17:44:24.442 [INFO][6430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.442 [INFO][6430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" iface="eth0" netns="" Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.442 [INFO][6430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.442 [INFO][6430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.462 [INFO][6437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.463 [INFO][6437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.463 [INFO][6437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.472 [WARNING][6437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.472 [INFO][6437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" HandleID="k8s-pod-network.04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Workload="ci--4081.3.6--a--d7d9773d19-k8s-goldmane--7988f88666--fggbj-eth0" Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.473 [INFO][6437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:24.478399 containerd[1710]: 2025-09-12 17:44:24.475 [INFO][6430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba" Sep 12 17:44:24.479102 containerd[1710]: time="2025-09-12T17:44:24.478493003Z" level=info msg="TearDown network for sandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\" successfully" Sep 12 17:44:24.487734 containerd[1710]: time="2025-09-12T17:44:24.487561910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:44:24.487734 containerd[1710]: time="2025-09-12T17:44:24.487643631Z" level=info msg="RemovePodSandbox \"04712ef3a2d2c86cc8c04cda027e7104680fc2131664548477d53ee9c6f951ba\" returns successfully" Sep 12 17:44:24.488192 containerd[1710]: time="2025-09-12T17:44:24.488164032Z" level=info msg="StopPodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\"" Sep 12 17:44:24.568085 update_engine[1692]: E20250912 17:44:24.567900 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:44:24.568085 update_engine[1692]: I20250912 17:44:24.567986 1692 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.523 [WARNING][6451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"93bea3a5-ac3c-4316-b474-aae40e5f08e7", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9", Pod:"csi-node-driver-dp2wf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3434b6d2853", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.524 [INFO][6451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.524 [INFO][6451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" iface="eth0" netns="" Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.524 [INFO][6451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.524 [INFO][6451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.553 [INFO][6459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.553 [INFO][6459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.553 [INFO][6459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.573 [WARNING][6459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.573 [INFO][6459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.575 [INFO][6459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:24.579531 containerd[1710]: 2025-09-12 17:44:24.578 [INFO][6451] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:44:24.580749 containerd[1710]: time="2025-09-12T17:44:24.579865106Z" level=info msg="TearDown network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\" successfully" Sep 12 17:44:24.580749 containerd[1710]: time="2025-09-12T17:44:24.579895546Z" level=info msg="StopPodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\" returns successfully" Sep 12 17:44:24.580749 containerd[1710]: time="2025-09-12T17:44:24.580349307Z" level=info msg="RemovePodSandbox for \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\"" Sep 12 17:44:24.580749 containerd[1710]: time="2025-09-12T17:44:24.580378707Z" level=info msg="Forcibly stopping sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\"" Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.648 [WARNING][6473] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"93bea3a5-ac3c-4316-b474-aae40e5f08e7", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-d7d9773d19", ContainerID:"2e4488b21967bb49f374767e6b89906a7d093cbb4139becb94a3551cf2db9dc9", Pod:"csi-node-driver-dp2wf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3434b6d2853", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.649 [INFO][6473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.649 [INFO][6473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" iface="eth0" netns="" Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.649 [INFO][6473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.649 [INFO][6473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.685 [INFO][6480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.685 [INFO][6480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.685 [INFO][6480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.699 [WARNING][6480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.699 [INFO][6480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" HandleID="k8s-pod-network.80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Workload="ci--4081.3.6--a--d7d9773d19-k8s-csi--node--driver--dp2wf-eth0" Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.701 [INFO][6480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:24.705708 containerd[1710]: 2025-09-12 17:44:24.704 [INFO][6473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0" Sep 12 17:44:24.707830 containerd[1710]: time="2025-09-12T17:44:24.706328843Z" level=info msg="TearDown network for sandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\" successfully" Sep 12 17:44:24.716157 containerd[1710]: time="2025-09-12T17:44:24.716115032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:44:24.716356 containerd[1710]: time="2025-09-12T17:44:24.716339033Z" level=info msg="RemovePodSandbox \"80643f0741642ca43dac8477a4f2f1198059a655d085bc0b2fef769facd339c0\" returns successfully" Sep 12 17:44:34.462303 update_engine[1692]: I20250912 17:44:34.462213 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:44:34.463231 update_engine[1692]: I20250912 17:44:34.462953 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:44:34.463231 update_engine[1692]: I20250912 17:44:34.463186 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:44:34.566842 update_engine[1692]: E20250912 17:44:34.566263 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566392 1692 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566407 1692 omaha_request_action.cc:617] Omaha request response: Sep 12 17:44:34.566842 update_engine[1692]: E20250912 17:44:34.566501 1692 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566519 1692 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566525 1692 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566529 1692 update_attempter.cc:306] Processing Done. Sep 12 17:44:34.566842 update_engine[1692]: E20250912 17:44:34.566545 1692 update_attempter.cc:619] Update failed. 
Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566552 1692 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566558 1692 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566563 1692 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566632 1692 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566653 1692 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:44:34.566842 update_engine[1692]: I20250912 17:44:34.566659 1692 omaha_request_action.cc:272] Request: Sep 12 17:44:34.566842 update_engine[1692]: Sep 12 17:44:34.566842 update_engine[1692]: Sep 12 17:44:34.567305 update_engine[1692]: Sep 12 17:44:34.567305 update_engine[1692]: Sep 12 17:44:34.567305 update_engine[1692]: Sep 12 17:44:34.567305 update_engine[1692]: Sep 12 17:44:34.567305 update_engine[1692]: I20250912 17:44:34.566664 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:44:34.567305 update_engine[1692]: I20250912 17:44:34.566854 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:44:34.567305 update_engine[1692]: I20250912 17:44:34.567051 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 12 17:44:34.567437 locksmithd[1774]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 12 17:44:34.668610 update_engine[1692]: E20250912 17:44:34.668548 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:44:34.668746 update_engine[1692]: I20250912 17:44:34.668638 1692 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 17:44:34.668746 update_engine[1692]: I20250912 17:44:34.668648 1692 omaha_request_action.cc:617] Omaha request response: Sep 12 17:44:34.668746 update_engine[1692]: I20250912 17:44:34.668654 1692 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:44:34.668746 update_engine[1692]: I20250912 17:44:34.668659 1692 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:44:34.668746 update_engine[1692]: I20250912 17:44:34.668664 1692 update_attempter.cc:306] Processing Done. Sep 12 17:44:34.668746 update_engine[1692]: I20250912 17:44:34.668673 1692 update_attempter.cc:310] Error event sent. Sep 12 17:44:34.668746 update_engine[1692]: I20250912 17:44:34.668684 1692 update_check_scheduler.cc:74] Next update check in 49m53s Sep 12 17:44:34.669046 locksmithd[1774]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 12 17:44:40.939290 systemd[1]: Started sshd@7-10.200.20.41:22-10.200.16.10:47516.service - OpenSSH per-connection server daemon (10.200.16.10:47516). Sep 12 17:44:41.409262 sshd[6534]: Accepted publickey for core from 10.200.16.10 port 47516 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:44:41.411478 sshd[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:41.418033 systemd-logind[1691]: New session 10 of user core. 
Sep 12 17:44:41.422961 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:44:41.844655 sshd[6534]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:41.849708 systemd-logind[1691]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:44:41.849952 systemd[1]: sshd@7-10.200.20.41:22-10.200.16.10:47516.service: Deactivated successfully.
Sep 12 17:44:41.851681 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:44:41.852538 systemd-logind[1691]: Removed session 10.
Sep 12 17:44:46.933013 systemd[1]: Started sshd@8-10.200.20.41:22-10.200.16.10:47524.service - OpenSSH per-connection server daemon (10.200.16.10:47524).
Sep 12 17:44:47.403239 sshd[6574]: Accepted publickey for core from 10.200.16.10 port 47524 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:44:47.404850 sshd[6574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:47.411936 systemd-logind[1691]: New session 11 of user core.
Sep 12 17:44:47.417142 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:44:47.835138 sshd[6574]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:47.841201 systemd[1]: sshd@8-10.200.20.41:22-10.200.16.10:47524.service: Deactivated successfully.
Sep 12 17:44:47.843747 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:44:47.847147 systemd-logind[1691]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:44:47.851152 systemd-logind[1691]: Removed session 11.
Sep 12 17:44:51.056177 systemd[1]: run-containerd-runc-k8s.io-d54fd04d887aeda9c0525071e47c2dcde022de21ec0121d2cd66b5b686559343-runc.oW3xck.mount: Deactivated successfully.
Sep 12 17:44:52.923159 systemd[1]: Started sshd@9-10.200.20.41:22-10.200.16.10:33564.service - OpenSSH per-connection server daemon (10.200.16.10:33564).
Sep 12 17:44:53.383527 sshd[6610]: Accepted publickey for core from 10.200.16.10 port 33564 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:44:53.385281 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:53.390153 systemd-logind[1691]: New session 12 of user core.
Sep 12 17:44:53.398022 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:44:53.799368 sshd[6610]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:53.802531 systemd-logind[1691]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:44:53.802772 systemd[1]: sshd@9-10.200.20.41:22-10.200.16.10:33564.service: Deactivated successfully.
Sep 12 17:44:53.805039 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:44:53.807671 systemd-logind[1691]: Removed session 12.
Sep 12 17:44:53.883928 systemd[1]: Started sshd@10-10.200.20.41:22-10.200.16.10:33576.service - OpenSSH per-connection server daemon (10.200.16.10:33576).
Sep 12 17:44:54.333575 sshd[6624]: Accepted publickey for core from 10.200.16.10 port 33576 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:44:54.335026 sshd[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:54.339298 systemd-logind[1691]: New session 13 of user core.
Sep 12 17:44:54.345982 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:44:54.768545 sshd[6624]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:54.772684 systemd[1]: sshd@10-10.200.20.41:22-10.200.16.10:33576.service: Deactivated successfully.
Sep 12 17:44:54.774607 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:44:54.776180 systemd-logind[1691]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:44:54.777103 systemd-logind[1691]: Removed session 13.
Sep 12 17:44:54.846196 systemd[1]: Started sshd@11-10.200.20.41:22-10.200.16.10:33590.service - OpenSSH per-connection server daemon (10.200.16.10:33590).
Sep 12 17:44:55.268621 sshd[6635]: Accepted publickey for core from 10.200.16.10 port 33590 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:44:55.270060 sshd[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:55.274506 systemd-logind[1691]: New session 14 of user core.
Sep 12 17:44:55.279962 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:44:55.661422 sshd[6635]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:55.665504 systemd[1]: sshd@11-10.200.20.41:22-10.200.16.10:33590.service: Deactivated successfully.
Sep 12 17:44:55.667872 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:44:55.670419 systemd-logind[1691]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:44:55.672007 systemd-logind[1691]: Removed session 14.
Sep 12 17:45:00.749167 systemd[1]: Started sshd@12-10.200.20.41:22-10.200.16.10:49846.service - OpenSSH per-connection server daemon (10.200.16.10:49846).
Sep 12 17:45:01.205739 sshd[6720]: Accepted publickey for core from 10.200.16.10 port 49846 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:01.207168 sshd[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:01.211720 systemd-logind[1691]: New session 15 of user core.
Sep 12 17:45:01.214961 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:45:01.596133 sshd[6720]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:01.599370 systemd[1]: sshd@12-10.200.20.41:22-10.200.16.10:49846.service: Deactivated successfully.
Sep 12 17:45:01.602285 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:45:01.603692 systemd-logind[1691]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:45:01.604636 systemd-logind[1691]: Removed session 15.
Sep 12 17:45:06.682776 systemd[1]: Started sshd@13-10.200.20.41:22-10.200.16.10:49858.service - OpenSSH per-connection server daemon (10.200.16.10:49858).
Sep 12 17:45:07.131835 sshd[6747]: Accepted publickey for core from 10.200.16.10 port 49858 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:07.133289 sshd[6747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:07.137425 systemd-logind[1691]: New session 16 of user core.
Sep 12 17:45:07.143939 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:45:07.525940 sshd[6747]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:07.529529 systemd[1]: sshd@13-10.200.20.41:22-10.200.16.10:49858.service: Deactivated successfully.
Sep 12 17:45:07.531358 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:45:07.532127 systemd-logind[1691]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:45:07.533045 systemd-logind[1691]: Removed session 16.
Sep 12 17:45:12.634110 systemd[1]: Started sshd@14-10.200.20.41:22-10.200.16.10:34094.service - OpenSSH per-connection server daemon (10.200.16.10:34094).
Sep 12 17:45:13.091101 sshd[6764]: Accepted publickey for core from 10.200.16.10 port 34094 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:13.092594 sshd[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:13.096698 systemd-logind[1691]: New session 17 of user core.
Sep 12 17:45:13.100961 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:45:13.484836 sshd[6764]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:13.491971 systemd[1]: sshd@14-10.200.20.41:22-10.200.16.10:34094.service: Deactivated successfully.
Sep 12 17:45:13.496566 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:45:13.499556 systemd-logind[1691]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:45:13.501190 systemd-logind[1691]: Removed session 17.
Sep 12 17:45:18.571180 systemd[1]: Started sshd@15-10.200.20.41:22-10.200.16.10:34104.service - OpenSSH per-connection server daemon (10.200.16.10:34104).
Sep 12 17:45:18.995882 sshd[6776]: Accepted publickey for core from 10.200.16.10 port 34104 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:18.997273 sshd[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:19.002767 systemd-logind[1691]: New session 18 of user core.
Sep 12 17:45:19.008963 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:45:19.401056 sshd[6776]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:19.406089 systemd[1]: sshd@15-10.200.20.41:22-10.200.16.10:34104.service: Deactivated successfully.
Sep 12 17:45:19.409526 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:45:19.410607 systemd-logind[1691]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:45:19.412087 systemd-logind[1691]: Removed session 18.
Sep 12 17:45:24.494031 systemd[1]: Started sshd@16-10.200.20.41:22-10.200.16.10:41814.service - OpenSSH per-connection server daemon (10.200.16.10:41814).
Sep 12 17:45:24.937011 sshd[6812]: Accepted publickey for core from 10.200.16.10 port 41814 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:24.938475 sshd[6812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:24.942998 systemd-logind[1691]: New session 19 of user core.
Sep 12 17:45:24.948981 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:45:25.332237 sshd[6812]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:25.335967 systemd[1]: sshd@16-10.200.20.41:22-10.200.16.10:41814.service: Deactivated successfully.
Sep 12 17:45:25.338082 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:45:25.339008 systemd-logind[1691]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:45:25.340646 systemd-logind[1691]: Removed session 19.
Sep 12 17:45:30.419635 systemd[1]: Started sshd@17-10.200.20.41:22-10.200.16.10:38350.service - OpenSSH per-connection server daemon (10.200.16.10:38350).
Sep 12 17:45:30.915231 sshd[6869]: Accepted publickey for core from 10.200.16.10 port 38350 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:30.916716 sshd[6869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:30.921151 systemd-logind[1691]: New session 20 of user core.
Sep 12 17:45:30.922968 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:45:31.338931 sshd[6869]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:31.343030 systemd[1]: sshd@17-10.200.20.41:22-10.200.16.10:38350.service: Deactivated successfully.
Sep 12 17:45:31.344930 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:45:31.345760 systemd-logind[1691]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:45:31.346828 systemd-logind[1691]: Removed session 20.
Sep 12 17:45:31.426774 systemd[1]: Started sshd@18-10.200.20.41:22-10.200.16.10:38356.service - OpenSSH per-connection server daemon (10.200.16.10:38356).
Sep 12 17:45:31.916772 sshd[6882]: Accepted publickey for core from 10.200.16.10 port 38356 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:31.918248 sshd[6882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:31.922095 systemd-logind[1691]: New session 21 of user core.
Sep 12 17:45:31.927972 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:45:32.479445 sshd[6882]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:32.483488 systemd[1]: sshd@18-10.200.20.41:22-10.200.16.10:38356.service: Deactivated successfully.
Sep 12 17:45:32.485741 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:45:32.486502 systemd-logind[1691]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:45:32.487673 systemd-logind[1691]: Removed session 21.
Sep 12 17:45:32.575113 systemd[1]: Started sshd@19-10.200.20.41:22-10.200.16.10:38366.service - OpenSSH per-connection server daemon (10.200.16.10:38366).
Sep 12 17:45:33.064468 sshd[6893]: Accepted publickey for core from 10.200.16.10 port 38366 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:33.066053 sshd[6893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:33.070371 systemd-logind[1691]: New session 22 of user core.
Sep 12 17:45:33.076961 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:45:35.136874 sshd[6893]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:35.144317 systemd-logind[1691]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:45:35.144776 systemd[1]: sshd@19-10.200.20.41:22-10.200.16.10:38366.service: Deactivated successfully.
Sep 12 17:45:35.149201 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:45:35.151326 systemd-logind[1691]: Removed session 22.
Sep 12 17:45:35.215840 systemd[1]: Started sshd@20-10.200.20.41:22-10.200.16.10:38380.service - OpenSSH per-connection server daemon (10.200.16.10:38380).
Sep 12 17:45:35.663831 sshd[6917]: Accepted publickey for core from 10.200.16.10 port 38380 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:35.665264 sshd[6917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:35.669640 systemd-logind[1691]: New session 23 of user core.
Sep 12 17:45:35.675026 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:45:36.176143 sshd[6917]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:36.179767 systemd-logind[1691]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:45:36.179985 systemd[1]: sshd@20-10.200.20.41:22-10.200.16.10:38380.service: Deactivated successfully.
Sep 12 17:45:36.182119 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:45:36.184867 systemd-logind[1691]: Removed session 23.
Sep 12 17:45:36.268049 systemd[1]: Started sshd@21-10.200.20.41:22-10.200.16.10:38388.service - OpenSSH per-connection server daemon (10.200.16.10:38388).
Sep 12 17:45:36.754251 sshd[6928]: Accepted publickey for core from 10.200.16.10 port 38388 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:36.755769 sshd[6928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:36.760013 systemd-logind[1691]: New session 24 of user core.
Sep 12 17:45:36.764960 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:45:37.165362 sshd[6928]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:37.170038 systemd-logind[1691]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:45:37.170786 systemd[1]: sshd@21-10.200.20.41:22-10.200.16.10:38388.service: Deactivated successfully.
Sep 12 17:45:37.172767 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:45:37.173723 systemd-logind[1691]: Removed session 24.
Sep 12 17:45:42.258125 systemd[1]: Started sshd@22-10.200.20.41:22-10.200.16.10:37784.service - OpenSSH per-connection server daemon (10.200.16.10:37784).
Sep 12 17:45:42.753440 sshd[6944]: Accepted publickey for core from 10.200.16.10 port 37784 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:42.756460 sshd[6944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:42.763501 systemd-logind[1691]: New session 25 of user core.
Sep 12 17:45:42.768143 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:45:43.194138 sshd[6944]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:43.200835 systemd[1]: sshd@22-10.200.20.41:22-10.200.16.10:37784.service: Deactivated successfully.
Sep 12 17:45:43.200886 systemd-logind[1691]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:45:43.203185 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:45:43.204344 systemd-logind[1691]: Removed session 25.
Sep 12 17:45:44.203247 systemd[1]: run-containerd-runc-k8s.io-cc053c66acb9dab112239205d2c64c4499e1d0dc8e770693baf3b610977a1538-runc.Xha1pc.mount: Deactivated successfully.
Sep 12 17:45:48.289196 systemd[1]: Started sshd@23-10.200.20.41:22-10.200.16.10:37786.service - OpenSSH per-connection server daemon (10.200.16.10:37786).
Sep 12 17:45:48.780843 sshd[6975]: Accepted publickey for core from 10.200.16.10 port 37786 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:48.784451 sshd[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:48.788677 systemd-logind[1691]: New session 26 of user core.
Sep 12 17:45:48.794062 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:45:49.207716 sshd[6975]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:49.212471 systemd-logind[1691]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:45:49.215183 systemd[1]: sshd@23-10.200.20.41:22-10.200.16.10:37786.service: Deactivated successfully.
Sep 12 17:45:49.218356 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:45:49.220452 systemd-logind[1691]: Removed session 26.
Sep 12 17:45:54.300151 systemd[1]: Started sshd@24-10.200.20.41:22-10.200.16.10:56462.service - OpenSSH per-connection server daemon (10.200.16.10:56462).
Sep 12 17:45:54.784612 sshd[7009]: Accepted publickey for core from 10.200.16.10 port 56462 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:45:54.786069 sshd[7009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:54.791173 systemd-logind[1691]: New session 27 of user core.
Sep 12 17:45:54.795957 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 17:45:55.210289 sshd[7009]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:55.214436 systemd-logind[1691]: Session 27 logged out. Waiting for processes to exit.
Sep 12 17:45:55.216038 systemd[1]: sshd@24-10.200.20.41:22-10.200.16.10:56462.service: Deactivated successfully.
Sep 12 17:45:55.218693 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 17:45:55.219933 systemd-logind[1691]: Removed session 27.
Sep 12 17:46:00.300512 systemd[1]: Started sshd@25-10.200.20.41:22-10.200.16.10:45390.service - OpenSSH per-connection server daemon (10.200.16.10:45390).
Sep 12 17:46:00.795748 sshd[7083]: Accepted publickey for core from 10.200.16.10 port 45390 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:46:00.797363 sshd[7083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:46:00.802900 systemd-logind[1691]: New session 28 of user core.
Sep 12 17:46:00.806203 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 17:46:01.205998 sshd[7083]: pam_unix(sshd:session): session closed for user core
Sep 12 17:46:01.210278 systemd[1]: sshd@25-10.200.20.41:22-10.200.16.10:45390.service: Deactivated successfully.
Sep 12 17:46:01.213395 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 17:46:01.214960 systemd-logind[1691]: Session 28 logged out. Waiting for processes to exit.
Sep 12 17:46:01.217375 systemd-logind[1691]: Removed session 28.
Sep 12 17:46:06.292188 systemd[1]: Started sshd@26-10.200.20.41:22-10.200.16.10:45396.service - OpenSSH per-connection server daemon (10.200.16.10:45396).
Sep 12 17:46:06.737965 sshd[7104]: Accepted publickey for core from 10.200.16.10 port 45396 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:46:06.739469 sshd[7104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:46:06.744023 systemd-logind[1691]: New session 29 of user core.
Sep 12 17:46:06.745995 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 12 17:46:07.126692 sshd[7104]: pam_unix(sshd:session): session closed for user core
Sep 12 17:46:07.129829 systemd[1]: sshd@26-10.200.20.41:22-10.200.16.10:45396.service: Deactivated successfully.
Sep 12 17:46:07.132125 systemd[1]: session-29.scope: Deactivated successfully.
Sep 12 17:46:07.134703 systemd-logind[1691]: Session 29 logged out. Waiting for processes to exit.
Sep 12 17:46:07.136146 systemd-logind[1691]: Removed session 29.